Food preservation involves preventing the growth of bacteria, fungi (such as yeasts), or other micro-organisms (although some methods work by introducing benign bacteria or fungi to the food), as well as retarding the oxidation of fats that cause rancidity. Food preservation may also include processes that inhibit visual deterioration, such as the enzymatic browning reaction in apples after they are cut during food preparation. Many processes designed to preserve food will involve a number of food preservation methods. Preserving fruit by turning it into jam, for example, involves boiling (to reduce the fruit's moisture content and to kill bacteria, etc.), sugaring (to prevent their re-growth) and sealing within an airtight jar (to prevent recontamination). Some traditional methods of preserving food have been shown to have a lower energy input and carbon footprint, when compared to modern methods. However, some methods of food preservation are known to create carcinogens, and in 2015, the International Agency for Research on Cancer of the World Health Organization classified processed meat, i.e. meat that has undergone salting, curing, fermenting, and smoking, as "carcinogenic to humans". Maintaining or creating nutritional value, texture and flavor is an important aspect of food preservation, although, historically, some methods drastically altered the character of the food being preserved. In many cases these changes have come to be seen as desirable qualities – cheese, yogurt and pickled onions being common examples.

Drying is one of the oldest techniques used to hamper the decomposition of food products. As early as 12,000 B.C., Middle Eastern and Oriental cultures were drying foods using the power of the sun. Vegetables and fruit are naturally dried by the sun and wind, but "still houses" were built in areas that did not have enough sunlight to dry things. A fire would be built inside the building to provide the heat to dry the various fruits, vegetables, and herbs.

Cooling preserves foods by slowing down the growth and reproduction of micro-organisms and the action of enzymes that cause food to rot. The introduction of commercial and domestic refrigerators drastically improved the diets of many in the Western world by allowing foods such as fresh fruit, salads and dairy products to be stored safely for longer periods, particularly during warm weather. Freezing is also one of the most commonly used processes, both commercially and domestically, for preserving a very wide range of foods, including prepared foods that would not have required freezing in their unprepared state. For example, potato waffles are stored in the freezer, but potatoes themselves require only a cool dark place to ensure many months' storage. Cold stores provide large-volume, long-term storage for strategic food stocks held in case of national emergency in many countries.

Heating to temperatures which are sufficient to kill microorganisms inside the food is a method used with perpetual stews. Milk is also boiled before storing to kill many microorganisms.

Salting or curing draws moisture from the meat through a process of osmosis. Meat is cured with salt or sugar, or a combination of the two. Nitrates and nitrites are also often used to cure meat and contribute the characteristic pink color, as well as inhibition of Clostridium botulinum.
It was a main method of preservation in medieval times and around the 1700s.

The earliest cultures used sugar as a preservative, and it was commonplace to store fruit in honey. Similar to pickled foods, sugar cane was brought to Europe through the trade routes. In northern climates without sufficient sun to dry foods, preserves are made by heating the fruit with sugar. "Sugar tends to draw water from the microbes (plasmolysis). This process leaves the microbial cells dehydrated, thus killing them. In this way, the food will remain safe from microbial spoilage." Sugar is used to preserve fruits, either in an anti-microbial syrup with fruit such as apples, pears, peaches, apricots and plums, or in crystallized form where the preserved material is cooked in sugar to the point of crystallization and the resultant product is then stored dry. This method is used for the skins of citrus fruit (candied peel), angelica and ginger. Sugaring can also be used in jams and jellies.

Smoking is used to lengthen the shelf life of perishable food items. This effect is achieved by exposing the food to smoke from burning plant materials such as wood. Smoke deposits a number of pyrolysis products onto the food, including the phenols syringol, guaiacol and catechol. These compounds aid in the drying and preservation of meats and other foods. Most commonly subjected to this method of food preservation are meats and fish that have undergone curing. Fruits and vegetables like paprika, cheeses, spices, and ingredients for making drinks such as malt and tea leaves are also smoked, but mainly for cooking or flavoring them. It is one of the oldest food preservation methods, which probably arose after the development of cooking with fire.

Pickling is a method of preserving food in an edible anti-microbial liquid. Pickling can be broadly classified into two categories: chemical pickling and fermentation pickling. In chemical pickling, the food is placed in an edible liquid that inhibits or kills bacteria and other micro-organisms. Typical pickling agents include brine (high in salt), vinegar, alcohol, and vegetable oil, especially olive oil but also many other oils. Many chemical pickling processes also involve heating or boiling so that the food being preserved becomes saturated with the pickling agent. Common chemically pickled foods include cucumbers, peppers, corned beef, herring, and eggs, as well as mixed vegetables such as piccalilli. In fermentation pickling, the food itself produces the preservation agent, typically by a process that produces lactic acid. Fermented pickles include sauerkraut, nukazuke, kimchi, surströmming, and curtido. Some pickled cucumbers are also fermented.

Sodium hydroxide (lye) makes food too alkaline for bacterial growth. Lye will saponify fats in the food, which will change its flavor and texture. Lutefisk uses lye in its preparation, as do some olive recipes. Modern recipes for century eggs also call for lye.

Canning involves cooking food, sealing it in sterile cans or jars, and boiling the containers to kill or weaken any remaining bacteria as a form of sterilization. It was invented by the French confectioner Nicolas Appert. By 1806, this process was used by the French Navy to preserve meat, fruit, vegetables, and even milk. Although Appert had discovered a new way of preservation, it wasn't understood until 1864, when Louis Pasteur found the relationship between microorganisms, food spoilage, and illness.
Foods have varying degrees of natural protection against spoilage and may require that the final step occur in a pressure cooker. High-acid fruits like strawberries require no preservatives to can and only a short boiling cycle, whereas marginal vegetables such as carrots require longer boiling and addition of other acidic elements. Low-acid foods, such as vegetables and meats, require pressure canning.

Food preserved by canning or bottling is at immediate risk of spoilage once the can or bottle has been opened. Lack of quality control in the canning process may allow ingress of water or micro-organisms. Most such failures are rapidly detected as decomposition within the can causes gas production and the can will swell or burst. However, there have been examples of poor manufacture (underprocessing) and poor hygiene allowing contamination of canned food by the obligate anaerobe Clostridium botulinum, which produces an acute toxin within the food, leading to severe illness or death. This organism produces no gas or obvious taste and remains undetected by taste or smell. Its toxin is denatured by cooking, however. Cooked mushrooms, handled poorly and then canned, can support the growth of Staphylococcus aureus, which produces a toxin that is not destroyed by canning or subsequent reheating.

Food may be preserved by cooking in a material that solidifies to form a gel. Such materials include gelatin, agar, maize flour, and arrowroot flour. Some foods naturally form a protein gel when cooked, such as eels and elvers, and sipunculid worms, which are a delicacy in Xiamen, in the Fujian province of the People's Republic of China. Jellied eels are a delicacy in the East End of London, where they are eaten with mashed potatoes. Potted meats in aspic (a gel made from gelatine and clarified meat broth) were a common way of serving meat off-cuts in the UK until the 1950s. Many jugged meats are also jellied.

Meat can be preserved by jugging. Jugging is the process of stewing the meat (commonly game or fish) in a covered earthenware jug or casserole. The animal to be jugged is usually cut into pieces, placed into a tightly sealed jug with brine or gravy, and stewed. Red wine and/or the animal's own blood is sometimes added to the cooking liquid. Jugging was a popular method of preserving meat up until the middle of the 20th century.

Burial of food can preserve it due to a variety of factors: lack of light, lack of oxygen, cool temperatures, pH level, or desiccants in the soil. Burial may be combined with other methods such as salting or fermentation. Most foods can be preserved in soil that is very dry and salty (thus a desiccant) such as sand, or soil that is frozen. Many root vegetables are very resistant to spoilage and require no other preservation than storage in cool dark conditions, for example by burial in the ground, such as in a storage clamp. Century eggs are created by placing eggs in alkaline mud (or other alkaline substance), resulting in their "inorganic" fermentation through raised pH instead of spoiling. The fermentation preserves them and breaks down some of the complex, less flavorful proteins and fats into simpler, more flavorful ones. Cabbage was traditionally buried in the fall in northern farms in the U.S. for preservation. Some methods keep it crispy while other methods produce sauerkraut. A similar process is used in the traditional production of kimchi. Sometimes meat is buried under conditions that cause preservation.
If buried on hot coals or ashes, the heat can kill pathogens, the dry ash can desiccate, and the earth can block oxygen and further contamination. If buried where the earth is very cold, the earth acts like a refrigerator. In Orissa, India, it is practical to store rice by burying it underground; this method keeps it for three to six months during the dry season.

The earliest form of curing was dehydration. To accelerate this process, salt is usually added. In the culinary world, it was common to choose raw salts from various sources (rock salt, sea salt, etc.). More modern "examples of salts that are used as preservatives include sodium chloride (NaCl), sodium nitrate (NaNO3) and sodium nitrite (NaNO2). Even at mild concentrations (up to 2%), sodium chloride, found in many food products, is capable of neutralizing the antimicrobial character of natural compounds."

Some foods, such as many cheeses, wines, and beers, use specific micro-organisms that combat spoilage from other less-benign organisms. These micro-organisms keep pathogens in check by creating an environment toxic for themselves and other micro-organisms by producing acid or alcohol. Methods of fermentation include, but are not limited to, starter micro-organisms, salt, hops, controlled (usually cool) temperatures and controlled (usually low) levels of oxygen. These methods are used to create the specific controlled conditions that will support the desirable organisms that produce food fit for human consumption. Fermentation is the microbial conversion of starch and sugars into alcohol. Not only can fermentation produce alcohol, but it can also be a valuable preservation technique. Fermentation can also make foods more nutritious and palatable. For example, drinking water in the Middle Ages was dangerous because it often contained pathogens that could spread disease. When the water is made into beer, the resulting alcohol kills any bacteria in the water that could make people sick. Additionally, the water now has the nutrients from the barley and other ingredients, and the microorganisms can also produce vitamins as they ferment.

Techniques of food preservation were also developed in research laboratories for commercial applications. Pasteurization is a process for preservation of liquid food. It was originally applied to combat the souring of young local wines. Today, the process is mainly applied to dairy products. In this method, milk is heated to about 70 °C for 15 to 30 seconds to kill the bacteria present in it and then cooled quickly to 10 °C to prevent the remaining bacteria from growing. The milk is then stored in sterilized bottles or pouches in cold places. This method was invented by Louis Pasteur, a French chemist, in 1862.

Vacuum-packing stores food in a vacuum environment, usually in an air-tight bag or bottle. The vacuum environment strips bacteria of the oxygen needed for survival. Vacuum-packing is commonly used for storing nuts to reduce loss of flavor from oxidization. A major drawback to vacuum packaging, at the consumer level, is that vacuum sealing can deform contents and rob certain foods, such as cheese, of their flavor.

Artificial food additives

Preservative food additives can be antimicrobial (inhibiting the growth of bacteria or fungi, including mold) or antioxidant (such as oxygen absorbers, which inhibit the oxidation of food constituents).
Common antimicrobial preservatives include calcium propionate, sodium nitrate, sodium nitrite, sulfites (sulfur dioxide, sodium bisulfite, potassium hydrogen sulfite, etc.) and disodium EDTA. Antioxidants include BHA and BHT. Other preservatives include formaldehyde (usually in solution), glutaraldehyde (kills insects), ethanol, and methylchloroisothiazolinone.

Irradiation of food is the exposure of food to ionizing radiation. The two types of ionizing radiation used are beta particles (high-energy electrons) and gamma rays (emitted from radioactive sources such as cobalt-60 or cesium-137). Treatment effects include killing bacteria, molds, and insect pests, reducing the ripening and spoiling of fruits, and at higher doses inducing sterility. The technology may be compared to pasteurization; it is sometimes called "cold pasteurization", as the product is not heated. The irradiation process is not directly related to nuclear energy, but it does use radioactive isotopes produced in nuclear reactors. Cobalt-60, for example, does not occur naturally and can only be produced through neutron bombardment of cobalt-59. Ionizing radiation at high energy levels is hazardous to life (hence its usefulness in sterilisation); for this reason, irradiation facilities have a heavily shielded irradiation room where the process takes place. Radiation safety procedures are used to ensure that neither the workers in such facilities nor the environment receives any radiation dose above administrative limits. Irradiated food does not and cannot become radioactive. National and international expert bodies have declared food irradiation "wholesome", and organizations of the United Nations, such as the World Health Organization and the Food and Agriculture Organization, endorse it; however, the wholesomeness of consuming such food is disputed by opponents and consumer organizations. International legislation on whether food may be irradiated or not varies worldwide from no regulation to full banning. Irradiation may allow lower-quality or contaminated foods to be rendered marketable. Approximately 500,000 tons of food items are irradiated per year worldwide in over 40 countries. These are mainly spices and condiments, with an increasing segment of fresh fruit irradiated for fruit fly quarantine.

Pulsed electric field electroporation

Pulsed electric field (PEF) electroporation is a method for processing cells by means of brief pulses of a strong electric field. PEF holds potential as a type of low-temperature alternative pasteurization process for sterilizing food products. In PEF processing, a substance is placed between two electrodes, then the pulsed electric field is applied. The electric field enlarges the pores of the cell membranes, which kills the cells and releases their contents. PEF for food processing is a developing technology still being researched. There have been limited industrial applications of PEF processing for the pasteurization of fruit juices. To date, several PEF-treated juices are available on the market in Europe. Furthermore, for several years a juice pasteurization application in the US has used PEF. For cell disintegration purposes, potato processors in particular show great interest in PEF technology as an efficient alternative to their preheaters. Potato applications are already operational in the US and Canada.
There are also commercial PEF potato applications in various countries in Europe, as well as in Australia, India and China.

Modifying the atmosphere is a way to preserve food by operating on the atmosphere around it. Salad crops that are notoriously difficult to preserve are now being packaged in sealed bags with an atmosphere modified to reduce the oxygen (O2) concentration and increase the carbon dioxide (CO2) concentration. There is concern that, although salad vegetables retain their appearance and texture in such conditions, this method of preservation may not retain nutrients, especially vitamins. There are two methods for preserving grains with carbon dioxide. One method is placing a block of dry ice in the bottom and filling the can with the grain. Another method is purging the container from the bottom with gaseous carbon dioxide from a cylinder or bulk supply vessel. Nitrogen gas (N2) at concentrations of 98% or higher is also used effectively to kill insects in the grain through hypoxia. However, carbon dioxide has an advantage in this respect, as it kills organisms through hypercarbia and hypoxia (depending on concentration), but it requires concentrations of above 35% or so. This makes carbon dioxide preferable for fumigation in situations where a hermetic seal cannot be maintained.

Controlled Atmospheric Storage (CA): "CA storage is a non-chemical process. Oxygen levels in the sealed rooms are reduced, usually by the infusion of nitrogen gas, from the approximate 21 percent in the air we breathe to 1 percent or 2 percent. Temperatures are kept at a constant 0 to 2 °C (32 to 36 °F). Humidity is maintained at 95 percent and carbon dioxide levels are also controlled. Exact conditions in the rooms are set according to the apple variety. Researchers develop specific regimens for each variety to achieve the best quality. Computers help keep conditions constant." "Eastern Washington, where most of Washington's apples are grown, has enough warehouse storage for 181 million boxes of fruit, according to a report done in 1997 by managers for the Washington State Department of Agriculture Plant Services Division. The storage capacity study shows that 67 percent of that space (enough for 121,008,000 boxes of apples) is CA storage."

Air-tight storage of grains (sometimes called hermetic storage) relies on the respiration of grain, insects, and fungi that can modify the enclosed atmosphere sufficiently to control insect pests. This is a method of great antiquity, as well as having modern equivalents. The success of the method relies on having the correct mix of sealing, grain moisture, and temperature.

Another technique subjects the surface of food to a "flame" of ionized gas molecules (a nonthermal plasma), such as helium or nitrogen. This causes micro-organisms on the surface to die off.

High-pressure food preservation

High-pressure food preservation, or pascalization, refers to a food preservation technique that makes use of high pressure. "Pressed inside a vessel exerting 70,000 pounds per square inch (480 MPa) or more, food can be processed so that it retains its fresh appearance, flavor, texture and nutrients while disabling harmful microorganisms and slowing spoilage." By 2005, the process was being used for products ranging from orange juice to guacamole to deli meats and widely sold.

Biopreservation is the use of natural or controlled microbiota or antimicrobials as a way of preserving food and extending its shelf life.
Beneficial bacteria or the fermentation products produced by these bacteria are used in biopreservation to control spoilage and render pathogens inactive in food. It is a benign ecological approach which is gaining increasing attention. Of special interest are lactic acid bacteria (LAB). Lactic acid bacteria have antagonistic properties that make them particularly useful as biopreservatives. When LABs compete for nutrients, their metabolites often include active antimicrobials such as lactic acid, acetic acid, hydrogen peroxide, and peptide bacteriocins. Some LABs produce the antimicrobial nisin, which is a particularly effective preservative. These days, LAB bacteriocins are used as an integral part of hurdle technology. Using them in combination with other preservative techniques can effectively control spoilage bacteria and other pathogens, and can inhibit the activities of a wide spectrum of organisms, including inherently resistant Gram-negative bacteria.

Hurdle technology is a method of ensuring that pathogens in food products can be eliminated or controlled by combining more than one approach. These approaches can be thought of as "hurdles" the pathogen has to overcome if it is to remain active in the food. The right combination of hurdles can ensure all pathogens are eliminated or rendered harmless in the final product. Hurdle technology has been defined by Leistner (2000) as an intelligent combination of hurdles that secures the microbial safety and stability as well as the organoleptic and nutritional quality and the economic viability of food products. The organoleptic quality of the food refers to its sensory properties, that is its look, taste, smell, and texture. Examples of hurdles in a food system are high temperature during processing, low temperature during storage, increasing the acidity, lowering the water activity or redox potential, and the presence of preservatives or biopreservatives. According to the type of pathogens and how risky they are, the intensity of the hurdles can be adjusted individually to meet consumer preferences in an economical way, without sacrificing the safety of the product.

Principal hurdles used for food preservation (after Leistner, 1995):

| Hurdle | Symbol | Application |
| --- | --- | --- |
| Low temperature | T | Chilling, freezing |
| Reduced water activity | aw | Drying, curing, conserving |
| Increased acidity | pH | Acid addition or formation |
| Reduced redox potential | Eh | Removal of oxygen or addition of ascorbate |
| Biopreservatives | | Competitive flora such as microbial fermentation |
| Other preservatives | | Sorbates, sulfites, nitrites |

- "Preserving Food without Freezing or Canning", Chelsea Green Publishing, 1999
- Stacy Simon (October 26, 2015). "World Health Organization Says Processed Meat Causes Cancer". Cancer.org.
- James Gallagher (26 October 2015). "Processed meats do cause cancer - WHO". BBC.
- "IARC Monographs evaluate consumption of red meat and processed meat" (PDF). International Agency for Research on Cancer. 26 October 2015.
- Nummer, B. (2002). "Historical Origins of Food Preservation" http://nchfp.uga.edu/publications/nchfp/factsheets/food_pres_hist.html. (Accessed on May 5, 2014)
- Msagati, T. (2012). "The Chemistry of Food Additives and Preservatives"
- Nicolas Appert inventeur et humaniste by Jean-Paul Barbier, Paris, 1994 and http://www.appert-aina.com
- anon., Food Irradiation – A technique for preserving and improving the safety of food, WHO, Geneva, 1991
- World Health Organization. Wholesomeness of irradiated food. Geneva, Technical Report Series No.
659, 1981 - World Health Organization. High-Dose Irradiation: Wholesomeness of Food Irradiated With Doses Above 10 kGy. Report of a Joint FAO/IAEA/WHO Study Group. Geneva, Switzerland: World Health Organization; 1999. WHO Technical Report Series No. 890 - Hauther,W. & Worth, M., Zapped! Irradiation and the Death of Food, Food & Water Watch Press, Washington, DC, 2008 - Consumers International – Home[dead link] - NUCLEUS – Food Irradiation Clearances - Food irradiation – Position of ADA J Am Diet Assoc. 2000;100:246-253 - C.M. Deeley, M. Gao, R. Hunter, D.A.E. Ehlermann, The development of food irradiation in the Asia Pacific, the Americas and Europe; tutorial presented to the International Meeting on Radiation Processing, Kuala Lumpur, 2006. http://www.doubleia.org/index.php?sectionid=43&parentid=13&contentid=494 - Annis, P.C. and Dowsett, H.A. 1993. Low oxygen disinfestation of grain: exposure periods needed for high mortality. Proc. International Conference on Controlled Atmosphere and Fumigation. Winnipeg, June 1992, Caspit Press, Jerusalem, pp 71-83. - Annis, P.C. and Morton, R. 1997. The acute mortality effects of carbon dioxide on various life stages of Sitophilus oryzae. J. Stored Prod.Res. 33. 115-124 - Controlled Atmospheric Storage (CA) :: Washington State Apple Commission - Various authors, Session 1: Natural Air-Tight Storage In: Shejbal, J., ed., Controlled Atmosphere Storage of Grains, Elsevier: Amsterdam, 1-33 - Annis P.C. and Banks H.J. 1993. Is hermetic storage of grains feasible in modern agricultural systems? In "Pest control and sustainable agriculture" Eds S.A. Corey, D.J. Dall and W.M. Milne. CSIRO, Australia. 479-482 - Laine Welch (May 18, 2013). "Laine Welch: Fuel cell technology boosts long-distance fish shipping". Anchorage Daily News. Retrieved May 19, 2013. - NWT magazine, December 2012 - "High-Pressure Processing Keeps Food Safe". Military.com. Archived from the original on 2008-02-02. Retrieved 2008-12-16. Pressed inside a vessel exerting 70,000 pounds per square inch or more, food can be processed so that it retains its fresh appearance, flavor, texture and nutrients while disabling harmful microorganisms and slowing spoilage. - Ananou S, Maqueda M, Martínez-Bueno M and Valdivia E (2007) "Biopreservation, an ecological approach to improve the safety and shelf-life of foods" In: A. Méndez-Vilas (Ed.) Communicating Current Research and Educational Topics and Trends in Applied Microbiology, Formatex. ISBN 978-84-611-9423-0. - Yousef AE and Carolyn Carlstrom C (2003) Food microbiology: a laboratory manual Wiley, Page 226. ISBN 978-0-471-39105-0. - FAO: Preservation techniques Fisheries and aquaculture department, Rome. Updated 27 May 2005. Retrieved 14 March 2011. - Alzamora SM, Tapia MS and López-Malo A (2000) Minimally processed fruits and vegetables: fundamental aspects and applications Springer, Page 266. ISBN 978-0-8342-1672-3. - Alasalvar C (2010) Seafood Quality, Safety and Health Applications John Wiley and Sons, Page 203. ISBN 978-1-4051-8070-2. - Leistner I (2000) "Basic aspects of food preservation by hurdle technology" International Journal of Food Microbiology, 55:181–186. - Leistner L (1995) "Principles and applications of hurdle technology" In Gould GW (Ed.) New Methods of Food Preservation, Springer, pp. 1-21. ISBN 978-0-8342-1341-8. - Lee S (2004) "Microbial Safety of Pickled Fruits and Vegetables and Hurdle Technology" Internet Journal of Food Safety, 4: 21–32. - Riddervold, Astri. Food Conservation. ISBN 978-0-907325-40-6. - Abakarov, Nunes. 
"Thermal food processing optimization: algorithms and software" (PDF). Food Engineering. - Abakarov, Sushkov, Mascheroni. "Multi-criteria optimization and decision-making approach for improving of food engineering processes" (PDF). International Journal of Food Studies. |Wikimedia Commons has media related to Food preservation.| - A ca. 1894 Gustav Hammer & Co. commercial cooking machinery catalogue. - Dehydrating Food - Preserving foods ~ from the Clemson Extension Home and Garden Information Center - National Center for Home Food Preservation[dead link] - BBC News Online – US army food... just add urine - Home Economics Archive: Tradition, Research, History (HEARTH) An e-book collection of over 1,000 classic books on home economics spanning 1850 to 1950, created by Cornell University's Mann Library. - Survival guide – Refrigerate food without electrical power - Pulsed electric field processing for the food and beverage industry and scientific sectors
https://en.wikipedia.org/wiki/Food_preservation
Georgia General Assembly

A form of representative government has existed in Georgia since January 1751. Its modern embodiment, known as the Georgia General Assembly, is one of the largest state legislatures in the nation. The General Assembly consists of two chambers, the House of Representatives and the senate. The General Assembly has operated continuously since 1777, when Georgia became one of the thirteen original states and revoked its status as a colony of Great Britain.

Since the General Assembly is the legislative body for the state, the location of its meetings has moved along with each move of the state capital. In its earliest days the legislature met first in Savannah, and subsequently in Augusta, Louisville, and Milledgeville. In 1868 the capital—and the assembly—settled permanently in Atlanta. Today the General Assembly meets in the state capitol, an impressive limestone and marble building with a distinctive gold dome and granite foundation. Each chamber is housed in a separate wing.

Every two years, Georgia voters elect members of the legislature. These elections occur in even-numbered years (e.g., 2002, 2004, 2006). The qualifications for holding office in both houses, as well as the size of both chambers, are established in the Georgia state constitution.

The house is led by the Speaker of the House; the governor's floor leaders represent the governor's interests in the chamber. Much of the work of the house is done in thirty-six standing committees. At the start of each two-year session, each member is assigned to two or three committees, which are organized by such topics as agriculture, education, or taxes. Each political party's leadership selects members to serve on the committees, which ensures that the parties are effectively represented in the process. Thus the party composition of committees is proportional to the party composition of the house. The Speaker of the House selects the chairs of each committee; since the Speaker belongs to the majority party in the chamber, all the committees are chaired by members of the majority party. Legislation passes through the committees, where it can be amended, changed, or killed. Members, therefore, actively seek to be placed on committees that deal with issues important to them personally and to their constituents.

To serve in the House of Representatives, an individual must be at least twenty-one years old. Other requirements include residency for at least a year in the district that he or she represents and residency in Georgia for at least two years.

The senate is presided over by the lieutenant governor. Unlike the Speaker, who is elected by the members of the house, the lieutenant governor is elected by all the voters of the state. Thus, the lieutenant governor may belong to a different political party than the majority of the senators, as was the case in the 2003-4 and 2005-6 sessions when Lieutenant Governor Mark Taylor, a Democrat, presided over a majority-Republican senate. This scenario requires careful political balancing and the investment of significant authority in the president pro tempore of the senate, who is the leader of the majority party.

There are twenty-six committees in the senate, and senators are required to serve on at least three committees during their two-year terms in the General Assembly. As in the house, the party affiliations of senate committees are proportional to the party affiliations of the senate as a whole. The lieutenant governor appoints the chairs of the committees, which resulted in an unusual situation in the 2003-4 session.
The Republican Party was the majority party in the senate, but the lieutenant governor appointed Democrats to chair some committees.

To serve in the senate, an individual must be at least twenty-five years old. Other requirements include residency for at least a year in the district that he or she represents and residency in Georgia for at least two years.

Each January representatives congregate at the state capitol for the start of the legislative session, which lasts for forty days, to deliberate matters of importance to the citizens of the state. The forty days are not always continuous, and during the time when the chambers are not in session, members generally work in committees or return home to meet with constituents. The General Assembly uses a committee system to accomplish its legislative tasks. Because committees meet year round, even when the legislature is not in session, they allow the legislative process in Georgia to move more efficiently. Typically, the legislature adjourns in late March, after the major legislative business has been completed. From time to time the governor may call the General Assembly into a special session for a set number of days.

The most important function of the General Assembly is to pass the state's operating budget each year. In fact, approximately half of the hours spent in session are related to the budget. This includes establishing spending priorities and setting tax rates. Additionally, lawmakers must enact other laws on a broad array of topics from education to roads and transportation. Another task of the General Assembly is to consider all proposed amendments to the Georgia constitution. A two-thirds vote in both houses is the primary means for approving resolutions to place proposed constitutional changes on the ballot. Voters will then decide if the constitution is to be amended. A special task that the General Assembly must undertake every ten years is the drawing of legislative district lines to create the maps used for the state house and state senate district boundaries. The General Assembly also establishes the district lines for Georgia's delegation to the U.S. House of Representatives.

A number of famous Georgians have served in the General Assembly. Jimmy Carter, the only Georgian ever to be elected president of the United States, served in the state senate during the 1960s. Several civil rights leaders, including Julian Bond and Hosea Williams, have served in the General Assembly. Most governors and U.S. senators from the state served in one of the two chambers before running for higher office. Several members have served for decades, among them Hugh Gillis of Soperton, with more than fifty years of combined service in both houses of the legislature. No discussion of longevity in the General Assembly would be complete without mention of Tom Murphy of Bremen, who was Speaker of the House between 1974 and 2002. Murphy was the longest-serving Speaker in the nation when he was defeated in his 2002 reelection bid. The average General Assembly member is white and male; in 2015, 23 percent of the members were women and 25 percent were African American.
http://www.georgiaencyclopedia.org/articles/government-politics/georgia-general-assembly
5 Written questions

5 Matching questions
- homozygous recessive
- DNA repair mechanism
- cytosine (C)
- a a pair of recessive alleles at a locus on homologous chromosomes; e.g., aa.
- b One of several processes by which enzymes repair broken or mismatched DNA strands
- c A nitrogen-containing base in nucleotides; also, base-pairs with guanine in DNA and RNA.
- d allele effects that are masked by a dominant allele on the chromosome.
- e The stage between mitotic divisions when a cell grows in mass, doubles its cytoplasm, and replicates its DNA.

5 Multiple choice questions
- Process by which a cell duplicates its DNA before it divides.
- an allele that masks the effects of a recessive allele paired with it
- removes phosphate group and inhibits mitosis
- Stage of mitosis and meiosis in which chromosomes condense and become attached to spindles.
- The location of a gene on a chromosome

5 True/False questions
- linkage group → All genes tend to stay together during meiosis but may be separated by cross-overs.
- genotype → specific alleles carried by an individual
- homologous chromosome → except for the nonidentical sex chromosomes, members of a pair have the same length, shape, and genes.
- continuous variation → a range of small differences in a trait
- zygote → Mature, haploid reproductive cell
https://quizlet.com/3276647/test
The following sets of guidelines or 'ground rules' are examples that can be given to a class for use, or can provide a basis for a discussion about developing an atmosphere of mutual respect and collective inquiry. Many teachers also find it productive to have a discussion with their students in which they collaboratively generate a list of discussion guidelines or community agreements to set expectations for their interactions. (from the CRLT GSI Guidebook.) Guidelines for Class Participation 1. Respect others’ rights to hold opinions and beliefs that differ from your own. Challenge or criticize the idea, not the person. 2. Listen carefully to what others are saying even when you disagree with what is being said. Comments that you make (asking for clarification, sharing critiques, expanding on a point, etc.) should reflect that you have paid attention to the speaker’s comments. 3. Be courteous. Don’t interrupt or engage in private conversations while others are speaking. 4. Support your statements. Use evidence and provide a rationale for your points. 5. Allow everyone the chance to talk. If you have much to say, try to hold back a bit; if you are hesitant to speak, look for opportunities to contribute to the discussion. Read more »
http://www.crlt.umich.edu/category/tags/inclusive-teaching?page=1
The Born-Oppenheimer approximation is a way to simplify the complicated Schrödinger equation for a molecule. The nuclei and electrons attract one another through Coulomb forces of equal magnitude, so each exerts the same force on the other. Because a nucleus is far more massive than an electron, however, it responds to that force with a velocity that is almost negligible. The Born-Oppenheimer approximation takes advantage of this and assumes that, while the electronic Schrödinger equation is being solved, the motion of the nuclei can be ignored; that is, the nuclei are treated as stationary while the electrons move around them. The motion of the nuclei and the electrons can then be separated, and the electronic and nuclear problems can be solved with independent wavefunctions. The wavefunction for the molecule thus becomes

Ψmolecule = Ψelectrons × Ψnuclei

The Born-Oppenheimer approximation can be used to calculate how a molecule's energy depends on its bond length. For each fixed separation of the nuclei, the electronic wavefunction and energy can be calculated; repeating the calculation over a range of separations maps out the molecule's energy as a function of bond length.
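As a compact sketch of this separation (the notation is ours, not the original page's: r stands for the electronic coordinates and R for the nuclear coordinates), the molecular wavefunction is factored into electronic and nuclear parts, and the electronic Schrödinger equation is solved with the nuclei clamped in place:

\[
\Psi_{\mathrm{molecule}}(\mathbf{r},\mathbf{R}) \;\approx\; \psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R}),
\qquad
\hat{H}_{\mathrm{el}}(\mathbf{r};\mathbf{R})\,\psi_{\mathrm{el}}(\mathbf{r};\mathbf{R}) \;=\; E_{\mathrm{el}}(\mathbf{R})\,\psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})
\]

Solving the second equation at a series of clamped internuclear distances traces out E_el(R), the potential energy curve whose minimum corresponds to the equilibrium bond length.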
http://chemwiki.ucdavis.edu/Theoretical_Chemistry/Chemical_Bonding/General_Principles_of_Chemical_Bonding/Born_Oppenheimer_Approximation
The Choctaw freedmen were enslaved African Americans who were emancipated after the American Civil War and granted citizenship in the Choctaw Nation. Their freedom and citizenship were requirements of the 1866 treaty the US made with the Choctaw; a new treaty was required because the Choctaw had sided with the Confederate States of America during the war. The Confederacy had promised the Choctaw and other tribes of Indian Territory a Native American state if it won the war. "Freedmen" is one of the terms given to the newly emancipated people after slavery was abolished in the United States. The Choctaw freedmen were officially adopted as full members into the Choctaw Nation in 1885.

Like other Native American tribes, the Choctaw had customarily held slaves as captives from warfare. As they adopted elements of European culture, such as larger farms and plantations, they began to adapt their system to that of purchasing and holding chattel slave workers of African-American descent. Moshulatubbee had slaves, as did many of the European men, generally fur traders, who married into the Choctaw nation. The Folsom and LeFlore families were some of the Choctaw planters who held the most slaves at the time of Indian Removal and afterward.

Slavery lasted in the Choctaw Nation until 1866. Former slaves of the Choctaw Nation came to be called the Choctaw freedmen; then and later, a number had Choctaw as well as African and sometimes European ancestry. At the time of Indian Removal, the Beams family was a part of the Choctaw Nation. They were known to have been of African descent and also free.

- African Americans with native heritage
- Cherokee freedmen
- Creek Freedmen
- Black Seminoles
- Oak Hill Industrial Academy
- "1885 Choctaw & Chickasaw Freedmen Admitted To Citizenship". Retrieved 2008-09-04.
- "The Choctaw Freedmen of Oklahoma". Retrieved 2008-02-14.
https://en.wikipedia.org/wiki/Choctaw_Freedmen
Civil rights movement

The civil rights movement was a series of worldwide political movements for equal rights. In many of these movements, people who were treated as unequal refused to cooperate with unjust rules, using nonviolent protest to show that they deserved equal rights. Other events were more violent, as some people rebelled against those in power. The process was long, and in many countries it changed little. Many of these movements did not fully achieve their goals. However, their efforts helped improve the legal rights of groups that had been treated unequally. The main goal of the civil rights movement was to ensure that the rights of all people, including minorities, are equally protected by the law. Civil rights movements differ from country to country. The LGBT rights movement, the women's rights movement and many racial minority rights movements continue to fight for equal rights.

Africa

Angola

The Angolan War of Independence lasted from 1961 to 1975. Angola fought against Portugal. Portugal was making people in Angola farm cotton. Three different groups in Angola fought against Portugal. Many people died during the war.

Guinea

The Guinea-Bissau War of Independence was an armed conflict and national liberation struggle that took place in Portuguese Guinea (modern Guinea-Bissau) between 1963 and 1974.

Mozambican

The Mozambican War of Independence was a conflict from 1964 until 1975. It was between the Mozambique Liberation Front, or Frelimo (Portuguese: Frente de Libertação de Moçambique), and Portugal. The Portuguese military was largely successful in the conflict against the guerrilla forces, but because of a coup d'état in Portugal, Mozambique gained independence on 25 June 1975.

Ireland

Northern Ireland saw the formation of the Campaign for Social Justice in Belfast in 1964. This was followed by the Northern Ireland Civil Rights Association (NICRA) on 29 January 1967. They wanted to repeal the Special Powers Acts of 1922, 1933, and 1943, disband the B Specials police force, end the gerrymandering of local elections, and end discrimination in housing and government jobs. These demands for reform caused a backlash among some in the unionist majority, which helped spark The Troubles, a conflict that lasted for more than 30 years. The NICRA used the same tactics as the American Civil Rights Movement: marches, pickets, sit-ins and protests. The first civil rights march in Northern Ireland was held on 24 August 1968 between Coalisland and Dungannon.

United States

Segregation

Segregation was an attempt by white Southerners to separate the races. They did this to strengthen white pride and to maintain power over African Americans. Segregation was often called the Jim Crow system. Segregation became common in Southern states following the end of Reconstruction in 1877. During Reconstruction, which followed the Civil War (1861-1865), Republican governments in the Southern states included black officeholders. The Reconstruction governments had passed laws opening up economic and political opportunities for blacks. By 1877 the Democratic Party had gained control of government in the Southern states, and these Southern Democrats wanted to reverse black advances made during Reconstruction. To that end, they began to pass local and state laws that specified certain places "For Whites Only" and others for "Colored".
Blacks had separate schools, transportation, restaurants, and parks. Over the next 75 years, Jim Crow signs went up to separate the races in every possible place. The system of segregation also included the denial of voting rights, known as disfranchisement. Between 1890 and 1910 all Southern states passed laws that effectively prevented blacks from voting. The requirements included the ability to read and write and, in some states, property ownership; many blacks had no access to education or property. Because blacks could not vote, they were powerless to prevent whites from segregating all aspects of Southern life. They could do little to stop discrimination in public places, education, economic opportunities, or housing.

Conditions for blacks in Northern states were somewhat better. Blacks were usually free to vote in the North, but there were so few of them that their voices were barely heard. Segregated facilities were not as common in the North, although blacks were usually denied entrance to the best hotels and restaurants. Schools in New England were usually integrated; however, those in the Midwest generally were not.

Montgomery Bus Boycott

On December 1, 1955, Rosa Parks, a member of the Montgomery, Alabama, branch of the NAACP (National Association for the Advancement of Colored People), was told to give up her seat on a city bus to a white person. When Parks refused to move, she was arrested. The local NAACP, led by Edgar D. Nixon, recognized that the arrest of Parks might rally local blacks to protest segregated buses. Montgomery's black community had long been angry about their mistreatment on city buses, where white drivers were often rude and abusive. The community had previously considered a boycott of the buses.

The Montgomery Bus Boycott was a success, with support from the 50,000 blacks in Montgomery. It lasted for more than a year. This event showed the American public that blacks in the South would not stop protesting until the end of segregation. A federal court ordered Montgomery's buses desegregated in November 1956. The boycott ended with the blacks winning the right to sit wherever they wanted.

A young Baptist minister named Martin Luther King, Jr., was president of the Montgomery Improvement Association, the organization that directed the boycott. The protest made King a national figure. King became the president of the Southern Christian Leadership Conference (SCLC) when it was founded in 1957. SCLC sought to complement the NAACP's legal strategy by encouraging the use of nonviolence. These activities included marches, demonstrations, and boycotts. The violent white response to black direct action eventually forced the federal government to confront the issues of injustice and racism in the South.

In addition to his large following among blacks, King had a powerful appeal to liberal Northerners that helped him influence national public opinion. His advocacy of nonviolence attracted supporters among peace activists. He formed alliances in the American Jewish community. He also developed supporters among the ministers of wealthy, influential Protestant congregations in Northern cities. King often preached to those congregations, where he raised funds for SCLC.

Chicano Movement

The Chicano Movement was a political, social, and cultural movement by Mexican Americans. The Chicano Movement addressed negative ethnic stereotypes of Mexican people in the media and among Americans. People such as Tiburcio Vasquez and Joaquin Murietta became folk heroes to Mexican Americans.
They had refused to obey White Americans.

American Indian Movement

The American Indian Movement (AIM) is a Native American activist organization in the United States. It was founded in 1968 in Minneapolis, Minnesota. The organization was formed to address issues concerning the Native American urban community in Minneapolis, including poverty, housing, treaty issues, and police harassment.

Gender equality

The first wave of the feminist movement focused on suffrage, which won women the right to vote. The second wave focused on economic equality. Lesbian rights are also part of the women's rights movement; lesbian feminist groups, such as the Lavender Menace, were lesbian activist organizations.

LGBT rights and gay liberation

Events in the Hawaii Supreme Court prompted the United States Congress to create the Defense of Marriage Act in 1996. This act forbade the federal government from recognizing same-sex marriages. Currently 30 states have passed state constitutional amendments that ban same-sex marriage. However, Connecticut, Massachusetts, New Mexico, New Jersey, New York, Rhode Island, and Vermont legalized gay marriage.

Before 1993, lesbian and gay people were not allowed to serve in the US military. Under the "Don't ask, don't tell" (DADT) policy, they were only allowed to serve in the military if they did not tell anyone of their sexual orientation. The Don't Ask, Don't Tell Repeal Act of 2010 allowed homosexual men and women to serve openly in the armed forces. Since September 20, 2011, gays, lesbians, and bisexuals have been able to serve openly. Transsexual and intersex service-members, however, are still banned from serving openly, due to Department of Defense medical policies which consider gender identity disorder to be a medically disqualifying condition.

People who oppose gay rights in the United States have mainly been political and religious conservatives. These people cite a number of Bible passages from the Old and New Testaments as their reason. Opposition to gay rights is strongest in the South and in other states with a large rural population. Many organizations have opposed the gay rights movement. These include the American Family Association, the Christian Coalition, Family Research Council, Focus on the Family, Save Our Children, NARTH, the national Republican Party, the Roman Catholic Church, The Church of Jesus Christ of Latter-day Saints (LDS Church), the Southern Baptist Convention, Alliance for Marriage, Alliance Defense Fund, Liberty Counsel, and the National Organization for Marriage. A number of these groups have been named as anti-gay hate groups by the Southern Poverty Law Center.

Germany

The Civil Rights Movement in Germany was a left-wing backlash against the post-Nazi Party era of the country. The movement took place mostly among disillusioned students and was largely a protest movement, similar to others around the globe during the late 1960s.

France

A general strike broke out across France in May 1968. It grew into a near-revolutionary situation. It was discouraged by the French Communist Party. It was finally suppressed by the government, which accused the communists of plotting against the Republic.
Some philosophers and historians have argued that the rebellion was the single most important revolutionary event of the 20th century because it was not carried out by a single demographic group, such as workers or racial minorities, but was rather a purely popular uprising, superseding ethnic, cultural, age and class boundaries.

Books
- Manfred Berg and Martin H. Geyer; Two Cultures of Rights: The Quest for Inclusion and Participation in Modern America and Germany Cambridge University Press, 2002
- Jack Donnelly and Rhoda E. Howard; International Handbook of Human Rights Greenwood Press, 1987
- David P. Forsythe; Human Rights in the New Europe: Problems and Progress University of Nebraska Press, 1994
- Joe Foweraker and Todd Landman; Citizenship Rights and Social Movements: A Comparative and Statistical Analysis Oxford University Press, 1997
- Mervyn Frost; Constituting Human Rights: Global Civil Society and the Society of Democratic States Routledge, 2002
- Marc Galanter; Competing Equalities: Law and the Backward Classes in India University of California Press, 1984
- Raymond D. Gastil and Leonard R. Sussman, eds.; Freedom in the World: Political Rights and Civil Liberties, 1986-1987 Greenwood Press, 1987
- David Harris and Sarah Joseph; The International Covenant on Civil and Political Rights and United Kingdom Law Clarendon Press, 1995
- Steven Kasher; The Civil Rights Movement: A Photographic History (1954–1968) Abbeville Publishing Group (Abbeville Press, Inc.), 2000
- Francesca Klug, Keir Starmer, Stuart Weir; The Three Pillars of Liberty: Political Rights and Freedoms in the United Kingdom Routledge, 1996
- Fernando Santos-Granero and Frederica Barclay; Tamed Frontiers: Economy, Society, and Civil Rights in Upper Amazonia Westview Press, 2000
- Paul N. Smith; Feminism and the Third Republic: Women's Political and Civil Rights in France, 1918-1940 Clarendon Press, 1996
- Jorge M. Valadez; Deliberative Democracy: Political Legitimacy and Self-Determination in Multicultural Societies Westview Press, 2000

References
- The Decolonization of Portuguese Africa: Metropolitan Revolution and the Dissolution of Empire by Norrie MacQueen
- Mozambique since Independence: Confronting Leviathan by Margaret Hall, Tom Young; reviewed by Stuart A. Notholt, African Affairs, Vol. 97, No. 387 (Apr., 1998), pp. 276-278, JSTOR
- Miner, Marlyce. "The American Indian Movement"
- "Republican Party 2004 Platform" (PDF). http://www.gop.com/images/2004platform.pdf.
- "LDS Newsroom – Same-Gender Attraction". April 8, 2008. http://newsroom.lds.org/ldsnewsroom/eng/public-issues/same-gender-attraction. Retrieved April 8, 2008.
- "SBC Officially Opposes 'Homosexual Marriage'". The Southern Baptist Convention. July 26, 2003. http://www.reclaimamerica.org/PAGES/NEWS/news.aspx?story=1264. Retrieved July 5, 2006.
- Schlatter, Evelyn, "18 Anti-Gay Groups and Their Propaganda", Intelligence Report Winter 2010 (140), http://www.splcenter.org/get-informed/intelligence-report/browse-all-issues/2010/winter/the-hard-liners, retrieved January 31, 2011

Other websites
- We Shall Overcome: Historic Places of the Civil Rights Movement, a National Park Service Discover Our Shared Heritage Travel Itinerary
- A Columbia University Resource for Teaching African American History
- Martin Luther King, Jr. and the Global Freedom Struggle, an encyclopedia presented by the Martin Luther King, Jr.
Research and Education Institute at Stanford University
- Civil Rights entry by Andrew Altman in the Stanford Encyclopedia of Philosophy
- Martin Luther King, Jr. and the Global Freedom Struggle ~ an online multimedia encyclopedia presented by the King Institute at Stanford University, includes information on over 1000 civil rights movement figures, events and organizations
- "CivilRightsTravel.com" ~ a visitors guide to key sites from the civil rights movement
- The History Channel: Civil Rights Movement
- Civil Rights in America: Connections to a Movement
https://simple.wikipedia.org/wiki/Civil_rights_movement
Although you might have heard people talk about a gene for red hair, green eyes or other characteristics, it's important to remember that genes code for proteins, not traits. While your genetic makeup does indeed determine physical traits like eye color, hair color and so forth, your genes affect these traits indirectly by way of the proteins created via DNA.

Your DNA carries information in the sequence of base pairs of its nucleotides. These biological molecules, the building blocks of DNA, are often abbreviated with the first letter of their names: adenine (A), thymine (T), guanine (G) and cytosine (C). The types and sequence of nucleotides in DNA determine the types and sequence of nucleotides in RNA. This in turn determines the types and order of amino acids included in proteins. Specific three-letter groups of RNA nucleotides, called codons, code for specific amino acids. The combination UUU (TTT in the DNA coding strand), for example, codes for the amino acid phenylalanine. Regulatory regions of the gene also contribute to protein synthesis by determining when the gene will be switched on or off.

In active genes, genetic information determines which proteins are synthesized and when synthesis is turned on or off. These proteins fold into complicated three-dimensional structures, somewhat like molecular origami. Because each amino acid has specific chemical characteristics, the sequence of amino acids determines the structure and shape of a protein. For example, some amino acids attract water, and others are repelled by it. Some amino acids can form weak bonds to each other, but others cannot. Different combinations and sequences of these chemical characteristics determine the unique three-dimensional folded shape of each protein.

Structure & Function

The structure of a protein determines its function. Proteins that catalyze (accelerate) chemical reactions, for example, have "pockets," which can bind specific chemicals and make it easier for a particular reaction to occur. Variations in the DNA code of a gene can change either the structure of a protein or when and where it is produced. If these variations change the protein structure, they could also change its function. Variations in a gene can affect traits in several ways. Variations in proteins involved in growth and development, for example, can give rise to differences in physical features like height. Pigments of skin and hair color are produced by enzymes, proteins that catalyze chemical reactions. Variations in both the structure and quantity of the proteins produced give rise to different amounts of skin and hair pigment and therefore different colors of hair and skin.

- Kimball's Biology Pages: The Genetic Code
- Molecular Cell Biology; Harvey Lodish, Arnold Berk, Chris Kaiser, Monty Krieger, Matthew P. Scott, Anthony Bretscher, Hidde Ploegh and Paul Matsudaira
- Kimball's Biology Pages: Proteins
- Kimball's Biology Pages: Enzymes
- Sandwalk: Human MC1R Gene Controls Hair Color and Skin Color
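To make the DNA → RNA → protein mapping concrete, here is a minimal Python sketch (not from the source article): it transcribes a short DNA coding-strand sequence into RNA and translates it with a deliberately partial codon table. The example sequence and helper names are invented for illustration.

```python
# Illustrative sketch of the DNA -> RNA -> protein mapping described above.
# The codon table is intentionally partial; a complete one has 64 entries.
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",   # phenylalanine
    "AUG": "Met",                 # methionine (start codon)
    "GGC": "Gly",                 # glycine
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_coding_strand: str) -> str:
    """RNA has the same sequence as the DNA coding strand, with U in place of T."""
    return dna_coding_strand.upper().replace("T", "U")

def translate(rna: str) -> list:
    """Read the RNA three letters at a time and look up each codon."""
    amino_acids = []
    for i in range(0, len(rna) - 2, 3):
        residue = CODON_TABLE.get(rna[i:i + 3], "???")  # unknown codons flagged
        if residue == "STOP":
            break
        amino_acids.append(residue)
    return amino_acids

dna = "ATGTTTGGCTAA"          # hypothetical coding-strand fragment
rna = transcribe(dna)         # -> "AUGUUUGGCUAA"
print(rna, translate(rna))    # -> AUGUUUGGCUAA ['Met', 'Phe', 'Gly']
```

The point of the sketch is simply that DNA letters determine a protein's amino acid sequence, and only through that sequence (and the folded structure it produces) do they influence a trait.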
http://science.opposingviews.com/relationship-between-dna-bases-genes-proteins-traits-2074.html
4.0625
Electrical resonance occurs in an electric circuit at a particular resonance frequency when the imaginary parts of impedances or admittances of circuit elements cancel each other. In some circuits this happens when the impedance between the input and output of the circuit is almost zero and the transfer function is close to one. Resonance of a circuit involving capacitors and inductors occurs because the collapsing magnetic field of the inductor generates an electric current in its windings that charges the capacitor, and then the discharging capacitor provides an electric current that builds the magnetic field in the inductor. This process is repeated continually. An analogy is a mechanical pendulum. At resonance, the series impedance of the two elements is at a minimum and the parallel impedance is at maximum. Resonance is used for tuning and filtering, because it occurs at a particular frequency for given values of inductance and capacitance. It can be detrimental to the operation of communications circuits by causing unwanted sustained and transient oscillations that may cause noise, signal distortion, and damage to circuit elements. Parallel resonance or near-to-resonance circuits can be used to prevent the waste of electrical energy, which would otherwise occur while the inductor built its field or the capacitor charged and discharged. As an example, asynchronous motors waste inductive current while synchronous ones waste capacitive current. The use of the two types in parallel makes the inductor feed the capacitor, and vice versa, maintaining the same resonant current in the circuit, and converting all the current into useful work. Since the inductive reactance and the capacitive reactance are of equal magnitude, ωL = 1/ωC, so ω = 1/√(LC), where ω = 2πf is the angular frequency; the resonant frequency is therefore f = 1/(2π√(LC)). The quality of the resonance (how long it will ring when excited) is determined by its Q factor, which is a function of resistance. A true LC circuit would have infinite Q, but all real circuits have some resistance and smaller Q and are usually approximated more accurately by an RLC circuit. An RLC circuit (or LCR circuit) is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The RLC part of the name is due to those letters being the usual electrical symbols for resistance, inductance and capacitance respectively. The circuit forms a harmonic oscillator for current and resonates similarly to an LC circuit. The main difference stemming from the presence of the resistor is that any oscillation induced in the circuit decays over time if it is not kept going by a source. This effect of the resistor is called damping. The presence of the resistance also reduces the peak resonant frequency. Some resistance is unavoidable in real circuits, even if a resistor is not specifically included as a component. A pure LC circuit is an ideal that exists only in theory. There are many applications for this circuit. It is used in many different types of oscillator circuits. An important application is for tuning, such as in radio receivers or television sets, where they are used to select a narrow range of frequencies from the ambient radio waves. In this role the circuit is often referred to as a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter or high-pass filter. The tuning application, for instance, is an example of band-pass filtering.
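As a rough illustration of the relations above (an added sketch, not from the original article), the following code computes the resonant frequency and Q factor of a series RLC circuit; the component values are arbitrary examples.

```python
import math

# Minimal sketch of the relations above for a series RLC circuit.
# The component values are arbitrary illustrative choices.
L = 10e-3   # inductance in henries
C = 100e-9  # capacitance in farads
R = 50.0    # series resistance in ohms

omega_0 = 1.0 / math.sqrt(L * C)   # angular resonant frequency, from wL = 1/(wC)
f_0 = omega_0 / (2.0 * math.pi)    # resonant frequency in hertz
Q = omega_0 * L / R                # quality factor of the series circuit

print(f"f0 = {f_0:.1f} Hz, Q = {Q:.1f}")
# At resonance the inductive and capacitive reactances cancel,
# so the series impedance is just R:
X_L = omega_0 * L
X_C = 1.0 / (omega_0 * C)
print(f"X_L = {X_L:.1f} ohm, X_C = {X_C:.1f} ohm")
```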
The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis. The three circuit elements can be combined in a number of different topologies. All three elements in series or all three elements in parallel are the simplest in concept and the most straightforward to analyse. There are, however, other arrangements, some with practical importance in real circuits. One issue often encountered is the need to take into account inductor resistance. Inductors are typically constructed from coils of wire, the resistance of which is not usually desirable, but it often has a significant effect on the circuit. - Antenna theory - Cavity resonator - Electronic oscillator - Electronic filter - Resonant energy transfer - wireless energy transmission between two resonant coils
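The second-order, damped behaviour described above can be illustrated with a small sketch (an addition, not from the article); it classifies a series RLC circuit as under- or overdamped and shows how resistance pulls the ringing frequency below the undamped resonant frequency. Component values are arbitrary.

```python
import math

# Rough sketch of the damping behaviour described above for a series RLC
# circuit; the component values are illustrative, not from the article.
L, C = 10e-3, 100e-9
omega_0 = 1.0 / math.sqrt(L * C)

for R in (50.0, 400.0, 1200.0):
    zeta = (R / 2.0) * math.sqrt(C / L)   # damping ratio of the series circuit
    if zeta < 1.0:
        # Damped natural frequency: the oscillation decays and rings below omega_0.
        omega_d = omega_0 * math.sqrt(1.0 - zeta**2)
        print(f"R={R:5.0f} ohm: underdamped, zeta={zeta:.2f}, "
              f"rings at {omega_d/(2*math.pi):.0f} Hz vs f0={omega_0/(2*math.pi):.0f} Hz")
    else:
        print(f"R={R:5.0f} ohm: overdamped (zeta={zeta:.2f}), no ringing")
```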
https://en.wikipedia.org/wiki/Electrical_resonance
4.34375
X-ray crystallography is a tool used for identifying the atomic and molecular structure of a crystal, in which the crystalline atoms cause a beam of incident X-rays to diffract into many specific directions. By measuring the angles and intensities of these diffracted beams, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal. From this electron density, the mean positions of the atoms in the crystal can be determined, as well as their chemical bonds, their disorder and various other information. Since many materials can form crystals—such as salts, metals, minerals, semiconductors, as well as various inorganic, organic and biological molecules—X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences among various materials, especially minerals and alloys. The method also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the chief method for characterizing the atomic structure of new materials and in discerning materials that appear similar by other experiments. X-ray crystal structures can also account for unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases. In a single-crystal X-ray diffraction measurement, a crystal is mounted on a goniometer. The goniometer is used to position the crystal at selected orientations. The crystal is illuminated with a finely focused monochromatic beam of X-rays, producing a diffraction pattern of regularly spaced spots known as reflections. The two-dimensional images taken at different orientations are converted into a three-dimensional model of the density of electrons within the crystal using the mathematical method of Fourier transforms, combined with chemical data known for the sample. Poor resolution (fuzziness) or even errors may result if the crystals are too small, or not uniform enough in their internal makeup. X-ray crystallography is related to several other methods for determining atomic structures. Similar diffraction patterns can be produced by scattering electrons or neutrons, which are likewise interpreted by Fourier transformation. If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and (if the sample is not crystallized) small-angle X-ray scattering (SAXS). If the material under investigation is only available in the form of nanocrystalline powders or suffers from poor crystallinity, the methods of electron crystallography can be applied for determining the atomic structure. For all above mentioned X-ray diffraction methods, the scattering is elastic; the scattered X-rays have the same wavelength as the incoming X-ray. By contrast, inelastic X-ray scattering methods are useful in studying excitations of the sample, rather than the distribution of its atoms. 
- 1 History - 2 Contributions to chemistry and material science - 3 Relationship to other scattering techniques - 4 Methods - 4.1 Overview of single-crystal X-ray diffraction - 4.2 Crystallization - 4.3 Data collection - 4.4 Data analysis - 4.5 Deposition of the structure - 5 Diffraction theory - 6 Nobel Prizes for X-ray Crystallography - 7 See also - 8 References - 9 Further reading - 10 External links Early scientific history of crystals and X-rays Crystals have long been admired for their regularity and symmetry, but they were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. Crystal symmetry was first investigated experimentally by Danish scientist Nicolas Steno (1669), who showed that the angles between the faces are the same in every exemplar of a particular type of crystal, and by René Just Haüy (1784), who discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which are still used today for identifying crystal faces. Haüy's study led to the correct idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions that are not necessarily perpendicular. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow. From the available data and physical reasoning, Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive. X-rays were discovered by Wilhelm Röntgen in 1895, just as the studies of crystal symmetry were being concluded. Physicists were initially uncertain of the nature of X-rays, although it was soon suspected (correctly) that they were waves of electromagnetic radiation, in other words, another form of light. At that time, the wave model of light—specifically, the Maxwell theory of electromagnetic radiation—was well accepted among scientists, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Single-slit experiments in the laboratory of Arnold Sommerfeld suggested the wavelength of X-rays was about 1 angstrom. However, X-rays are composed of photons, and thus are not only waves of electromagnetic radiation but also exhibit particle-like properties. The photon concept was introduced by Albert Einstein in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. Therefore, these particle-like properties of X-rays, such as their ionization of gases, caused William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Nevertheless, Bragg's view was not broadly accepted and the observation of X-ray diffraction by Max von Laue in 1912 confirmed for most scientists that X-rays were a form of electromagnetic radiation. 
Crystals are regular arrays of atoms, and X-rays can be considered waves of electromagnetic radiation. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions, determined by Bragg's law: 2d sin θ = nλ. Here d is the spacing between diffracting planes, θ is the incident angle, n is any integer, and λ is the wavelength of the beam. These specific directions appear as spots on the diffraction pattern called reflections. Thus, X-ray diffraction results from an electromagnetic wave (the X-ray) impinging on a regular array of scatterers (the repeating arrangement of atoms within the crystal). X-rays are used to produce the diffraction pattern because their wavelength λ is typically the same order of magnitude (1–100 angstroms) as the spacing d between planes in the crystal. In principle, any wave impinging on a regular array of scatterers produces diffraction, as predicted first by Francesco Maria Grimaldi in 1665. To produce significant diffraction, the spacing between the scatterers and the wavelength of the impinging wave should be similar in size. For illustration, the diffraction of sunlight through a bird's feather was first reported by James Gregory in the later 17th century. The first artificial diffraction gratings for visible light were constructed by David Rittenhouse in 1787, and Joseph von Fraunhofer in 1821. However, visible light has too long a wavelength (typically, 5500 angstroms) to observe diffraction from crystals. Prior to the first X-ray diffraction experiments, the spacings between lattice planes in a crystal were not known with certainty. The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed to observe such small spacings, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914. As described in the mathematical derivation below, the X-ray scattering is determined by the density of electrons within the crystal.
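As a simple illustration of Bragg's law (an added sketch, not part of the article), the following code solves 2d sin θ = nλ for the allowed diffraction orders; the wavelength and d-spacing are approximate, representative values.

```python
import math

# Sketch of Bragg's law, 2*d*sin(theta) = n*lambda, solving for the
# scattering angle of each diffraction order; the spacing and wavelength
# below are approximate illustrative values.
wavelength = 1.54   # X-ray wavelength in angstroms (~Cu K-alpha)
d = 2.82            # spacing between diffracting planes in angstroms

n = 1
while True:
    s = n * wavelength / (2.0 * d)   # sin(theta) for diffraction order n
    if s > 1.0:
        break                        # no further orders can satisfy Bragg's law
    theta = math.degrees(math.asin(s))
    print(f"order n={n}: theta = {theta:.1f} deg, 2theta = {2*theta:.1f} deg")
    n += 1
```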
Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering, the interaction of an electromagnetic ray with a free electron. This model is generally adopted to describe the polarization of the scattered radiation. The intensity of Thomson scattering for one particle with mass m and charge q is proportional to q⁴/m², multiplied by the polarization factor (1 + cos² 2θ)/2. Hence the atomic nuclei, which are much heavier than an electron, contribute negligibly to the scattered X-rays; a proton, for example, scatters roughly 1836² ≈ 3.4 million times more weakly than an electron.
Development from 1912 to 1920
After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the observed scattering with reflections from evenly spaced planes within the crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple and marked by one-dimensional symmetry. However, as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated two- and three-dimensional arrangements of atoms in the unit-cell. The potential of X-ray crystallography for determining the structure of molecules and minerals—then only known vaguely from chemical and hydrodynamic experiments—was realized immediately. The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of the C–C single bond was 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; and pyrochroite Mn(OH)2 and, by extension, brucite Mg(OH)2 in 1919. Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure became known in 1920. The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium.
Cultural and aesthetic importance of X-ray crystallography
In what has been called his scientific autobiography, The Development of X-ray Analysis, Sir William Lawrence Bragg mentioned that he believed the field of crystallography was particularly welcoming to women because the techno-aesthetics of the molecular structures resembled textiles and household objects. Bragg was known to compare crystal formation to "curtains, wallpapers, mosaics, and roses."
In 1951, the Festival Pattern Group at the Festival of Britain hosted a collaborative group of textile manufacturers and experienced crystallographers to design lace and prints based on the X-ray crystallography of insulin, china clay, and hemoglobin. One of the leading scientists of the project was Dr. Helen Megaw (1907–2002), the Assistant Director of Research at the Cavendish Laboratory in Cambridge at the time. Megaw is credited as one of the central figures who took inspiration from crystal diagrams and saw their potential in design. In 2008, the Wellcome Collection in London curated an exhibition on the Festival Pattern Group called "From Atom to Patterns." Contributions to chemistry and material science X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement. Also in the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide. The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host-guest chemistry. In material sciences, many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry, due to recent problems with polymorphs. The major factors affecting the quality of single-crystal structures are the crystal's size and regularity; recrystallization is a commonly used technique to improve these factors in small-molecule crystals. 
The Cambridge Structural Database contains over 500,000 structures; over 99% of these structures were determined by X-ray diffraction. Mineralogy and metallurgy Since the 1920s, X-ray diffraction has been the principal method for determining the arrangement of atoms in minerals and metals. The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy likewise occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes. Early organic and small biological molecules The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was followed by several studies of long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll. X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years. Biological macromolecular crystallography Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, over 86817 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. For comparison, the nearest competing method in terms of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved 9561 chemical structures. Moreover, crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is now used routinely by scientists to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other means to solubilize them in isolation, and such detergents often interfere with crystallization. 
Such membrane proteins are a large component of the genome and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics are used to prevent radiation damage in protein crystals. At the other end of the size scale, even relatively small molecules can challenge the resolving power of X-ray crystallography. The structure assigned in 1991 to the antibiotic diazonamide A (C40H34Cl2N6O6, M = 765.65), isolated from a marine organism, proved to be incorrect by the classical proof of structure: a synthetic sample was not identical to the natural product. The mistake was possible because of the inability of X-ray crystallography to distinguish between the correct -OH / >NH and the interchanged -NH2 / -O- groups in the incorrect structure.
Relationship to other scattering techniques
Elastic vs. inelastic scattering
X-ray crystallography is a form of elastic scattering; the outgoing X-rays have the same energy, and thus same wavelength, as the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the incoming X-ray to the crystal, e.g., by exciting an inner-shell electron to a higher energy level. Such inelastic scattering reduces the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such excitations of matter, but not in determining the distribution of scatterers within the matter, which is the goal of X-ray crystallography. X-rays range in wavelength from 10 to 0.01 nanometers; a typical wavelength used for crystallography is 1 Å (0.1 nm), which is on the scale of covalent chemical bonds and the radius of a single atom. Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle-antiparticle pairs. Therefore, X-rays are the "sweet spot" for wavelength when determining atomic-resolution structures from the scattering of electromagnetic radiation.
Other X-ray techniques
Other forms of elastic X-ray scattering include powder diffraction, small-angle X-ray scattering (SAXS) and several types of X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available. These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (time-resolved crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple atomic arrangements. The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source.
This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used to interpret the back reflection Laue photograph. Electron and neutron diffraction Other particles, such as electrons and neutrons, may be used to produce a diffraction pattern. Although electron, neutron, and X-ray scattering are based on different physical processes, the resulting diffraction patterns are analyzed using the same coherent diffraction imaging techniques. As derived below, the electron density within the crystal and the diffraction patterns are related by a simple mathematical method, the Fourier transform, which allows the density to be calculated relatively easily from the patterns. However, this works only if the scattering is weak, i.e., if the scattered beams are much less intense than the incoming beam. Weakly scattered beams pass through the remainder of the crystal without undergoing a second scattering event. Such re-scattered waves are called "secondary scattering" and hinder the analysis. Any sufficiently thick crystal will produce secondary scattering, but since X-rays interact relatively weakly with the electrons, this is generally not a significant concern. By contrast, electron beams may produce strong secondary scattering even for relatively thin crystals (>100 nm). Since this thickness corresponds to the diameter of many viruses, a promising direction is the electron diffraction of isolated macromolecular assemblies, such as viral capsids and molecular machines, which may be carried out with a cryo-electron microscope. Moreover, the strong interaction of electrons with matter (about 1000 times stronger than for X-rays) allows determination of the atomic structure of extremely small volumes. The field of applications for electron crystallography ranges from bio molecules like membrane proteins over organic thin films to the complex structures of (nanocrystalline) intermetallic compounds and zeolites. Neutron diffraction is an excellent method for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although the new Spallation Neutron Source holds much promise in the near future. Being uncharged, neutrons scatter much more readily from the atomic nuclei rather than from the electrons. Therefore, neutron scattering is very useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in the X-ray diffraction. Neutron scattering also has the remarkable property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy water, D2O. Overview of single-crystal X-ray diffraction The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated. Each spot is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal. 
For single crystals of sufficient purity and regularity, X-ray diffraction data can determine the mean chemical bond lengths and angles to within a few thousandths of an angstrom and to within a few tenths of a degree, respectively. The atoms in a crystal are not static, but oscillate about their mean positions, usually by less than a few tenths of an angstrom. X-ray crystallography allows measuring the size of these oscillations. The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning. In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections. In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database. As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray crystallography becomes less well-resolved (more "fuzzy") for a given number of observed reflections. Two limiting cases of X-ray crystallography—"small-molecule" and "macromolecular" crystallography—are often discerned. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved (more "smeared out"); the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses with hundreds of thousands of atoms. Though normally x-ray crystallography can only be performed if the sample is in crystal form, new research has been done into sampling non-crystalline forms of samples. Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure. 
Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out under conditions that maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded. Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of its component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The crystallographer's goal is to identify solution conditions that favor the development of a single, large crystal, since larger crystals offer improved resolution of the molecule. Consequently, the solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Another approach involves crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, yielding different rates of concentration for a given precipitant/protein mixture. The technique relies on bringing the protein directly into the nucleation zone by mixing protein with the appropriate amount of precipitant to prevent the diffusion of water out of the drop. It is extremely difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized.
Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops that are in the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (in the order of 1 microliter). Several factors are known to inhibit or mar crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Ironically, molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations; although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior. Mounting the crystal The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Nowadays, crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as the noise in the Bragg peaks due to thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. Unfortunately, this pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error. The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer. Small scale can be done on a local X-ray tube source, typically coupled with an image plate detector. 
These have the advantage of being (relatively) inexpensive and easy to maintain, and allow for quick screening and collection of samples. However, the wavelength of the light produced is limited by the anode material, typically copper. Further, intensity is limited by the power applied and the cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily, due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin (~10 µm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and runs with ~2 kW of electron beam power. The more expensive variety has a rotating-anode type source that runs with ~14 kW of e-beam power. X-rays are generally filtered (by use of X-ray filters) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with a clever arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or with large unit cells (over 150 Å).
Synchrotron radiation
Synchrotron radiation is among the brightest light on earth and the single most powerful tool available to X-ray crystallographers. It consists of X-ray beams generated in large machines called synchrotrons. These machines accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a (roughly) circular loop using magnetic fields. Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption. Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena. The largest component of each synchrotron is its electron storage ring. This ring is actually not a perfect circle, but a many-sided polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons' path is bent, they emit bursts of energy in the form of X-rays. Using synchrotron radiation frequently has specific requirements for X-ray crystallography. The intense ionizing radiation can cause radiation damage to samples, particularly macromolecular crystals. Cryo crystallography protects the sample from radiation damage by freezing the crystal at liquid nitrogen temperatures (~100 K). However, synchrotron radiation frequently has the advantage of user-selectable wavelengths, allowing for anomalous scattering experiments that maximize the anomalous signal. This is critical in experiments such as SAD and MAD.
Free-electron lasers
Recently, free-electron lasers have been developed for use in X-ray crystallography. These are the brightest X-ray sources currently available, with the X-rays coming in femtosecond bursts. The intensity of the source is such that atomic-resolution diffraction patterns can be resolved for crystals otherwise too small for collection.
However, the intense light source also destroys the sample, requiring multiple crystals to be shot. As each crystal is randomly oriented in the beam, hundreds of thousands of individual diffraction images must be collected in order to get a complete data-set. This method, serial femtosecond crystallography, has been used in solving the structure of a number of protein crystal structures, sometimes noting differences with equivalent structures collected from synchrotron sources. Recording the reflections When a crystal is mounted and exposed to an intense beam of X-rays, it scatters the X-rays into a pattern of spots or reflections that can be observed on a screen behind the crystal. A similar pattern may be seen by shining a laser pointer at a compact disc. The relative intensities of these spots provide the information to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point. One image of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full Fourier transform. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space. Multiple data sets may be necessary for certain phasing methods. For example, MAD phasing requires that the scattering be recorded at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken. Crystal symmetry, unit cell, and image scaling The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional model of the electron density; the conversion uses the mathematical technique of Fourier transforms, which is explained below. Each spot corresponds to a different type of variation in the electron density; the crystallographer must determine which variation corresponds to which spot (indexing), the relative strengths of the spots in different images (merging and scaling) and how the variations should be combined to yield the total electron density (phasing). Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. 
A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection, and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e., what part of a given reflection was recorded on that image). A full data set may consist of hundreds of separate images taken at different orientations of the crystal. The first step is to merge and scale these various images, that is, to identify which peaks appear in two or more images (merging) and to scale the relative images so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data. The data collected from a diffraction experiment is a reciprocal space representation of the crystal lattice. The position of each diffraction 'spot' is governed by the size and shape of the unit cell, and the inherent symmetry within the crystal. The intensity of each diffraction 'spot' is recorded, and this intensity is proportional to the square of the structure factor amplitude. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem. Initial phase estimates can be obtained in a variety of ways:
- Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections.
- Molecular replacement – if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.
- Anomalous X-ray scattering (MAD or SAD phasing) – the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way.
By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in media rich in selenomethionine, which contains selenium atoms. A MAD experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.
- Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in MAD phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by MAD phasing with selenomethionine.
Model building and phase refinement
Having obtained initial phases, an initial model can be built. This model can be used to refine the phases, leading to an improved model, and so on. Given a model of some atomic positions, these positions and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and a further round of refinement is carried out. This continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as R = Σ ||Fobs| − |Fcalc|| / Σ |Fobs|, where F is the structure factor. A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in angstroms divided by 10; thus, a data-set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and distribution of bond lengths and angles are complementary measures of the model quality. Phase bias is a serious problem in such iterative model building. Omit maps are a common technique used to check for this. It may not be possible to observe every atom of the crystallized molecule – it must be remembered that the resulting electron density is an average of all the molecules within the crystal. In some cases, there is too much residual disorder in those atoms, and the resulting electron density for atoms existing in many conformations is smeared to such an extent that it is no longer detectable in the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed.
For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization.
Deposition of the structure
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein structures). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.
Diffraction theory
The main goal of X-ray crystallography is to determine the density of electrons f(r) throughout the crystal, where r represents the three-dimensional position vector within the crystal. To do this, X-ray scattering is used to collect data about its Fourier transform F(q), which is inverted mathematically to obtain the density defined in real space, using the formula f(r) = (1/(2π)³) ∫ F(q) e^(iq·r) dq, where the integral is taken over all values of q. The three-dimensional real vector q represents a point in reciprocal space, that is, a particular oscillation in the electron density as one moves in the direction in which q points. The length of q corresponds to 2π divided by the wavelength of the oscillation. The corresponding formula for the Fourier transform, F(q) = ∫ f(r) e^(−iq·r) dr, will be used below; here the integral is taken over all possible values of the position vector r within the crystal. The intensities of the reflections observed in X-ray diffraction give us the magnitudes |F(q)| but not the phases φ(q). To obtain the phases, full sets of reflections are collected with known alterations to the scattering, either by modulating the wavelength past a certain absorption edge or by adding strongly scattering (i.e., electron-dense) metal atoms such as mercury. Combining the magnitudes and phases yields the full Fourier transform F(q), which may be inverted to obtain the electron density f(r). Crystals are often idealized as being perfectly periodic. In that ideal case, the atoms are positioned on a perfect lattice, the electron density is perfectly periodic, and the Fourier transform F(q) is zero except when q belongs to the reciprocal lattice (the so-called Bragg peaks). In reality, however, crystals are not perfectly periodic; atoms vibrate about their mean position, and there may be disorder of various types, such as mosaicity, dislocations, various point defects, and heterogeneity in the conformation of crystallized molecules. Therefore, the Bragg peaks have a finite width and there may be significant diffuse scattering, a continuum of scattered X-rays that fall between the Bragg peaks.
Intuitive understanding by Bragg's law
An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and their spacing is denoted by d. William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ.
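The relationship between the electron density f(r), its Fourier transform F(q), and the phase problem described above can be illustrated with a one-dimensional toy calculation (an added sketch, not part of the article); the grid, "atom" positions and electron counts are invented for illustration.

```python
import numpy as np

# Toy 1-D sketch (not from the article) of the relationship above:
# a periodic "electron density" is transformed to structure factors F(q)
# and inverted back to recover the density, using discrete Fourier transforms.
n_grid = 64
density = np.zeros(n_grid)
for position, electrons in [(8, 6.0), (20, 8.0), (40, 16.0)]:  # toy "atoms"
    density[position] = electrons

F = np.fft.fft(density)            # complex structure factors: |F(q)| and phases
amplitudes = np.abs(F)             # what a diffraction experiment measures
phases = np.angle(F)               # what is lost (the phase problem)

# With both amplitudes and phases, the density is recovered exactly:
recovered = np.fft.ifft(amplitudes * np.exp(1j * phases)).real
print(np.allclose(recovered, density))   # True

# With amplitudes alone (phases set to zero) the map is wrong:
wrong = np.fft.ifft(amplitudes).real
print(np.allclose(wrong, density))       # False
```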
A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and angles of the unit-cell, as well as its space group. Since Bragg's law does not interpret the relative intensities of the reflections, however, it is generally inadequate to solve for the arrangement of atoms within the unit-cell; for that, a Fourier transform method must be carried out.
Scattering as a Fourier transform
The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, let it be represented here as a scalar wave. We also ignore the complication of the time dependence of the wave and just concentrate on the wave's spatial dependence. Plane waves can be represented by a wave vector kin, and so the strength of the incoming wave at time t = 0 is given by
A e^{i k_{in} \cdot r},
where A is the amplitude of the wave. At position r within the sample, let there be a density of scatterers f(r); these scatterers should produce a scattered spherical wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume dV about r,
S \, f(r) \, A e^{i k_{in} \cdot r} \, dV,
where S is the proportionality constant.
Let's consider the fraction of scattered waves that leave with an outgoing wave-vector of kout and strike the screen at rscreen. Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors |kin| = |kout|. From the time that the photon is scattered at r until it is absorbed at rscreen, the photon undergoes a change in phase
e^{i k_{out} \cdot (r_{screen} - r)}.
The net radiation arriving at rscreen is the sum of all the scattered waves throughout the crystal, which may be written as a Fourier transform
A_{screen} = S A e^{i k_{out} \cdot r_{screen}} \int f(r) \, e^{-i q \cdot r} \, dV = S A e^{i k_{out} \cdot r_{screen}} F(q),
where q = kout – kin. The measured intensity of the reflection will be the square of this amplitude,
I(q) \propto \left| F(q) \right|^2.
Friedel and Bijvoet mates
For every reflection corresponding to a point q in the reciprocal space, there is another reflection of the same intensity at the opposite point -q. This opposite reflection is known as the Friedel mate of the original reflection. This symmetry results from the mathematical fact that the density of electrons f(r) at a position r is always a real number. As noted above, f(r) is the inverse transform of its Fourier transform F(q); however, such an inverse transform is a complex number in general. To ensure that f(r) is real, the Fourier transform F(q) must be such that the Friedel mates F(−q) and F(q) are complex conjugates of one another. Thus, F(−q) has the same magnitude as F(q) but they have the opposite phase, i.e., φ(−q) = −φ(q). The equality of their magnitudes ensures that the Friedel mates have the same intensity |F|². This symmetry allows one to measure the full Fourier transform from only half the reciprocal space, e.g., by rotating the crystal slightly more than 180° instead of a full 360° revolution. In crystals with significant symmetry, even more reflections may have the same intensity (Bijvoet mates); in such cases, even less of the reciprocal space may need to be measured. In favorable cases of high symmetry, sometimes only 90° or even only 45° of data are required to completely explore the reciprocal space.
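To make the Friedel relation concrete (a hypothetical reflection, not data from the article): if a particular reflection has F(q) = 40 e^{i·60°}, its Friedel mate is the complex conjugate,
F(-q) = \overline{F(q)} = 40 \, e^{-i\,60^\circ},
so both members of the pair have intensity |F|² = 1600 while their phases differ only in sign. Measuring one member of each pair therefore determines the other, which is the basis of the "slightly more than 180°" collection strategy mentioned above; the small violations of this equality introduced by anomalous scattering are exactly what MAD phasing exploits.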
The Friedel-mate constraint can be derived from the definition of the inverse Fourier transform,
f(r) = \frac{1}{(2\pi)^3} \int F(q) \, e^{i q \cdot r} \, dq = \frac{1}{(2\pi)^3} \int \left|F(q)\right| e^{i\phi(q)} \, e^{i q \cdot r} \, dq.
Since Euler's formula states that e^{ix} = cos(x) + i sin(x), the inverse Fourier transform can be separated into a sum of a purely real part and a purely imaginary part,
f(r) = \frac{1}{(2\pi)^3} \int \left|F(q)\right| \cos\big(\phi(q) + q \cdot r\big) \, dq + \frac{i}{(2\pi)^3} \int \left|F(q)\right| \sin\big(\phi(q) + q \cdot r\big) \, dq = I_{cos}(r) + i \, I_{sin}(r).
The function f(r) is real if and only if the second integral I_{sin} is zero for all values of r. In turn, this is true if and only if the above constraint is satisfied, since φ(−q) = −φ(q) makes the contributions of q and −q to the sine integral cancel; I_{sin} = −I_{sin} then implies that I_{sin} = 0.
Ewald's sphere
Each X-ray diffraction image represents only a slice, a spherical slice of reciprocal space, as may be seen by the Ewald sphere construction. Both kout and kin have the same length, due to the elastic scattering, since the wavelength has not changed. Therefore, they may be represented as two radial vectors in a sphere in reciprocal space, which shows the values of q that are sampled in a given diffraction image. Since there is a slight spread in the incoming wavelengths of the incoming X-ray beam, the values of |F(q)| can be measured only for q vectors located between the two spheres corresponding to those radii. Therefore, to obtain a full set of Fourier transform data, it is necessary to rotate the crystal through slightly more than 180°, or sometimes less if sufficient symmetry is present. A full 360° rotation is not needed because of a symmetry intrinsic to the Fourier transforms of real functions (such as the electron density), but "slightly more" than 180° is needed to cover all of reciprocal space within a given resolution because of the curvature of the Ewald sphere. In practice, the crystal is rocked by a small amount (0.25–1°) to incorporate reflections near the boundaries of the spherical Ewald shells.
Patterson function
A well-known result of Fourier transforms is the autocorrelation theorem, which states that the autocorrelation c(r) of a function f(r) has a Fourier transform C(q) that is the squared magnitude of F(q),
C(q) = \left|F(q)\right|^2.
Therefore, the autocorrelation function c(r) of the electron density (also known as the Patterson function) can be computed directly from the reflection intensities, without computing the phases. In principle, this could be used to determine the crystal structure directly; however, it is difficult to realize in practice. The autocorrelation function corresponds to the distribution of vectors between atoms in the crystal; thus, a crystal of N atoms in its unit cell may have N(N−1) peaks in its Patterson function. Given the inevitable errors in measuring the intensities, and the mathematical difficulties of reconstructing atomic positions from the interatomic vectors, this technique is rarely used to solve structures, except for the simplest crystals.
Advantages of a crystal
In principle, an atomic structure could be determined from applying X-ray scattering to non-crystalline samples, even to a single molecule. However, crystals offer a much stronger signal due to their periodicity. A crystalline sample is by definition periodic; a crystal is composed of many unit cells repeated indefinitely in three independent directions. Such periodic systems have a Fourier transform that is concentrated at periodically repeating points in reciprocal space known as Bragg peaks; the Bragg peaks correspond to the reflection spots observed in the diffraction image. Since the amplitude at these reflections grows linearly with the number N of scatterers, the observed intensity of these spots should grow quadratically, like N².
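As a rough numerical illustration of that scaling (the counts are hypothetical): a small crystal containing N = 10^{12} unit cells scattering in phase at a Bragg peak gives an amplitude proportional to N and hence a peak intensity proportional to
N^2 = (10^{12})^2 = 10^{24},
whereas incoherent background scattering grows only in proportion to N, so the Bragg spots stand out by an enormous factor.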
In other words, using a crystal concentrates the weak scattering of the individual unit cells into a much more powerful, coherent reflection that can be observed above the noise. This is an example of constructive interference. In a liquid, powder or amorphous sample, molecules within that sample are in random orientations. Such samples have a continuous Fourier spectrum that uniformly spreads its amplitude thereby reducing the measured signal intensity, as is observed in SAXS. More importantly, the orientational information is lost. Although theoretically possible, it is experimentally difficult to obtain atomic-resolution structures of complicated, asymmetric molecules from such rotationally averaged data. An intermediate case is fiber diffraction in which the subunits are arranged periodically in at least one dimension. Nobel Prizes for X-ray Crystallography |1914||Max von Laue||Physics||"For his discovery of the diffraction of X-rays by crystals", an important step in the development of X-ray spectroscopy.| |1915||William Henry Bragg||Physics||"For their services in the analysis of crystal structure by means of X-rays",| |1915||William Lawrence Bragg||Physics||"For their services in the analysis of crystal structure by means of X-rays",| |1962||Max F. Perutz||Chemistry||"for their studies of the structures of globular proteins"| |1962||John C. Kendrew||Chemistry||"for their studies of the structures of globular proteins"| |1962||James Dewey Watson||Medicine||"For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"| |1962||Francis Harry Compton Crick||Medicine||"For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"| |1962||Maurice Hugh Frederick Wilkins||Medicine||"For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"| |1964||Dorothy Hodgkin||Chemistry||"For her determinations by X-ray techniques of the structures of important biochemical substances"| |1972||Stanford Moore||Chemistry||"For their contribution to the understanding of the connection between chemical structure and catalytic activity of the active centre of the ribonuclease molecule"| |1972||William H. Stein||Chemistry||"For their contribution to the understanding of the connection between chemical structure and catalytic activity of the active centre of the ribonuclease molecule"| |1976||William N. Lipscomb||Chemistry||"For his studies on the structure of boranes illuminating problems of chemical bonding"| |1985||Jerome Karle||Chemistry||"For their outstanding achievements in developing direct methods for the determination of crystal structures"| |1985||Herbert A. Hauptman||Chemistry||"For their outstanding achievements in developing direct methods for the determination of crystal structures"| |1988||Johann Deisenhofer||Chemistry||"For their determination of the three-dimensional structure of a photosynthetic reaction centre"| |1988||Hartmut Michel||Chemistry||"For their determination of the three-dimensional structure of a photosynthetic reaction centre"| |1988||Robert Huber||Chemistry||"For their determination of the three-dimensional structure of a photosynthetic reaction centre"| |1997||John E. 
Walker||Chemistry||"For their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)"| |2003||Roderick MacKinnon||Chemistry||"For discoveries concerning channels in cell membranes [...] for structural and mechanistic studies of ion channels"| |2003||Peter Agre||Chemistry||"For discoveries concerning channels in cell membranes [...] for the discovery of water channels"| |2006||Roger D. Kornberg||Chemistry||"For his studies of the molecular basis of eukaryotic transcription"| |2009||Ada E. Yonath||Chemistry||"For studies of the structure and function of the ribosome"| |2009||Thomas A. Steitz||Chemistry||"For studies of the structure and function of the ribosome"| |2009||Venkatraman Ramakrishnan||Chemistry||"For studies of the structure and function of the ribosome"| |2012||Brian Kobilka||Chemistry||"For studies of G-protein-coupled receptors"| - Beevers–Lipson strip - Bragg diffraction - Bravais lattice - Crystallographic database - Crystallographic point groups - Difference density map - Dorothy Hodgkin - Electron crystallography - Electron diffraction - Energy Dispersive X-Ray Diffraction - Henderson limit - International Year of Crystallography - John Desmond Bernal - John Kendrew - Max Perutz - Max von Laue - Neutron diffraction - Powder diffraction - Rosalind Franklin - Scherrer equation - Small angle X-ray scattering (SAXS) - Structure determination - Ultrafast x-rays - Wide angle X-ray scattering (WAXS) - William Henry Bragg - William Lawrence Bragg - Kepler J (1611). Strena seu de Nive Sexangula. Frankfurt: G. Tampach. ISBN 3-321-00021-0. - Steno N (1669). De solido intra solidum naturaliter contento dissertationis prodromus. Florentiae. - Hessel JFC (1831). Kristallometrie oder Kristallonomie und Kristallographie. Leipzig. - Bravais A (1850). "Mémoire sur les systèmes formés par des points distribués regulièrement sur un plan ou dans l'espace". Journal de l'Ecole Polytechnique 19: 1. - Shafranovskii I I & Belov N V (1962). Paul Ewald, ed. "E. S. Fedorov" (PDF). 50 Years of X-Ray Diffraction (Springer): 351. ISBN 90-277-9029-9. - Schönflies A (1891). Kristallsysteme und Kristallstruktur. Leipzig. - Barlow W (1883). "Probable nature of the internal symmetry of crystals". Nature 29 (738): 186. Bibcode:1883Natur..29..186B. doi:10.1038/029186a0. See also Barlow, William (1883). "Probable Nature of the Internal Symmetry of Crystals". Nature 29 (739): 205. Bibcode:1883Natur..29..205B. doi:10.1038/029205a0. Sohncke, L. (1884). "Probable Nature of the Internal Symmetry of Crystals". Nature 29 (747): 383. Bibcode:1884Natur..29..383S. doi:10.1038/029383a0. Barlow, WM. (1884). "Probable Nature of the Internal Symmetry of Crystals". Nature 29 (748): 404. Bibcode:1884Natur..29..404B. doi:10.1038/029404b0. - Einstein A (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [A Heuristic Model of the Creation and Transformation of Light]. Annalen der Physik (in German) 17 (6): 132. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.. An English translation is available from Wikisource. - Einstein A (1909). "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" [The Development of Our Views on the Composition and Essence of Radiation)]. Physikalische Zeitschrift (in German) 10: 817.. An English translation is available from Wikisource. - Pais A (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press. ISBN 0-19-853907-X. 
- Compton A (1923). "A Quantum Theory of the Scattering of X-rays by Light Elements". Phys. Rev. 21 (5): 483. Bibcode:1923PhRv...21..483C. doi:10.1103/PhysRev.21.483. - Bragg WH (1907). "The nature of Röntgen rays". Transactions of the Royal Society of Science of Australia 31: 94. - Bragg WH (1908). "The nature of γ- and X-rays". Nature 77 (1995): 270. Bibcode:1908Natur..77..270B. doi:10.1038/077270a0. See also Bragg, W. H. (1908). "The Nature of the γ and X-Rays". Nature 78 (2021): 271. Bibcode:1908Natur..78..271B. doi:10.1038/078271a0. Bragg, W. H. (1908). "The Nature of the γ and X-Rays". Nature 78 (2022): 293. Bibcode:1908Natur..78..293B. doi:10.1038/078293d0. Bragg, W. H. (1908). "The Nature of X-Rays". Nature 78 (2035): 665. Bibcode:1908Natur..78R.665B. doi:10.1038/078665b0. - Bragg WH (1910). "The consequences of the corpuscular hypothesis of the γ- and X-rays, and the range of β-rays". Phil. Mag. 20 (117): 385. doi:10.1080/14786441008636917. - Bragg WH (1912). "On the direct or indirect nature of the ionization by X-rays". Phil. Mag. 23 (136): 647. doi:10.1080/14786440408637253. - Friedrich W; Knipping P; von Laue M (1912). "Interferenz-Erscheinungen bei Röntgenstrahlen". Sitzungsberichte der Mathematisch-Physikalischen Classe der Königlich-Bayerischen Akademie der Wissenschaften zu München 1912: 303. - von Laue M (1914). "Concerning the detection of x-ray interferences" (PDF). Nobel Lectures, Physics. 1901–1921. Retrieved 2009-02-18. - Dana ES; Ford WE (1932). A Textbook of Mineralogy (fourth ed.). New York: John Wiley & Sons. p. 28. - Andre Guinier (1952). X-ray Crystallographic Technology. London: Hilger and Watts LTD. p. 271. - Bragg WL (1912). "The Specular Reflexion of X-rays". Nature 90 (2250): 410. Bibcode:1912Natur..90..410B. doi:10.1038/090410b0. - Bragg WL (1913). "The Diffraction of Short Electromagnetic Waves by a Crystal". Proceedings of the Cambridge Philosophical Society 17: 43. - Bragg (1914). "Die Reflexion der Röntgenstrahlen". Jahrbuch der Radioaktivität und Elektronik 11: 350. - Bragg (1913). "The Structure of Some Crystals as Indicated by their Diffraction of X-rays". Proc. R. Soc. Lond. A89 (610): 248–277. Bibcode:1913RSPSA..89..248B. doi:10.1098/rspa.1913.0083. JSTOR 93488. - Bragg WL; James RW; Bosanquet CH (1921). "The Intensity of Reflexion of X-rays by Rock-Salt". Phil. Mag. 41 (243): 309. doi:10.1080/14786442108636225. - Bragg WL; James RW; Bosanquet CH (1921). "The Intensity of Reflexion of X-rays by Rock-Salt. Part II". Phil. Mag. 42 (247): 1. doi:10.1080/14786442108633730. - Bragg WL; James RW; Bosanquet CH (1922). "The Distribution of Electrons around the Nucleus in the Sodium and Chlorine Atoms". Phil. Mag. 44 (261): 433. doi:10.1080/14786440908565188. - Bragg WH; Bragg WL (1913). "The structure of the diamond". Nature 91 (2283): 557. Bibcode:1913Natur..91..557B. doi:10.1038/091557a0. - Bragg WH; Bragg WL (1913). "The structure of the diamond". Proc. R. Soc. Lond. A89 (610): 277. Bibcode:1913RSPSA..89..277B. doi:10.1098/rspa.1913.0084. - Bragg WL (1914). "The Crystalline Structure of Copper". Phil. Mag. 28 (165): 355. doi:10.1080/14786440908635219. - Bragg WL (1914). "The analysis of crystals by the X-ray spectrometer". Proc. R. Soc. Lond. A89 (613): 468. Bibcode:1914RSPSA..89..468B. doi:10.1098/rspa.1914.0015. - Bragg WH (1915). "The structure of the spinel group of crystals". Phil. Mag. 30 (176): 305. doi:10.1080/14786440808635400. - Nishikawa S (1915). "Structure of some crystals of spinel group". Proc. Tokyo Math. Phys. Soc. 8: 199. 
- Vegard L (1916). "Results of Crystal Analysis". Phil. Mag. 32 (187): 65. doi:10.1080/14786441608635544. - Aminoff G (1919). "Crystal Structure of Pyrochroite". Stockholm Geol. Fören. Förh. 41: 407. - Aminoff G (1921). "Über die Struktur des Magnesiumhydroxids". Z. Kristallogr. 56: 505. - Bragg WL (1920). "The crystalline structure of zinc oxide". Phil. Mag. 39 (234): 647. doi:10.1080/14786440608636079. - Debije P, Scherrer P (1916). "Interferenz an regellos orientierten Teilchen im Röntgenlicht I". Physikalische Zeitschrift 17: 277. - Friedrich W (1913). "Eine neue Interferenzerscheinung bei Röntgenstrahlen". Physikalische Zeitschrift 14: 317. - Hull AW (1917). "A New Method of X-ray Crystal Analysis". Phys. Rev. 10 (6): 661. Bibcode:1917PhRv...10..661H. doi:10.1103/PhysRev.10.661. - Bernal JD (1924). "The Structure of Graphite". Proc. R. Soc. Lond. A106 (740): 749–773. JSTOR 94336. - Hassel O; Mack H (1924). "Über die Kristallstruktur des Graphits". Zeitschrift für Physik 25: 317. Bibcode:1924ZPhy...25..317H. doi:10.1007/BF01327534. - Hull AW (1917). "The Crystal Structure of Iron". Phys. Rev. 9: 84. doi:10.1103/PhysRev.9.83. - Hull AW (1917). "The Crystal Structure of Magnesium". PNAS 3 (7): 470. Bibcode:1917PNAS....3..470H. doi:10.1073/pnas.3.7.470. - Black, Susan AW (2005). "Domesticating the Crystal: Sir Lawrence Bragg and the Aesthetics of "X-ray Analysis"". Configurations 13 (2): 257. doi:10.1353/con.2007.0014. - "From Atoms To Patterns". Wellcome Collection. Archived from the original on September 7, 2013. Retrieved 17 October 2013. - Wyckoff RWG; Posnjak E (1921). "The Crystal Structure of Ammonium Chloroplatinate". J. Amer. Chem. Soc. 43 (11): 2292. doi:10.1021/ja01444a002. - Bragg WH (1921). "The structure of organic crystals". Proc. R. Soc. Lond. 34: 33. Bibcode:1921PPSL...34...33B. doi:10.1088/1478-7814/34/1/306. - Lonsdale K (1928). "The structure of the benzene ring". Nature 122 (3082): 810. Bibcode:1928Natur.122..810L. doi:10.1038/122810c0. - Pauling L. The Nature of the Chemical Bond (3rd ed.). Ithaca, NY: Cornell University Press. ISBN 0-8014-0333-2. - Bragg WH (1922). "The crystalline structure of anthracene". Proc. R. Soc. Lond. 35: 167. Bibcode:1922PPSL...35..167B. doi:10.1088/1478-7814/35/1/320. - Powell HM; Ewens RVG (1939). "The crystal structure of iron enneacarbonyl". J. Chem. Soc.: 286. doi:10.1039/jr9390000286. - Bertrand JA, Cotton, Dollase (1963). "The Metal-Metal Bonded, Polynuclear Complex Anion in CsReCl4". J. Amer. Chem. Soc. 85 (9): 1349. doi:10.1021/ja00892a029. - Robinson WT; Fergusson JE; Penfold BR (1963). "Configuration of Anion in CsReCl4". Proceedings of the Chemical Society of London: 116. - Cotton FA, Curtis, Harris, Johnson, Lippard, Mague, Robinson, Wood (1964). "Mononuclear and Polynuclear Chemistry of Rhenium (III): Its Pronounced Homophilicity". Science 145 (3638): 1305–7. Bibcode:1964Sci...145.1305C. doi:10.1126/science.145.3638.1305. PMID 17802015. - Cotton FA, Harris (1965). "The Crystal and Molecular Structure of Dipotassium Octachlorodirhenate(III) Dihydrate". Inorganic Chemistry 4 (3): 330. doi:10.1021/ic50025a015. - Cotton FA (1965). "Metal-Metal Bonding in [Re2X8]2− Ions and Other Metal Atom Clusters". Inorganic Chemistry 4 (3): 334. doi:10.1021/ic50025a016. - Eberhardt WH; Crawford W, Jr.; Lipscomb WN (1954). "The valence structure of the boron hydrides". J. Chem. Phys. 22 (6): 989. Bibcode:1954JChPh..22..989E. doi:10.1063/1.1740320. - Martin TW; Derewenda ZS (1999). "The name is Bond—H bond". 
Nature Structural Biology 6 (5): 403–6. doi:10.1038/8195. PMID 10331860. - Dunitz JD; Orgel LE; Rich A (1956). "The crystal structure of ferrocene". Acta Crystallographica 9 (4): 373. doi:10.1107/S0365110X56001091. - Seiler P; Dunitz JD (1979). "A new interpretation of the disordered crystal structure of ferrocene". Acta Crystallographica B 35 (5): 1068. doi:10.1107/S0567740879005598. - Wunderlich JA; Mellor DP (1954). "A note on the crystal structure of Zeise's salt". Acta Crystallographica 7: 130. doi:10.1107/S0365110X5400028X. - Jarvis JAJ; Kilbourn BT; Owston PG (1970). "A re-determination of the crystal and molecular structure of Zeise's salt, KPtCl3.C2H4.H2O. A correction". Acta Crystallographica B 26 (6): 876. doi:10.1107/S056774087000328X. - Jarvis JAJ; Kilbourn BT; Owston PG (1971). "A re-determination of the crystal and molecular structure of Zeise's salt, KPtCl3.C2H4.H2O". Acta Crystallographica B 27 (2): 366. doi:10.1107/S0567740871002231. - Love RA; Koetzle TF; Williams GJB; Andrews LC; Bau R (1975). "Neutron diffraction study of the structure of Zeise's salt, KPtCl3(C2H4).H2O". Inorganic Chemistry 14 (11): 2653. doi:10.1021/ic50153a012. - Brown, Dwayne (October 30, 2012). "NASA Rover's First Soil Studies Help Fingerprint Martian Minerals". NASA. Retrieved October 31, 2012. - Westgren A; Phragmén G (1925). "X-ray Analysis of the Cu-Zn, Ag-Zn and Au-Zn Alloys". Phil. Mag. 50: 311. - Bradley AJ; Thewlis J (1926). "The structure of γ-Brass". Proc. R. Soc. Lond. 112 (762): 678. Bibcode:1926RSPSA.112..678B. doi:10.1098/rspa.1926.0134. - Hume-Rothery W (1926). "Researches on the Nature, Properties and Conditions of Formation of Intermetallic Compounds (with special Reference to certain Compounds of Tin)". Journal of the Institute of Metals 35: 295. - Bradley AJ; Gregory CH (1927). "The Structure of certain Ternary Alloys". Nature 120 (3027): 678. Bibcode:1927Natur.120..678.. doi:10.1038/120678a0. - Westgren A (1932). "Zur Chemie der Legierungen". Angewandte Chemie 45 (2): 33. doi:10.1002/ange.19320450202. - Bernal JD (1935). "The Electron Theory of Metals". Annual Reports on the Progress of Chemistry 32: 181. - Pauling L (1923). "The Crystal Structure of Magnesium Stannide". J. Amer. Chem. Soc. 45 (12): 2777. doi:10.1021/ja01665a001. - Pauling L (1929). "The Principles Determining the Structure of Complex Ionic Crystals". J. Amer. Chem. Soc. 51 (4): 1010. doi:10.1021/ja01379a006. - Dickinson RG; Raymond AL (1923). "The Crystal Structure of Hexamethylene-Tetramine". J. Amer. Chem. Soc. 45: 22. doi:10.1021/ja01654a003. - Müller A (1923). "The X-ray Investigation of Fatty Acids". Journal of the Chemical Society (London) 123: 2043. doi:10.1039/ct9232302043. - Saville WB; Shearer G (1925). "An X-ray Investigation of Saturated Aliphatic Ketones". Journal of the Chemical Society (London) 127: 591. doi:10.1039/ct9252700591. - Bragg WH (1925). "The Investigation of thin Films by Means of X-rays". Nature 115 (2886): 266. Bibcode:1925Natur.115..266B. doi:10.1038/115266a0. - de Broglie M, Trillat JJ (1925). "Sur l'interprétation physique des spectres X d'acides gras". Comptes rendus hebdomadaires des séances de l'Académie des sciences 180: 1485. - Trillat JJ (1926). "Rayons X et Composeés organiques à longe chaine. Recherches spectrographiques sue leurs structures et leurs orientations". Annales de physique 6: 5. - Caspari WA (1928). "Crystallography of the Aliphatic Dicarboxylic Acids". Journal of the Chemical Society (London) ?: 3235. doi:10.1039/jr9280003235. - Müller A (1928). 
"X-ray Investigation of Long Chain Compounds (n. Hydrocarbons)". Proc. R. Soc. Lond. 120 (785): 437. Bibcode:1928RSPSA.120..437M. doi:10.1098/rspa.1928.0158. - Piper SH (1929). "Some Examples of Information Obtainable from the long Spacings of Fatty Acids". Transactions of the Faraday Society 25: 348. doi:10.1039/tf9292500348. - Müller A (1929). "The Connection between the Zig-Zag Structure of the Hydrocarbon Chain and the Alternation in the Properties of Odd and Even Numbered Chain Compounds". Proc. R. Soc. Lond. 124 (794): 317. Bibcode:1929RSPSA.124..317M. doi:10.1098/rspa.1929.0117. - Robertson JM (1936). "An X-ray Study of the Phthalocyanines, Part II". Journal of the Chemical Society: 1195. - Crowfoot Hodgkin D (1935). "X-ray Single Crystal Photographs of Insulin". Nature 135 (3415): 591. Bibcode:1935Natur.135..591C. doi:10.1038/135591a0. - Kendrew J. C., et al. (1958-03-08). "A Three-Dimensional Model of the Myoglobin Molecule Obtained by X-Ray Analysis". Nature 181 (4610): 662–6. Bibcode:1958Natur.181..662K. doi:10.1038/181662a0. PMID 13517261. - "Table of entries in the PDB, arranged by experimental method". - "PDB Statistics". RCSB Protein Data Bank. Retrieved 2010-02-09. - Scapin G (2006). "Structural biology and drug discovery". Curr. Pharm. Des. 12 (17): 2087–97. doi:10.2174/138161206777585201. PMID 16796557. - Lundstrom K (2006). "Structural genomics for membrane proteins". Cell. Mol. Life Sci. 63 (22): 2597–607. doi:10.1007/s00018-006-6252-y. PMID 17013556. - Lundstrom K (2004). "Structural genomics on membrane proteins: mini review". Comb. Chem. High Throughput Screen. 7 (5): 431–9. doi:10.2174/1386207043328634. PMID 15320710. - Cryogenic (<20 K) helium cooling mitigates radiation damage to protein crystals” Acta Crystallographica Section D. 2007 63 (4) 486-492 - J. Claydon, N. Greeves, S. Warren: Organic Chemistry 2nd edition page 45; Oxford University Press 2012 - Greninger AB (1935). "A back-reflection Laue method for determining crystal orientation". Zeitschrift für Kristallographie 91: 424. - An analogous diffraction pattern may be observed by shining a laser pointer on a compact disc or DVD; the periodic spacing of the CD tracks corresponds to the periodic arrangement of atoms in a crystal. - Miao, J., Charalambous, P., Kirz, J., & Sayre, D. (1999). "Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens." Nature, 400(6742), 342. - Harp, JM; Timm, DE; Bunick, GJ (1998). "Macromolecular crystal annealing: overcoming increased mosaicity associated with cryocrystallography". Acta crystallographica D 54 (Pt 4): 622–8. doi:10.1107/S0907444997019008. PMID 9761858. - Harp, JM; Hanson, BL; Timm, DE; Bunick, GJ (1999). "Macromolecular crystal annealing: evaluation of techniques and variables". Acta Crystallographica D 55 (Pt 7): 1329–34. doi:10.1107/S0907444999005442. PMID 10393299. - Hanson, BL; Harp, JM; Bunick, GJ (2003). "The well-tempered protein crystal: annealing macromolecular crystals". Methods in enzymology. Methods in Enzymology 368: 217–35. doi:10.1016/S0076-6879(03)68012-2. ISBN 978-0-12-182271-2. PMID 14674276. - Geerlof A, et al. (2006). "The impact of protein characterization in structural proteomics". Acta Crystallographica D 62 (Pt 10): 1125–36. doi:10.1107/S0907444906030307. PMID 17001090. - Chernov AA (2003). "Protein crystals and their growth". J. Struct. Biol. 142 (1): 3–21. doi:10.1016/S1047-8477(03)00034-0. PMID 12718915. - Rupp B; Wang J (2004). "Predictive models for protein crystallization". 
Methods 34 (3): 390–407. doi:10.1016/j.ymeth.2004.03.031. PMID 15325656. - Chayen NE (2005). "Methods for separating nucleation and growth in protein crystallization". Prog. Biophys. Mol. Biol. 88 (3): 329–37. doi:10.1016/j.pbiomolbio.2004.07.007. PMID 15652248. - Stock D; Perisic O; Lowe J (2005). "Robotic nanolitre protein crystallisation at the MRC Laboratory of Molecular Biology". Prog Biophys Mol Biol 88 (3): 311–27. doi:10.1016/j.pbiomolbio.2004.07.009. PMID 15652247. - Jeruzalmi D (2006). "First analysis of macromolecular crystals: biochemistry and x-ray diffraction". Methods Mol. Biol. 364: 43–62. doi:10.1385/1-59745-266-1:43. ISBN 1-59745-266-1. PMID 17172760. - Helliwell JR (2005). "Protein crystal perfection and its application". Acta Crystallographica D 61 (Pt 6): 793–8. doi:10.1107/S0907444905001368. PMID 15930642. - Garman, E. F.; Schneider, T. R. (1997). "Macromolecular Cryocrystallography". Journal of Applied Crystallography 30 (3): 211. doi:10.1107/S0021889897002677. - Schlichting, I; Miao, J (2012). "Emerging opportunities in structural biology with X-ray free-electron lasers". Current Opinion in Structural Biology 22 (5): 613–26. doi:10.1016/j.sbi.2012.07.015. PMC 3495068. PMID 22922042. - Neutze, R; Wouts, R; Van Der Spoel, D; Weckert, E; Hajdu, J (2000). "Potential for biomolecular imaging with femtosecond X-ray pulses". Nature 406 (6797): 752–7. Bibcode:2000Natur.406..752N. doi:10.1038/35021099. PMID 10963603. - Liu, W; Wacker, D; Gati, C; Han, G. W.; James, D; Wang, D; Nelson, G; Weierstall, U; Katritch, V; Barty, A; Zatsepin, N. A.; Li, D; Messerschmidt, M; Boutet, S; Williams, G. J.; Koglin, J. E.; Seibert, M. M.; Wang, C; Shah, S. T.; Basu, S; Fromme, R; Kupitz, C; Rendek, K. N.; Grotjohann, I; Fromme, P; Kirian, R. A.; Beyerlein, K. R.; White, T. A.; Chapman, H. N.; et al. (2013). "Serial femtosecond crystallography of G protein-coupled receptors". Science 342 (6165): 1521–4. Bibcode:2013Sci...342.1521L. doi:10.1126/science.1244142. PMC 3902108. PMID 24357322. - Ravelli RB; Garman EF (2006). "Radiation damage in macromolecular cryocrystallography". Curr. Opin. Struct. Biol. 16 (5): 624–9. doi:10.1016/j.sbi.2006.08.001. PMID 16938450. - Powell HR (1999). "The Rossmann Fourier autoindexing algorithm in MOSFLM". Acta Crystallographica D 55 (Pt 10): 1690–5. doi:10.1107/S0907444999009506. PMID 10531518. - Hauptman H (1997). "Phasing methods for protein crystallography". Curr. Opin. Struct. Biol. 7 (5): 672–80. doi:10.1016/S0959-440X(97)80077-2. PMID 9345626. - Usón I; Sheldrick GM (1999). "Advances in direct methods for protein crystallography". Curr. Opin. Struct. Biol. 9 (5): 643–8. doi:10.1016/S0959-440X(99)00020-2. PMID 10508770. - Taylor G (2003). "The phase problem". Acta Crystallographica D 59 (11): 1881. doi:10.1107/S0907444903017815. - Ealick SE (2000). "Advances in multiple wavelength anomalous diffraction crystallography". Current Opinion in Chemical Biology 4 (5): 495–9. doi:10.1016/S1367-5931(00)00122-8. PMID 11006535. - Patterson AL (1935). "A Direct Method for the Determination of the Components of Interatomic Distances in Crystals". Zeitschrift für Kristallographie 90: 517. doi:10.1524/zkri.19126.96.36.1997. - "The Nobel Prize in Physics 1914". Nobel Foundation. Retrieved 2008-10-09. - "The Nobel Prize in Physics 1915". Nobel Foundation. Retrieved 2008-10-09. - "The Nobel Prize in Chemistry 1962". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Physiology or Medicine 1962". Nobel Foundation. Retrieved 2007-07-28. 
- "The Nobel Prize in Chemistry 1964". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 1972". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 1976". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 1985". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 1988". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 1997". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 2003". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 2006". Nobelprize.org. Retrieved 2008-10-06. - "The Nobel Prize in Chemistry 2009". Nobelprize.org. Retrieved 2009-10-07. - "The Nobel Prize in Chemistry 2012". Nobelprize.org. Retrieved 2012-10-13. International Tables for Crystallography - Theo Hahn, ed. (2002). International Tables for Crystallography. Volume A, Space-group Symmetry (5th ed.). Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-7923-6590-9. - Michael G. Rossmann; Eddy Arnold, eds. (2001). International Tables for Crystallography. Volume F, Crystallography of biological molecules. Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-7923-6857-6. - Theo Hahn, ed. (1996). International Tables for Crystallography. Brief Teaching Edition of Volume A, Space-group Symmetry (4th ed.). Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-7923-4252-6. Bound collections of articles - Charles W. Carter; Robert M. Sweet., eds. (1997). Macromolecular Crystallography, Part A (Methods in Enzymology, v. 276). San Diego: Academic Press. ISBN 0-12-182177-3. - Charles W. Carter Jr.; Robert M. Sweet., eds. (1997). Macromolecular Crystallography, Part B (Methods in Enzymology, v. 277). San Diego: Academic Press. ISBN 0-12-182178-1. - A. Ducruix; R. Giegé, eds. (1999). Crystallization of Nucleic Acids and Proteins: A Practical Approach (2nd ed.). Oxford: Oxford University Press. ISBN 0-19-963678-8. - B.E. Warren (1969). X-ray Diffraction. New York. ISBN 0-486-66317-5. - Blow D (2002). Outline of Crystallography for Biologists. Oxford: Oxford University Press. ISBN 0-19-851051-9. - Burns G.; Glazer A M (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 0-12-145761-3. - Clegg W (1998). Crystal Structure Determination (Oxford Chemistry Primer). Oxford: Oxford University Press. ISBN 0-19-855901-1. - Cullity B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 0-534-55396-6. - Drenth J (1999). Principles of Protein X-Ray Crystallography. New York: Springer-Verlag. ISBN 0-387-98587-5. - Giacovazzo C (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 0-19-855578-4. - Glusker JP; Lewis M; Rossi M (1994). Crystal Structure Analysis for Chemists and Biologists. New York: VCH Publishers. ISBN 0-471-18543-4. - Massa W (2004). Crystal Structure Determination. Berlin: Springer. ISBN 3-540-20644-2. - McPherson A (1999). Crystallization of Biological Macromolecules. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press. ISBN 0-87969-617-6. - McPherson A (2003). Introduction to Macromolecular Crystallography. John Wiley & Sons. ISBN 0-471-25122-4. - McRee DE (1993). Practical Protein Crystallography. San Diego: Academic Press. ISBN 0-12-486050-8. - O'Keeffe M; Hyde B G (1996). Crystal Structures; I. Patterns and Symmetry. 
Washington, DC: Mineralogical Society of America, Monograph Series. ISBN 0-939950-40-5. - Rhodes G (2000). Crystallography Made Crystal Clear. San Diego: Academic Press. ISBN 0-12-587072-8., PDF copy of select chapters - Rupp B (2009). Biomolecular Crystallography: Principles, Practice and Application to Structural Biology. New York: Garland Science. ISBN 0-8153-4081-8. - Zachariasen WH (1945). Theory of X-ray Diffraction in Crystals. New York: Dover Publications. LCCN 67026967. Applied computational data analysis - Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 0-19-855577-6. - Bijvoet JM, Burgers WG, Hägg G, eds. (1969). Early Papers on Diffraction of X-rays by Crystals I. Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V. - Bijvoet JM; Burgers WG; Hägg G, eds. (1972). Early Papers on Diffraction of X-rays by Crystals II. Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V. - Bragg W L; Phillips D C & Lipson H (1992). The Development of X-ray Analysis. New York: Dover. ISBN 0-486-67316-2. - Ewald, PP, and numerous crystallographers, eds. (1962). Fifty Years of X-ray Diffraction. Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V. doi:10.1007/978-1-4615-9961-6. ISBN 978-1-4615-9963-0. - Ewald, P. P., editor 50 Years of X-Ray Diffraction (Reprinted in pdf format for the IUCr XVIII Congress, Glasgow, Scotland, International Union of Crystallography). - Friedrich W (1922). "Die Geschichte der Auffindung der Röntgenstrahlinterferenzen". Die Naturwissenschaften 10 (16): 363. Bibcode:1922NW.....10..363F. doi:10.1007/BF01565289. - Lonsdale, K (1949). Crystals and X-rays. New York: D. van Nostrand. - "The Structures of Life". U.S. Department of Health and Human Services. 2007. |Library resources about |Wikibooks has a book on the topic of: Xray Crystallography| - Learning Crystallography - Simple, non technical introduction[dead link] - The Crystallography Collection, video series from the Royal Institution - "Small Molecule Crystalization" (PDF) at Illinois Institute of Technology website - International Union of Crystallography - Crystallography 101 - Interactive structure factor tutorial, demonstrating properties of the diffraction pattern of a 2D crystal. - Picturebook of Fourier Transforms, illustrating the relationship between crystal and diffraction pattern in 2D. - Lecture notes on X-ray crystallography and structure determination - Online lecture on Modern X-ray Scattering Methods for Nanoscale Materials Analysis by Richard J. 
Matyi - Interactive Crystallography Timeline from the Royal Institution - Crystallography Open Database (COD) - Protein Data Bank (PDB) - Nucleic Acid Databank (NDB) - Cambridge Structural Database (CSD) - Inorganic Crystal Structure Database (ICSD) - Biological Macromolecule Crystallization Database (BMCD) - Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules - RNABase - HIC-Up database of PDB ligands - Structural Classification of Proteins database - CATH Protein Structure Classification - List of transmembrane proteins with known 3D structure - Orientations of Proteins in Membranes database - MolProbity structural validation suite - NQ-Flipper (check for unfavorable rotamers of Asn and Gln residues) - DALI server (identifies proteins similar to a given protein)
https://en.wikipedia.org/wiki/X-ray_diffraction
The Chain Rule in Leibniz Notation
We stated the chain rule first in Lagrange notation. Since Leibniz notation lets us be a little more precise about what we're differentiating and what we're differentiating with respect to, we need to also be comfortable with the chain rule in Leibniz notation.
Suppose y is a function of x: y = g(x), and z is a function of y: z = f(y). Then z is a function of x: z = f(y) = f(g(x)). Once again, we have an outside function and an inside function.
The chain rule in Lagrange notation states that (f(g(x)))' = f '(g(x)) · g '(x). In Leibniz notation, we would say
dz/dx = (dz/dy) · (dy/dx).
Since
- dz/dx and (f(g(x)))' both mean "the derivative of z with respect to x,"
- dz/dy and f '(g(x)) = f '(y) both mean "the derivative of z with respect to y," and
- dy/dx and g '(x) both mean "the derivative of y with respect to x,"
the two statements of the chain rule do mean the same thing.
We can remember the chain rule in Leibniz notation because it looks like a nice fraction equation where the dy terms cancel:
dz/dx = (dz/dy) · (dy/dx).
This may or may not be what's actually going on, but it works for our purposes and it's a great memory aid. There are three steps to apply the chain rule in this form:
- determine what y is (this is the same step as determining the inside and outside functions)
- apply the chain rule formula
- put everything in terms of the correct variable (for example, writing y in terms of x)
The chain rule will be especially useful when we discuss related rates, where there will be problems with three different variables that all depend on each other in funny ways.
We can also use the chain rule with different letters, as long as we put the letters in the correct places. If we have y = g(x) and z = f(y) = f(g(x)), the chain rule says
dz/dx = (dz/dy) · (dy/dx).
The inside function, y, is the one that "cancels out": it appears once in a numerator (dy) and once in a denominator (dy). The innermost variable, x, goes only in the denominators (as dx), and the outermost variable, z, goes only in the numerators (as dz). If we switch the letters, we need to make sure they go in the appropriate places; the important thing is that the inside function needs to be the term that cancels out.
Now that we know how to write the chain rule with different letters, we can use it to find derivatives.
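For a quick worked example (our own, not one from the original lesson): let y = g(x) = x² be the inside function and z = f(y) = sin(y) the outside function. Then
dz/dy = cos(y) and dy/dx = 2x,
so the chain rule in Leibniz notation gives
dz/dx = (dz/dy) · (dy/dx) = cos(y) · 2x = 2x cos(x²),
where the last step is the "put everything in terms of the correct variable" step: y is rewritten as x² so the answer is expressed purely in terms of x.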
http://www.shmoop.com/computing-derivatives/chain-rule-leibniz.html
ToUpper converts all lowercase characters to uppercase characters. It causes a copy of the VB.NET String to be made, which is returned. We look at ToUpper and its behavior on non-lowercase characters.
Example. This simple console program shows the result of ToUpper on the input String "abc123". Notice how "abc" are the only characters that are changed; the non-lowercase characters are left the same. Also: characters that are already uppercase are not changed by the ToUpper function. (Based on .NET 4.6.)
VB.NET program that calls ToUpper on a String:
Dim value1 As String = "abc123"
Dim upper1 As String = value1.ToUpper()
' upper1 is now "ABC123"
Uppercased. How can you determine if a String is already uppercase? This is possible by using ToUpper and then comparing the result against the original String: if they are equal, the string was already uppercase. However, this is not the most efficient way. A faster way uses a For-loop and the Char.IsLower function: if any Char is lowercase, the String is not already uppercase, and the check can return False early at that point.
Summary. We explored some aspects of the ToUpper function. This function changes no characters except lowercase characters; digits and uppercase letters (as well as punctuation and spaces) are left the same. Finally, we noted how to test Strings with a For-loop to see if they are uppercase.
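The faster For-loop check mentioned above can be sketched as follows (IsAllUpper is our own helper name, not part of the .NET Framework; only Char.IsLower, For Each and Console.WriteLine come from the framework):
VB.NET program that tests whether a String is already uppercase:
Module Module1
    ' Returns True when the String contains no lowercase characters.
    ' Digits, punctuation and spaces do not count as lowercase,
    ' which matches how ToUpper treats them.
    Function IsAllUpper(ByVal value As String) As Boolean
        For Each c As Char In value
            If Char.IsLower(c) Then
                ' Found a lowercase character, so the String is not uppercase.
                Return False
            End If
        Next
        Return True
    End Function

    Sub Main()
        Console.WriteLine(IsAllUpper("ABC123")) ' True
        Console.WriteLine(IsAllUpper("abc123")) ' False
    End Sub
End Module
Unlike the ToUpper-and-compare approach, this allocates no second String and can stop at the first lowercase character it finds.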
http://www.dotnetperls.com/toupper-vbnet
Large Electron–Positron Collider
|Intersecting Storage Rings||CERN, 1971–1984|
|Super Proton Synchrotron||CERN, 1981–1984|
|ISABELLE||BNL, cancelled in 1983|
|Relativistic Heavy Ion Collider||BNL, 2000–present|
|Superconducting Super Collider||Cancelled in 1993|
|Large Hadron Collider||CERN, 2009–present|
|Very Large Hadron Collider||Theoretical|
The Large Electron–Positron Collider (LEP) was one of the largest particle accelerators ever constructed. It was built at CERN, a multi-national centre for research in nuclear and particle physics near Geneva, Switzerland. LEP collided electrons with positrons at energies that reached 209 GeV. It was a circular collider with a circumference of 27 kilometres built in a tunnel roughly 100 m (300 ft) underground and passing through Switzerland and France. LEP was used from 1989 until 2000. Around 2001 it was dismantled to make way for the LHC, which re-used the LEP tunnel. To date, LEP is the most powerful accelerator of leptons ever built.
LEP was a circular lepton collider – the most powerful such collider ever built. For context, modern colliders can be generally categorized based on their shape (circular or linear) and on what types of particles they accelerate and collide (leptons or hadrons). Leptons are point particles and are relatively light. Because they are point particles, their collisions are clean and amenable to precise measurements; however, because they are light, the collisions cannot reach the same energy that can be achieved with heavier particles. Hadrons are composite particles (composed of quarks) and are relatively heavy; protons, for example, have a mass about 2000 times greater than that of electrons. Because of their higher mass, they can be accelerated to much higher energies, which is the key to directly observing new particles or interactions that are not predicted by currently accepted theories. However, hadron collisions are very messy (there are often lots of unrelated tracks, for example, and it is not straightforward to determine the energy of the collisions), and therefore more challenging to analyze and less amenable to precision measurements.
The shape of the collider is also important. High energy physics colliders collect particles into bunches, and then collide the bunches together. However, only a very tiny fraction of particles in each bunch actually collide. In circular colliders, these bunches travel around a roughly circular shape in opposite directions and therefore can be collided over and over. This enables a high rate of collisions and facilitates collection of a large amount of data, which is important for precision measurements or for observing very rare decays. However, the energy of the bunches is limited due to losses from synchrotron radiation. In linear colliders, particles move in a straight line and therefore do not suffer from synchrotron radiation, but bunches cannot be re-used and it is therefore more challenging to collect large amounts of data.
As a circular lepton collider, LEP was well suited for precision measurements of the electroweak interaction at energies that were not previously achievable. When the LEP collider started operation in August 1989, it accelerated the electrons and positrons to an energy of 45 GeV each, enough for the collisions to produce the Z boson, which has a mass of 91 GeV. The accelerator was upgraded later to enable production of a pair of W bosons, each having a mass of 80 GeV. The LEP collision energy eventually topped out at 209 GeV by the end of its run in 2000.
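A quick check of those numbers (simple arithmetic on the figures quoted above, nothing new): in a symmetric collider the available collision energy is the sum of the two beam energies, so producing a Z boson of mass 91 GeV requires each beam to carry about
\frac{91\ \text{GeV}}{2} \approx 45.5\ \text{GeV},
which is the roughly 45 GeV per beam LEP started with in 1989, while producing a pair of W bosons of mass 80 GeV each requires at least 2 × 80 = 160 GeV, comfortably below the 209 GeV LEP eventually reached.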
At a Lorentz factor (γ = particle energy/rest mass = 104.5 GeV/0.511 MeV) of over 200,000, LEP still holds the particle accelerator speed record, extremely close to the limiting speed of light. At the end of 2000, LEP was shut down and then dismantled in order to make room in the tunnel for the construction of the Large Hadron Collider (LHC).
The Super Proton Synchrotron (an older ring collider) was used to accelerate electrons and positrons to nearly the speed of light; these were then injected into the ring. As in all ring colliders, the LEP ring consisted of many magnets which force the charged particles into a circular trajectory (so that they stay inside the ring), RF accelerators which accelerate the particles with radio frequency waves, and quadrupoles that focus the particle beam (i.e. keep the particles together). The function of the accelerators is to increase the particles' energies so that heavy particles can be created when the particles collide. When the particles are accelerated to maximum energy (and focused into so-called bunches), an electron bunch and a positron bunch are made to collide with each other at one of the collision points, around which the detectors are built. When an electron and a positron collide, they annihilate to a virtual particle, either a photon or a Z boson. The virtual particle almost immediately decays into other elementary particles, which are then detected by huge particle detectors.
The Large Electron–Positron Collider had four detectors, built around the four collision points within underground halls. Each was the size of a small house and was capable of registering the particles by their energy, momentum and charge, thus allowing physicists to infer the particle reaction that had happened and the elementary particles involved. By performing statistical analysis of this data, knowledge about elementary particle physics is gained. The four detectors of LEP were called Aleph, Delphi, Opal, and L3. They were built differently to allow for complementary experiments.
ALEPH stands for Apparatus for LEP PHysics at CERN. The detector determined the mass of the W boson and Z boson to within one part in a thousand. The number of families of particles with light neutrinos was determined to be 2.982 ± 0.013, which is consistent with the Standard Model value of 3. The running of the quantum chromodynamics (QCD) coupling constant was measured at various energies and found to run in accordance with perturbative calculations in QCD.
DELPHI stands for DEtector with Lepton, Photon and Hadron Identification.
OPAL stands for Omni-Purpose Apparatus for LEP. The name of the experiment was a play on words, as some of the founding members of the scientific collaboration which first proposed the design had previously worked on the JADE detector at DESY in Hamburg (jade and opal both being gemstones). OPAL was a general-purpose detector designed to collect a broad range of data. Its data were used to make high-precision measurements of the Z boson lineshape, perform detailed tests of the Standard Model, and place limits on new physics. The detector was dismantled in 2000 to make way for LHC equipment. The lead glass blocks from the OPAL barrel electromagnetic calorimeter are currently being re-used in the large-angle photon veto detectors at the NA62 experiment at CERN.
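As a quick consistency check of the ALEPH number quoted above (plain arithmetic on the quoted figures, no new data): the measured 2.982 ± 0.013 light-neutrino families differ from the Standard Model value of 3 by
\frac{3 - 2.982}{0.013} \approx 1.4
standard deviations, i.e. the measurement is fully compatible with exactly three families.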
The results of the LEP experiments allowed precise values of many quantities of the Standard Model to be obtained, most importantly the masses of the Z boson and the W boson (which were discovered in 1983 at an earlier CERN collider, the Super Proton Synchrotron operating as a proton–antiproton collider), and so confirmed the Model and put it on a solid basis of empirical data.
A not-quite discovery of the Higgs boson
Near the end of the scheduled run time, data suggested tantalizing but inconclusive hints that a Higgs particle with a mass of around 115 GeV, a sort of Holy Grail of high-energy physics at the time, might have been observed. The run-time was extended for a few months, to no avail. The strength of the signal remained at 1.7 standard deviations, which translates to a 91% confidence level, much less than the confidence particle physicists require to claim a discovery, and the hint lay at the extreme upper edge of the energy range accessible with the collected LEP data. There was a proposal to extend the LEP operation by another year in order to seek confirmation, which would have delayed the start of the LHC. However, the decision was made to shut down LEP and proceed with the LHC as planned. For years, this observation was the only hint of a Higgs boson; experiments at the Tevatron up to 2010 were not sensitive enough to confirm or refute it. Beginning in July 2012, however, the ATLAS and CMS experiments at the LHC presented evidence of a Higgs particle with a mass of around 125 GeV, and strongly excluded the 115 GeV region.
- http://sl-div.web.cern.ch/sl-div/history/lep_doc.html CERN 1990 historical reference with much information on the design issues and details of LEP.
- "Welcome to ALEPH". Retrieved 2011-09-14.
- "The OPAL Experiment at LEP 1989–2000". Retrieved 2011-09-14.
- "L3 Homepage". Retrieved 2011-09-14.
- CDF Collaboration, D0 Collaboration, Tevatron New Physics, Higgs Working Group (2010-06-26). "Combined CDF and D0 Upper Limits on Standard Model Higgs-Boson Production with up to 6.7 fb−1 of Data". arXiv:1007.4587 [hep-ex].
- LEP Working Groups
- The LEP Collider from Design to Approval and Commissioning excerpts from the John Adams memorial lecture delivered at CERN on 26 November 1990
- A short but good (though slightly outdated) overview (with nice photographs) about LEP and related subjects can be found in this online booklet of the British Particle Physics and Astronomy Research Council.
https://en.wikipedia.org/wiki/L3_(CERN)
Animals could be used to predict earthquakes because certain species are able to sense chemical changes in groundwater immediately before seismic activity, a study suggests. Experts began investigating the theory after a colony of toads was observed abandoning a pond in L’Aquila, Italy, in 2009, days before the devastating earthquake. They believe that stressed rocks in the Earth’s crust release charged particles before an earthquake, which react with groundwater. Animals living in or near groundwater, such as toads, are highly sensitive to such changes and may therefore notice signs of an impending quake. The researchers, led by Friedemann Freund from Nasa and Rachel Grant from the UK’s Open University, hope their findings will inspire biologists and geologists to work together in improving earthquake prediction. Although not the first example of abnormal animal activity observed prior to earthquakes, the case of the L’Aquila toads was different in that they were being studied in detail at the time. Miss Grant, a biologist, was monitoring the toad colony as part of her PhD project in the days before the Italian earthquake disaster. "It was very dramatic. It went from 96 toads to almost zero over three days. After that, I was contacted by Nasa," she told the BBC. Scientists at the US space agency had been studying the chemical changes that occur when rocks are put under extreme stress and questioned whether they were linked to the toads’ departure. Lab tests have since suggested that changes in the Earth’s crust could have directly affected the chemistry of the pond that the toads were living and breeding in at the time. Dr Freund, a Nasa geophysicist, said that the charged particles, released from stressed rocks, react with the air when they reach the Earth’s surface, converting air molecules into charged particles known as ions. "Positive airborne ions are known in the medical community to cause headaches and nausea in humans and to increase the level of serotonin, a stress hormone, in the blood of animals," said Dr Freund. They can also react with water, turning it into hydrogen peroxide, the scientist added. This chemical chain of events could affect the organic material dissolved in the pond water, turning harmless organic material into substances that are toxic to aquatic animals.
http://www.redicecreations.com/article.php?id=17810
Wildlife conservation is the practice of protecting wild plant and animal species and their habitats. The goal of wildlife conservation is to ensure that nature will be around for future generations to enjoy and also to recognize the importance of wildlife and wilderness for humans and other species alike. Many nations have government agencies and NGOs dedicated to wildlife conservation, which help to implement policies designed to protect wildlife. Numerous independent non-profit organizations also promote various wildlife conservation causes.
According to the National Wildlife Federation, wildlife conservation in the United States gets a majority of its funding through appropriations from the federal budget, annual federal and state grants, and financial efforts from programs such as the Conservation Reserve Program, Wetlands Reserve Program and Wildlife Habitat Incentive Program. Furthermore, a substantial amount of funding comes from the states through the sale of hunting and fishing licenses, game tags, stamps, and excise taxes on the purchase of hunting equipment and ammunition, which together collect around $200 million annually.
Wildlife conservation has become an increasingly important practice due to the negative effects of human activity on wildlife. An endangered species is a species whose population is at risk of extinction, either because very few individuals remain or because the species is threatened by changing environmental conditions or predation.
Major dangers to wildlife
- Habitat loss: Fewer natural wildlife habitat areas remain each year, and the habitat that remains has often been degraded so that it bears little resemblance to the wild areas which existed in the past. Habitat loss—due to destruction, fragmentation and degradation of habitat—is the primary threat to the survival of wildlife in the United States. Human activity can alter an ecosystem to the point that habitats become so degraded that they no longer support native wildlife.
- Climate change: Global warming is making hot days hotter, rainfall and flooding heavier, hurricanes stronger and droughts more severe. This intensification of weather and climate extremes will be the most visible impact of global warming in our everyday lives. It is also causing dangerous changes to the landscape of our world, adding stress to wildlife species and their habitat. Since many types of plants and animals have specific habitat requirements, climate change could cause disastrous loss of wildlife species. A slight drop or rise in average rainfall will translate into large seasonal changes. Hibernating mammals, reptiles, amphibians and insects are harmed and disturbed, and because plants and wildlife are sensitive to moisture, they will be harmed by any change in moisture level.
- Natural phenomena: Floods, earthquakes, volcanoes, lightning and forest fires can also devastate wildlife populations.
- Unregulated hunting and poaching: Unregulated hunting and poaching poses a major threat to wildlife, and mismanagement by forest departments and forest guards aggravates the problem.
- Pollution: Pollutants released into the environment are ingested by a wide variety of organisms. Pesticides and toxic chemicals are widely used, making the environment toxic to certain plants, insects, and rodents.
- Perhaps the largest threat is the growing indifference of the public to wildlife, conservation and environmental issues in general.
Over-exploitation of resources, i.e., the exploitation of wild populations for food, has resulted in population crashes (over-fishing and over-grazing, for example). - Over-exploitation is the overuse of wildlife and plant species by people for food, clothing, pets, medicine, sport and many other purposes. People have always depended on wildlife and plants for food, clothing, medicine, shelter and many other needs, but today we are taking more than the natural world can supply. The danger is that if we take too many individuals of a species from their natural environment, the species may no longer be able to survive. The loss of one species can affect many other species in an ecosystem. The hunting, trapping, collecting and fishing of wildlife at unsustainable levels is nothing new. The passenger pigeon was hunted to extinction early in the last century, and over-hunting nearly caused the extinction of the American bison and several species of whales. - Population: The growing human population is the greatest threat to wildlife. More people on the globe means more consumption of food, water and fuel, and therefore more waste. Every major threat to wildlife described above is directly related to the growth of the human population: the smaller the population, the less the disturbance to wildlife. Today, the Endangered Species Act protects some U.S. species that were in danger from over-exploitation, and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) works to regulate the international trade in wildlife. But there are many species that are not protected from being illegally traded or over-harvested. Government involvement in wildlife conservation In 1972, the Government of India enacted the Wild Life (Protection) Act. Soon after enactment, a trend emerged whereby policymakers enacted regulations on conservation. State and non-state actors began to follow a detailed "framework" to work toward successful conservation. The World Conservation Strategy was developed in 1980 by the International Union for Conservation of Nature and Natural Resources (IUCN) with the advice, cooperation and financial assistance of the United Nations Environment Programme (UNEP) and the World Wildlife Fund, and in collaboration with the Food and Agriculture Organization of the United Nations (FAO) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). The strategy aims to "provide an intellectual framework and practical guidance for conservation actions." This thorough guidebook covers everything from the intended "users" of the strategy to its priorities. It even includes a map section identifying areas of heavy seafood consumption that are therefore endangered by overfishing. The main sections are as follows: - The objectives of conservation and requirements for their achievement: - Maintenance of essential ecological processes and life-support systems. - Preservation of genetic diversity, that is, of flora and fauna. - Sustainable utilization of species and ecosystems. - Priorities for national action: - A framework for national and sub-national conservation strategies. - Policy making and the integration of conservation and development. - Environmental planning and rational use allocation. - Priorities for international action: - International action: law and assistance. - Tropical forests and dry lands. 
- A global programme for the protection of genetic resource areas. - Tropical forests - Deserts and areas subject to desertification. As major development agencies became discouraged with the public sector of environmental conservation in the late 1980s, these agencies began to lean their support towards the “private sector” or non-government organizations (NGOs). In a World Bank Discussion Paper it is made apparent that “the explosive emergence of nongovernmental organizations” was widely known to government policy makers. Seeing this rise in NGO support, the U.S. Congress made amendments to the Foreign Assistance Act in 1979 and 1986 “earmarking U.S. Agency for International Development (USAID) funds for biodiversity”. From 1990 moving through recent years environmental conservation in the NGO sector has become increasingly more focused on the political and economic impact of USAID given towards the “Environment and Natural Resources”. After the terror attacks on the World Trade Centers on September 11, 2001 and the start of former President Bush’s War on Terror, maintaining and improving the quality of the environment and natural resources became a “priority” to “prevent international tensions” according to the Legislation on Foreign Relations Through 2002 and section 117 of the 1961 Foreign Assistance Act. Furthermore, in 2002 U.S. Congress modified the section on endangered species of the previously amended Foreign Assistance Act. Active non-government organizations Many NGOs exist to actively promote, or be involved with wildlife conservation: - The Nature Conservancy is a US charitable environmental organization that works to preserve the plants, animals, and natural communities that represent the diversity of life on Earth by protecting the lands and waters they need to survive. - World Wide Fund for Nature (WWF) is an international non-governmental organization working on issues regarding the conservation, research and restoration of the environment, formerly named the World Wildlife Fund, which remains its official name in Canada and the United States. It is the world's largest independent conservation organization with over 5 million supporters worldwide, working in more than 90 countries, supporting around 1300 conservation and environmental projects around the world. It is a charity, with approximately 60% of its funding coming from voluntary donations by private individuals. 45% of the fund's income comes from the Netherlands, the United Kingdom and the United States. - Wild-life Conservation Society - Audubon Society - Traffic (conservation programme) - Born Free Foundation - WildEarth Guardians - Wildlife farming - Conservation biology - Conservation movement - Wildlife management - Conservation of plants and animals - "Cooperative Alliance for Refuge Enhancement". CARE. Retrieved 1 June 2012. - "Wildlife Conservation". Conservation and Wildlife. Retrieved 1 June 2012. - "Conservation Funding - National Wildlife Federation". www.nwf.org. Retrieved 2016-01-21. - "Wildlife and the Farm Bill - National Wildlife Federation". www.nwf.org. Retrieved 2016-01-21. - Service, U.S. Fish and Wildlife. "Fish and Wildlife Service". www.fws.gov. Retrieved 2016-01-21. - McCallum, M.L. 2010. Future climate change spells catastrophe for Blanchard's Cricket Frog (Acris blanchardi). Acta Herpetologica 5:119 - 130. - McCallum, M.L., J.L. McCallum, and S.E. Trauth. 2009. Predicted climate change may spark box turtle declines. Amphibia-Reptilia 30:259 - 264. - McCallum, M.L. and G.W. Bury. 2013. 
Google search patterns suggest declining interest in the environment. Biodiversity and Conservation DOI: 10.1007/s10531-013-0476-6 - "World Conservation Strategy" (PDF). Retrieved 2011-05-01. - Meyer, Carrie A. (1993). "Environmental NGOs in Ecuador: An Economic Analysis of Institutional Change". The Journal of Developing Areas 27 (2): 191–210. - "The Foreign Assistance Act of 1961, as amended" (PDF). Retrieved 2011-05-01. - "About Us - Learn More About The Nature Conservancy". Nature.org. 2011-02-23. Retrieved 2011-05-01. - "WWF in Brief". World Wildlife Fund. Retrieved 2011-05-01.
https://en.wikipedia.org/wiki/Wildlife_conservation
4.25
Quantitative Methods - Confidence Intervals While a normally-distributed random variable can have many potential outcomes, the shape of its distribution gives us confidence that the vast majority of these outcomes will fall relatively close to its mean. In fact, we can quantify just how confident we are. By using confidence intervals - ranges that are a function of the properties of a normal bell-shaped curve - we can define ranges of probabilities. The diagram below has a number of percentages - these numbers (which are approximations and rounded off) indicate the probability that a random outcome will fall into that particular section below the curve. In other words, by assuming normal distribution, we are 68% confident that a variable will fall within one standard deviation of the mean. Within two standard deviations, our confidence grows to 95%. Within three standard deviations, 99%. Take an example of a distribution of returns of a security with a mean of 10% and a standard deviation of 5%: - 68% of the returns will be between 5% and 15% (within 1 standard deviation, 10 ± 5). - 95% of the returns will be between 0% and 20% (within 2 std. devs., 10 ± 2*5). - 99% of the returns will be between -5% and 25% (within 3 std. devs., 10 ± 3*5). Standard Normal Distribution The standard normal distribution is defined as a normal distribution with mean = 0 and standard deviation = 1. Probability numbers derived from the standard normal distribution are used to help standardize a random variable - i.e. express that number in terms of how many standard deviations it is away from its mean. Standardizing a random variable X is done by subtracting the mean value (μ) from X, and then dividing the result by the standard deviation (σ). The result is a standard normal random variable, which is denoted by the letter Z. Z = (X - μ)/σ For example, if a distribution has a mean of 10 and a standard deviation of 5, and a random observation X is -2, we would standardize our random variable with the equation for Z. Z = (X - μ)/σ = (-2 - 10)/5 = -12/5 = -2.4 The standard normal random variable Z tells us how many standard deviations the observation is from the mean. In this case, an observation of -2 lies 2.4 standard deviations below the mean of 10. You are considering an investment portfolio with an expected return of 10% and a standard deviation of 8%. The portfolio's returns are normally distributed. What is the probability of earning a return less than 2%? Z = (X - μ)/σ = (2 - 10)/8 = -8/8 = -1.0 Next, one would often consult a Z-table of cumulative probabilities for a standard normal distribution in order to determine the probability. In this case, for Z = -1, P(Z ≤ -1) = 0.158655, or about 16%. Therefore, there is a 16% probability of earning a return of less than 2%. Keep in mind that your upcoming exam will not provide Z-tables, so how would you solve this problem on test day? The answer is that you need to remember that 68% of observations fall within ± 1 standard deviation of the mean on a normal curve, which means that 32% are not within one standard deviation. This question essentially asked for the probability of falling more than one standard deviation below the mean, or 32%/2 = 16%. Study the earlier diagram that shows specific percentages for certain standard deviation intervals on a normal curve - in particular, remember 68% for ± one standard deviation away, and remember 95% for ± two away. 
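As a quick numerical check of the worked example above, the short Python sketch below (an illustration added here, not part of the original study text) standardizes the 10%/8% portfolio return and looks up the cumulative probability with the standard library's NormalDist class.

```python
# Illustrative sketch: standardizing a return and finding the probability
# of falling below a threshold, using only the Python standard library.
from statistics import NormalDist

mu, sigma = 0.10, 0.08          # portfolio mean return 10%, standard deviation 8%
threshold = 0.02                # we want P(return < 2%)

z = (threshold - mu) / sigma    # standardize: Z = (X - mu) / sigma
print(f"Z = {z:.2f}")           # -1.00

# Cumulative probability under the standard normal curve
p = NormalDist(0, 1).cdf(z)
print(f"P(return < 2%) = {p:.4f}")   # about 0.1587, i.e. roughly 16%
```

On exam day the same answer comes from the 68% rule: (100% - 68%) / 2 = 16%.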
Shortfall risk is essentially a refinement of mean-variance analysis, that is, the idea that one must focus on both risk and return as opposed to simply the return. Risk is typically measured by standard deviation, which measures all deviations - i.e. both positive and negative. In other words, positive deviations are treated as if they were equal to negative deviations. In the real world, of course, negative surprises are far more important to quantify and predict with clarity if one is to accurately define risk. Two mutual funds could have the same risk if measured by standard deviation, but if one of those funds tends to have more extreme negative outcomes, while the other has a high standard deviation due to a preponderance of extreme positive surprises, then the actual risk profiles of those funds would be quite different. Shortfall risk defines a minimum acceptable level, and then focuses on whether a portfolio will fall below that level over a given time period. Roy's Safety-First Ratio An optimal portfolio is one that minimizes the probability that the portfolio's return will fall below a threshold level. In probability notation, if RP is the return on the portfolio, and RL is the threshold (the minimum acceptable return), then the portfolio for which P(RP < RL) is minimized will be the optimal portfolio according to Roy's safety-first criterion. The safety-first ratio is computed as: SFRatio = (E(RP) - RL)/σP Let's say our minimum threshold is -2%, and we have the following expectations for portfolios A and B: | |Portfolio A|Portfolio B| |Expected Annual Return|8%|12%| |Standard Deviation of Return|10%|16%| The SFRatio for portfolio A is (8 - (-2))/10 = 1.0. The SFRatio for portfolio B is (12 - (-2))/16 = 0.875. In other words, the minimum threshold is one standard deviation away in Portfolio A, and only 0.875 standard deviations away in Portfolio B, so by safety-first rules we opt for Portfolio A. A lognormal distribution has two distinct properties: it is always positive (bounded on the left by zero), and it is skewed to the right. Prices for stocks and many other financial assets (anything which by definition can never be negative) are often found to be lognormally distributed. Also, the lognormal and normal distributions are related: if a random variable X is lognormally distributed, then its natural log, ln(X), is normally distributed. (Thus the term "lognormal" - the log is normal.) Figure 2.11 below demonstrates a typical lognormal distribution. 
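To make the safety-first comparison concrete, here is a minimal Python sketch, assuming normally distributed returns and using the portfolio figures from the example above; the helper name sf_ratio is introduced here for illustration, and the shortfall probabilities are an added cross-check rather than something computed in the original text.

```python
# Illustrative sketch of Roy's safety-first criterion using the figures above.
# Assumes returns are normally distributed; not part of the original text.
from statistics import NormalDist

def sf_ratio(expected_return, std_dev, threshold):
    """Roy's safety-first ratio: (E(Rp) - RL) / sigma_p."""
    return (expected_return - threshold) / std_dev

portfolios = {"A": (0.08, 0.10), "B": (0.12, 0.16)}   # (expected return, std dev)
threshold = -0.02                                      # minimum acceptable return RL

for name, (er, sd) in portfolios.items():
    ratio = sf_ratio(er, sd, threshold)
    shortfall_prob = NormalDist(er, sd).cdf(threshold)  # P(Rp < RL) under normality
    print(f"Portfolio {name}: SFRatio = {ratio:.3f}, P(shortfall) = {shortfall_prob:.3f}")

# Portfolio A: SFRatio = 1.000, P(shortfall) ~ 0.159
# Portfolio B: SFRatio = 0.875, P(shortfall) ~ 0.191  -> choose A (higher SFRatio)
```

The higher safety-first ratio corresponds to the lower probability of falling below the threshold, which is why Portfolio A is preferred.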
http://www.investopedia.com/exam-guide/cfa-level-1/quantitative-methods/confidence-intervals.asp
4.0625
As part of the global carbon cycle, underwater volcanoes emit between 66 and 97 million tonnes of CO2 per year. However, this is balanced by the carbon sink provided by newly formed ocean floor lava. (NOAA) The carbon cycle describes the exchange of carbon among Earth’s biosphere (life), atmosphere (air), hydrosphere (water), pedosphere (soil) and lithosphere (rocks, crust, and mantle). It is one of several biogeochemical cycles on Earth that play a key role in making life possible and in regulating many planetary systems. Exchanges between these spheres take many forms. Atmospheric carbon dioxide can readily dissolve into surface waters, and both atmospheric carbon dioxide and carbon dioxide dissolved in the ocean are easily and frequently taken up by living organisms. Transfer of carbon into the lithosphere takes much longer. Carbon in the lithosphere is also less mobile, often remaining stored there for millions of years, but large amounts can be released in an instant during a volcanic eruption. Human use of fossil fuels and other activities is also releasing an increasing amount of the carbon stored in hydrocarbons back to the atmosphere as carbon dioxide. Some organisms—such as photosynthetic plants and microbes and chemosynthetic bacteria—are able to take inorganic carbon, primarily in the form of carbon dioxide, and combine it with water to form simple carbohydrates (sugars). These carbohydrates formed by photosynthesis or chemosynthesis serve as the basic building blocks of all organic (carbon-containing) molecules that are necessary for life. Carbon dioxide dissolved in water is likewise readily incorporated into the marine food chain and into the carbonate minerals that make up the shells or skeletons of many marine organisms.
https://www.whoi.edu/page.do?pid=83340
4.21875
Zachary Taylor, Wright Center, Teachers' Domain Learn more about Teaching Climate Literacy and Energy Awareness» See how this Static Visualization supports the Next Generation Science Standards» Middle School: 3 Cross Cutting Concepts High School: 2 Performance Expectations, 1 Disciplinary Core Idea, 4 Cross Cutting Concepts, 4 Science and Engineering Practices About Teaching Climate Literacy Other materials addressing 4e Notes From Our Reviewers The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below or learn more about how CLEAN reviews teaching materials Teaching Tips | Science | Pedagogy | - The graphs in the analysis section can be used in other activities. The comparison of the methane, calcium, and insolation graphs to the temperature graph is particularly useful. - Educator may want to explain oxygen isotope ratios before doing this activity. About the Science - Comment from expert scientist: This exercise presents a nice summary of how and why ice cores are drilled and presents some of the results. It provides insights into how scientists understand past climate and shows data that puts current climate change in perspective. About the Pedagogy - A background essay and discussion questions are provided with the resource. - Resource has a nice set of graphs that can be overlaid with a temperature curve for analysis and comparison. - Students can use the data and overlays to draw and defend their own conclusions. - This resource engages students in using scientific data. See other data-rich activities Next Generation Science Standards See how this Static Visualization supports: Performance Expectations: 2 HS-ESS2-2: Analyze geoscience data to make the claim that one change to Earth's surface can create feedbacks that cause changes to other Earth systems. HS-ESS2-4: Use a model to describe how variations in the flow of energy into and out of Earth’s systems result in changes in climate. Disciplinary Core Ideas: 1 HS-ESS2.A3:The geological record shows that changes to global and regional climate can be caused by interactions among changes in the sun’s energy output or Earth’s orbit, tectonic events, ocean circulation, volcanic activity, glaciers, vegetation, and human activities. These changes can occur on a variety of time scales from sudden (e.g., volcanic ash clouds) to intermediate (ice ages) to very long-term tectonic cycles. Cross Cutting Concepts: 4 HS-C1.5:Empirical evidence is needed to identify patterns. HS-C2.1:Empirical evidence is required to differentiate between cause and correlation and make claims about specific causes and effects. HS-C2.2:Cause and effect relationships can be suggested and predicted for complex natural and human designed systems by examining what is known about smaller scale mechanisms within the system. HS-C2.4:Changes in systems may have various causes that may not have equal effects. Science and Engineering Practices: 4 HS-P3.4:Select appropriate tools to collect, record, analyze, and evaluate data. HS-P4.2:Apply concepts of statistics and probability (including determining function fits to data, slope, intercept, and correlation coefficient for linear fits) to scientific and engineering questions and problems, using digital tools when feasible. 
HS-P4.3:Consider limitations of data analysis (e.g., measurement error, sample selection) when analyzing and interpreting data.
HS-P4.4:Compare and contrast various types of data sets (e.g., self-generated, archival) to examine consistency of measurements and observations.
http://cleanet.org/resources/43452.html
4.0625
The Cretaceous–Paleogene (K–Pg) boundary,[a] formerly known as the Cretaceous–Tertiary (K–T) boundary,[b] is a geological signature, usually a thin band. It defines the end of the Mesozoic Era, and is usually estimated at around 66 Ma (million years ago), with more specific radioisotope dating yielding an age of 66.043 ± 0.011 Ma. K is the traditional abbreviation for the Cretaceous Period, and Pg is the abbreviation for the Paleogene Period. The boundary marks the end of the Cretaceous Period, the last period of the Mesozoic Era, and marks the beginning of the Paleogene Period of the Cenozoic Era. The boundary is associated with the Cretaceous–Paleogene extinction event, a mass extinction which is considered to be the demise of the non-avian dinosaurs in addition to a majority of the world's Mesozoic species. Alvarez impact hypothesis In 1980, a team of researchers consisting of Nobel prize-winning physicist Luis Alvarez, his son, geologist Walter Alvarez, and chemists Frank Asaro and Helen Michel discovered that sedimentary layers found all over the world at the K–T boundary contain a concentration of iridium many times greater than normal (30 times the average crustal content in Italy and 160 times at Stevns on the Danish island of Zealand). Iridium is extremely rare in the earth's crust because it is a siderophile element, and therefore most of it sank with iron into the earth's core during planetary differentiation. As iridium remains abundant in most asteroids and comets, the Alvarez team suggested that an asteroid struck the earth at the time of the K–T boundary. There were other earlier speculations on the possibility of an impact event, but no evidence had been uncovered at that time. The evidence for the Alvarez impact theory is supported by chondritic meteorites and asteroids which have an iridium concentration of ~455 parts per billion, much higher than ~0.3 parts per billion typical of the Earth's crust. Chromium isotopic anomalies found in Cretaceous–Paleogene boundary sediments are similar to those of an asteroid or a comet composed of carbonaceous chondrites. Shocked quartz granules and tektite glass spherules, indicative of an impact event, are also common in the K–T boundary, especially in deposits from around the Caribbean. All of these constituents are embedded in a layer of clay, which the Alvarez team interpreted as the debris spread all over the world by the impact. Using estimates of the total amount of iridium in the K–T layer, and assuming that the asteroid contained the normal percentage of iridium found in chondrites, the Alvarez team went on to calculate the size of the asteroid. The answer was about 10 km (6.2 mi) in diameter, about the size of Manhattan. Such a large impact would have had approximately the energy of 100 trillion tons of TNT, or about 2 million times greater than the most powerful thermonuclear bomb ever tested. One of the consequences of such an impact is a dust cloud which would block sunlight and inhibit photosynthesis for a few years. This would account for the extinction of plants and phytoplankton and of organisms dependent on them (including predatory animals as well as herbivores). However, small creatures whose food chains were based on detritus might have still had a reasonable chance of survival. Vast amounts of sulfuric acid aerosols were ejected into the stratosphere as a result of the impact, leading to a 10–20% reduction in sunlight reaching the Earth's surface. 
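As a rough cross-check of the impact-energy figure quoted above, the following sketch estimates the kinetic energy of a 10 km impactor; the density and impact velocity are assumed, illustrative values (they are not given in the article), so the result should be read only as an order-of-magnitude comparison.

```python
# Back-of-envelope check of the impact-energy figure quoted above.
# The 10 km diameter comes from the text; the density and velocity below are
# typical assumed values for a stony asteroid, not figures from the article.
import math

diameter_m = 10_000            # ~10 km, from the Alvarez estimate
density = 3000                 # kg/m^3, assumed stony-asteroid density
velocity = 20_000              # m/s, assumed typical impact velocity

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius**3        # ~1.6e15 kg
kinetic_energy = 0.5 * mass * velocity**2             # joules

TNT_TON = 4.184e9                                      # joules per ton of TNT
print(f"Energy ~ {kinetic_energy / TNT_TON:.1e} tons of TNT")
# ~7.5e13 tons, i.e. tens of trillions of tons -- the same order of magnitude
# as the ~100 trillion tons quoted in the text.
```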
It would have taken at least ten years for those aerosols to dissipate. Global firestorms may have resulted as incendiary fragments from the blast fell back to Earth. Analyses of fluid inclusions in ancient amber suggest that the oxygen content of the atmosphere was very high (30–35%) during the late Cretaceous. This high O 2 level would have supported intense combustion. The level of atmospheric O 2 plummeted in the early Paleogene Period. If widespread fires occurred, they would have increased the CO 2 content of the atmosphere and caused a temporary greenhouse effect once the dust cloud settled, and this would have exterminated the most vulnerable survivors of the "long winter". The impact may also have produced acid rain, depending on what type of rock the asteroid struck. However, recent research suggests this effect was relatively minor. Chemical buffers would have limited the changes, and the survival of animals vulnerable to acid rain effects (such as frogs) indicates that this was not a major contributor to extinction. Impact theories can only explain very rapid extinctions, since the dust clouds and possible sulphuric aerosols would wash out of the atmosphere in a fairly short time—possibly under ten years. When it was originally proposed, one issue with the "Alvarez hypothesis" (as it came to be known) had been that no documented crater matched the event. This was not a lethal blow to the theory; while the crater resulting from the impact would have been larger than 250 km (160 mi) in diameter, Earth's geological processes hide or destroy craters over time. Subsequent research, however, identified the Chicxulub Crater buried under Chicxulub on the coast of Yucatan, Mexico as the impact crater which matched the Alvarez hypothesis dating. Identified in 1990 based on the work of Glen Penfield done in 1978, this crater is oval, with an average diameter of about 180 km (110 mi), about the size calculated by the Alvarez team. Gerta Keller, however, suggests that the Chicxulub impact occurred approximately 300,000 years before the K–T boundary. This dating is based on evidence collected in Northeast Mexico, including stratigraphic layers bearing impact spherules, the earliest of which is approximately 10 m (33 ft) below the K–T boundary. According to Keller's interpretation, the interval between the oldest spherule layer and the K-T boundary represents about 300,000 years of long-term sedimentation. However, Schulte and other 40 co-authors reject that the spherule is slumped from the upper spherule layer that lies on the K-T boundary. Also, Keller's conclusion is unsupported by radioisotope dating and deep-sea cores. The shape and location of the crater indicate further causes of devastation in addition to the dust cloud. The asteroid landed right on the coast and would have caused gigantic tsunamis, for which evidence has been found all around the coast of the Caribbean and eastern United States—marine sand in locations which were then inland, and vegetation debris and terrestrial rocks in marine sediments dated to the time of the impact. The asteroid landed in a bed of anhydrite (CaSO 4) or gypsum (CaSO4·2(H2O)), which would have ejected large quantities of sulfur trioxide SO 3 that combined with water to produce a sulfuric acid aerosol. 
This would have further reduced the sunlight reaching the Earth's surface and then over several days, precipitated planet-wide as acid rain, killing vegetation, plankton and organisms which build shells from calcium carbonate (coccolithophorids and molluscs). Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 Ma and lasted for over 2 million years. However, there is evidence that two-thirds of the Deccan Traps were created within 1 million years about 65.5 Ma, so these eruptions would have caused a fairly rapid extinction, possibly a period of thousands of years, but still a longer period than what would be expected from a single impact event. The Deccan Traps could have caused extinction through several mechanisms, including the release of dust and sulphuric aerosols into the air which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, Deccan Trap volcanism might have resulted in carbon dioxide emissions which would have increased the greenhouse effect when the dust and aerosols cleared from the atmosphere. In the years when the Deccan Traps theory was linked to a slower extinction, Luis Alvarez (who died in 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. However, even Walter Alvarez has acknowledged that there were other major changes on Earth even before the impact, such as a drop in sea level and massive volcanic eruptions that produced the Indian Deccan Traps, and these may have contributed to the extinctions. Multiple impact event Several other craters also appear to have been formed about the time of the K–T boundary. This suggests the possibility of nearly simultaneous multiple impacts, perhaps from a fragmented asteroidal object, similar to the Shoemaker-Levy 9 cometary impact with Jupiter. Among these are the Boltysh crater, a 24-km (15-mi) diameter impact crater in Ukraine (65.17 ± 0.64 Ma); and the Silverpit crater, a 20-km (12-mi) diameter impact crater in the North Sea (60–65 Ma). Any other craters that might have formed in the Tethys Ocean would have been obscured by erosion and tectonic events such as the relentless northward drift of Africa and India. A very large structure in the sea floor off the west coast of India has recently been interpreted as a crater by some researchers. The potential Shiva crater, 450–600 km (280–370 mi) in diameter, would substantially exceed Chicxulub in size and has also been dated at about 66 mya, an age consistent with the K–T boundary. An impact at this site could have been the triggering event for the nearby Deccan Traps. However, this feature has not yet been accepted by the geologic community as an impact crater and may just be a sinkhole depression caused by salt withdrawal. Maastrichtian marine regression Clear evidence exists that sea levels fell in the final stage of the Cretaceous by more than at any other time in the Mesozoic era. In some Maastrichtian stage rock layers from various parts of the world, the later ones are terrestrial; earlier ones represent shorelines and the earliest represent seabeds. 
These layers do not show the tilting and distortion associated with mountain building; therefore, the likeliest explanation is a regression, that is, a buildout of sediment, but not necessarily a drop in sea level. No direct evidence exists for the cause of the regression, but the explanation which is currently accepted as the most likely is that the mid-ocean ridges became less active and therefore sank under their own weight as sediment from uplifted orogenic belts filled in structural basins. A severe regression would have greatly reduced the continental shelf area, which is the most species-rich part of the sea, and therefore could have been enough to cause a marine mass extinction. However, research concludes that this change would have been insufficient to cause the observed level of ammonite extinction. The regression would also have caused climate changes, partly by disrupting winds and ocean currents and partly by reducing the Earth's albedo and therefore increasing global temperatures. Marine regression also resulted in the reduction in area of epeiric seas, such as the Western Interior Seaway of North America. The reduction of these seas greatly altered habitats, removing coastal plains that ten million years before had been host to diverse communities such as are found in rocks of the Dinosaur Park Formation. Another consequence was an expansion of freshwater environments, since continental runoff now had longer distances to travel before reaching oceans. While this change was favorable to freshwater vertebrates, those that prefer marine environments, such as sharks, suffered. Another discredited cause for the K–T extinction event is cosmic radiation from a nearby supernova explosion. An iridium anomaly at the boundary could support this hypothesis. The fallout from a supernova explosion should contain 244Pu, the longest-lived plutonium isotope, with a half-life of 81 million years. If the supernova hypothesis were correct, traces of 244Pu should be detected in rocks deposited at the time. However, analysis of the boundary layer sediments failed to find 244Pu. It is possible that more than one of these hypotheses may be a partial solution to the mystery, and that more than one of these events may have occurred. The location of the Deccan Traps, for example, would have been close to the antipodal point of Chicxulub in the late Cretaceous; a sufficiently large asteroid impact might have sent shock waves around the planet sufficient to trigger an effect on weakened crust on the other side of the globe. References and notes - The abbreviation is derived from the juxtaposition of K, the common abbreviation for the Cretaceous, which in turn originates from the corresponding German term Kreide, and Pg, which is the abbreviation for the Paleogene. - This former designation has as a part of it a term, 'Tertiary' (abbreviated as T), that is now discouraged as a formal geochronological unit by the International Commission on Stratigraphy. - Ogg, James G.; Gradstein, F. M.; Gradstein, Felix M. (2004). A geologic time scale 2004. Cambridge, UK: Cambridge University Press. ISBN 0-521-78142-6. - "International Chronostratigraphic Chart" (pdf). International Commission on Stratigraphy. 2012. Retrieved 2013-12-18. - Renne et al. (2013). "Time Scales of Critical Events Around the Cretaceous-Paleogene Boundary". Science. doi:10.1126/science.1230492. - Fortey, R (1999). Life: A Natural History of the First Four Billion Years of Life on Earth. Vintage. pp. 238–260. ISBN 978-0-375-70261-7. 
- Alvarez, LW, Alvarez, W, Asaro, F, and Michel, HV (1980). "Extraterrestrial cause for the Cretaceous–Tertiary extinction". Science 208 (4448): 1095–1108. Bibcode:1980Sci...208.1095A. doi:10.1126/science.208.4448.1095. PMID 17783054. - De Laubenfels, MW (1956). "Dinosaur Extinctions: One More Hypothesis". Journal of Paleontology 30 (1): 207–218. Retrieved 2007-05-22. - W. F. McDonough and S.-s. Sun (1995). "The composition of the Earth". Chemical Geology 120 (3–4): 223–253. doi:10.1016/0009-2541(94)00140-4. - Pope, KO, Baines, KH, Ocampo, AC, & Ivanov, BA (1997). "Energy, volatile production, and climatic effects of the Chicxulub Cretaceous/Tertiary impact". Journal of Geophysical Research 102 (E9): 21645–21664. Bibcode:1997JGR...10221645P. doi:10.1029/97JE01743. PMID 11541145. Retrieved 2007-07-18. - Ocampo, A, Vajda, V & Buffetaut, E (2006). Unravelling the Cretaceous–Paleogene (KT) Turnover, Evidence from Flora, Fauna and Geology in Biological Processes Associated with Impact Events (Cockell, C, Gilmour, I & Koeberl, C, editors). SpringerLink. pp. 197–219. ISBN 978-3-540-25735-6. Retrieved 2007-06-17. - Kring, DA (2003). "Environmental consequences of impact cratering events as a function of ambient conditions on Earth". Astrobiology 3 (1): 133–152. Bibcode:2003AsBio...3..133K. doi:10.1089/153110703321632471. PMID 12809133. - Keller, G, Adatte, T, Stinnesbeck, W, Rebolledo-Vieyra, Fucugauchi, JU, Kramar,U, & Stüben, D (2004). "Chicxulub impact predates the K-T boundary mass extinction". PNAS 101 (11): 3753–3758. Bibcode:2004PNAS..101.3753K. doi:10.1073/pnas.0400396101. PMC 374316. PMID 15004276. - Pope KO, Ocampo AC, Kinsland GL, Smith R (1996). "Surface expression of the Chicxulub crater". Geology 24 (6): 527–30. Bibcode:1996Geo....24..527P. doi:10.1130/0091-7613(1996)024<0527:SEOTCC>2.3.CO;2. PMID 11539331. - Keller, Gerta; Adatte, Thierry; Stinnesbeck, Wolfgang (2002). "Multiple spherule layers in the late Maastrichtian of northeastern Mexico". Geological Society of America Special Paper 356. - Schulte P, Alegret L, Arenillas I, Arz J A, Barton P J, et al. (2010). "The Chicxulub Asteroid Impact and Mass Extinction at the Cretaceous–Paleogene Boundary". Science 327 (5970), 1214–1218. - Dinosaur-Killing Asteroid Triggered Lethal Acid Rain, Livescience, March 09, 2014 - Hofman, C, Féraud, G & Courtillot, V (2000). "40Ar/39Ar dating of mineral separates and whole rocks from the Western Ghats lava pile: further constraints on duration and age of the Deccan traps". Earth and Planetary Science Letters 180: 13–27. Bibcode:2000E&PSL.180...13H. doi:10.1016/S0012-821X(00)00159-X. - Duncan, RA & Pyle, DG (1988). "Rapid eruption of the Deccan flood basalts at the Cretaceous/Tertiary boundary". Nature 333 (6176): 841–843. Bibcode:1988Natur.333..841D. doi:10.1038/333841a0. - Alvarez, W (1997). T. rex and the Crater of Doom. Princeton University Press. pp. 130–146. ISBN 978-0-691-01630-6. - Mullen, L (October 13, 2004). "Debating the Dinosaur Extinction". Astrobiology Magazine. Retrieved 2007-07-11. - Mullen, L (October 20, 2004). "Multiple impacts". Astrobiology Magazine. Retrieved 2007-07-11. - Mullen, L (November 3, 2004). "Shiva: Another K–T impact?". Astrobiology Magazine. Retrieved 2007-07-11. - Chatterjee, S, Guven, N, Yoshinobu, A, & Donofrio, R (2006). "Shiva structure: a possible KT boundary impact crater on the western shelf of India" (PDF). Special Publications of the Museum of Texas Tech University (50). Retrieved 2007-06-15. 
- Chatterjee, S, Guven, N, Yoshinobu, A, & Donofrio, R (2003). "The Shiva Crater: Implications for Deccan Volcanism, India-Seychelles rifting, dinosaur extinction, and petroleum entrapment at the KT Boundary". Geological Society of America Abstracts with Programs 35 (6): 168. Retrieved 2007-08-02. - MacLeod, N, Rawson, PF, Forey, PL, Banner, FT, Boudagher-Fadel, MK, Bown, PR, Burnett, JA, Chambers, P, Culver, S, Evans, SE, Jeffery, C, Kaminski, MA, Lord, AR, Milner, AC, Milner, AR, Morris, N, Owen, E, Rosen, BR, Smith, AB, Taylor, PD, Urquhart, E & Young, JR (1997). "The Cretaceous–Tertiary biotic transition". Journal of the Geological Society 154 (2): 265–292. doi:10.1144/gsjgs.154.2.0265. - Liangquan, L; Keller, G (1998). "Abrupt deep-sea warming at the end of the Cretaceous". Geology 26 (11): 995–998. Bibcode:1998Geo....26..995L. doi:10.1130/0091-7613(1998)026<0995:ADSWAT>2.3.CO;2. Retrieved 2007-08-01. - Marshall, C. R. & Ward, PD (1996). "Sudden and Gradual Molluscan Extinctions in the Latest Cretaceous of Western European Tethys". Science 274 (5291): 1360–1363. Bibcode:1996Sci...274.1360M. doi:10.1126/science.274.5291.1360. PMID 8910273. - Archibald, J. David; Fastovsky, David E. (2004). "Dinosaur Extinction". In Weishampel, David B.; Dodson, Peter; and Osmólska, Halszka (eds.). The Dinosauria (2nd ed.). Berkeley: University of California Press. pp. 672–684. ISBN 0-520-24209-2. - Ellis, J & Schramm, DN (1995). "Could a Nearby Supernova Explosion have Caused a Mass Extinction?". Proceedings of the National Academy of Sciences 92 (1): 235–238. Bibcode:1995PNAS...92..235E. doi:10.1073/pnas.92.1.235. PMC 42852. PMID 11607506.
https://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_boundary
4.03125
Ellyn Satter's Division of Responsibility in Feeding Children have natural ability with eating. They eat as much as they need, they grow in the way that is right for them, and they learn to eat the food their parents eat. Step-by-step, throughout their growing-up years, they build on their natural ability and become eating competent. Parents let them learn and grow with eating when they follow the Division of Responsibility in Feeding. The Division of Responsibility for infants: - The parent is responsible for what. - The child is responsible for how much (and everything else). Parents choose breast- or formula-feeding, and help the infant be calm and organized. Then they feed smoothly, paying attention to information coming from the baby about timing, tempo, frequency, and amounts. The Division of Responsibility for babies making the transition to family food: - The parent is still responsible for what, and is becoming responsible for when and where the child is fed. - The child is still and always responsible for how much and whether to eat the foods offered by the parent. Based on what the child can do, not on how old s/he is, parents guide the child’s transition from nipple feeding through semi-solids, then thick-and-lumpy food, to finger food at family meals. The Division of Responsibility for toddlers through adolescents: - The parent is responsible for what, when, where. - The child is responsible for how much and whether. Fundamental to parents’ jobs is trusting children to determine how much and whether to eat from what parents provide. When parents do their jobs with feeding, children do their jobs with eating: Parents’ feeding jobs: - Choose and prepare the food. - Provide regular meals and snacks. - Make eating times pleasant. - Step-by-step, show children by example how to behave at family mealtime. - Be considerate of children’s lack of food experience without catering to likes and dislikes. - Not let children have food or beverages (except for water) between meal and snack times. - Let children grow up to get bodies that are right for them. Children’s eating jobs: - Children will eat. - They will eat the amount they need. - They will learn to eat the food their parents eat. - They will grow predictably. - They will learn to behave well at mealtime. For more about raising healthy children who are a joy to feed, read Part two, "How to raise good eaters," in Ellyn Satter’s Secrets of Feeding a Healthy Family. For the evidence, read The Satter Feeding Dynamics Model. ©2016 by Ellyn Satter, published at www.EllynSatterInstitute.org. You may reproduce this article if you don't charge for it or change it in any way, and if you do include the "for more about" and copyright statements.
http://ellynsatterinstitute.org/dor/divisionofresponsibilityinfeeding.php
4.125
(Linnaeus, 1758) The wheat weevil (Sitophilus granarius), also known as the grain weevil or granary weevil, occurs all over the world and is a common pest in many places. It can cause significant damage to harvested stored grains and may drastically decrease yields. The females lay many eggs and the larvae eat the inside of the grain kernels. Adult wheat weevils are about 3–5 mm (0.12–0.20 in) long with elongated snouts and chewing mouthparts. Adult size depends on the grain kernel in which they develop: in small grains, such as millet or grain sorghum, they are small, but they are larger in maize (corn). The adults are a reddish-brown colour and lack distinguishing marks. Adult wheat weevils are not capable of flight. Larvae are legless, humpbacked, and white with a tan head. Weevils in the pupal stage have snouts like the adults. Female wheat weevils lay between 36 and 254 eggs, and usually one egg is deposited in each grain kernel. All larval stages and the pupal stage occur within the grain. The larvae feed inside the grain until pupation, after which they bore a hole out of the grain and emerge. They are rarely seen outside of the grain kernel. The lifecycle takes about 5 weeks in the summer, but may take up to 20 weeks in cooler temperatures. Adults can live up to 8 months after emerging. When threatened or disturbed, adult wheat weevils pull their legs close to their bodies and feign death. Female weevils can tell if a grain kernel has already had an egg laid in it by another weevil, and they avoid laying another egg in that grain. Females chew a hole, deposit an egg, and seal the hole with a gelatinous secretion; this may be how other females know the grain already contains an egg. This ensures the young will survive and produce another generation. One pair of weevils may produce up to 6,000 offspring per year. Wheat weevils are a pest of wheat, oats, rye, barley, rice and corn. The total damage wheat weevils cause worldwide is difficult to quantify, especially in places where grain harvests are not measured. They are hard to detect, and usually all of the grain in an infested storage facility must be destroyed. Many control methods are used against the wheat weevil, such as pesticides, masking the odour of the grain with unpleasant scents, and introducing other organisms that prey on the weevils. Prevention and control Sanitation and inspection are the keys to preventing infestation. Grains should be stored in preferably metallic containers with tight lids (cardboard, even fortified, is easily drilled through) in a refrigerator or a freezer, and should be purchased in small quantities. If infestation is suspected, carefully examine the grains for adult insects or holes in the grain kernels. Another method is to immerse the grains in water; if they float to the surface, it is a good indication of infestation. Even if identified early, disposal may be the only effective solution. Deltamethrin powder (WP) is another solution to weevil infestation in grains. - Rice weevil (Sitophilus oryzae) - Maize weevil (Sitophilus zeamais) - Lixus concavus, the rhubarb curculio weevil - "Sitophilus granarius (Linnaeus, 1758)". Integrated Taxonomic Information System. Retrieved September 6, 2012. - "Granary and Rice Weevils" (PDF). Retrieved 2009-01-21. - "Store Products Pests: Granary Weevil" (PDF). Retrieved 2009-01-21. - Woodbury, N. 2008. Infanticide Avoidance by the Granary Weevil, Sitophilus granarius (L.) 
(Coleoptera: Curculionidae): The Role of Harbourage Markers, Oviposition Markers, and Egg-Plugs. Journal of Insect Behavior, 21: 55-62. - Giacinto, G. S., Antonio, D. C., & Giuseppe, R. 2008. Behavioral responses of adult Sitophilus granarius to individual cereal volatiles. Journal of Chemical Ecology, 34: 523-529. |Wikispecies has information related to: Sitophilus granarius| |Wikimedia Commons has media related to Sitophilus granarius.|
https://en.wikipedia.org/wiki/Wheat_weevil
4.15625
|This article needs additional citations for verification. (July 2010)| The Northern Renaissance was the Renaissance that occurred in Europe north of the Alps. Before 1497, Italian Renaissance humanism had little influence outside Italy. From the late 15th century, its ideas spread around Europe. This influenced the German Renaissance, French Renaissance, English Renaissance, Renaissance in the Low Countries, Polish Renaissance and other national and localized movements, each with different characteristics and strengths. In France, King Francis I imported Italian art, commissioned Venetian artists (including Leonardo da Vinci), and built grand palaces at great expense, starting the French Renaissance. Trade and commerce in cities like Bruges in the 15th century and Antwerp in the 16th increased cultural exchange between Italy and the Low Countries, however in art, and especially architecture, late Gothic influences remained present until the arrival of Baroque even as painters increasingly drew on Italian models. Universities and the printed book helped spread the spirit of the age through France, the Low Countries and the Holy Roman Empire, and then to Scandinavia and finally Britain by the late 16th century. Writers and humanists such as Rabelais, Pierre de Ronsard and Desiderius Erasmus were greatly influenced by the Italian Renaissance model and were part of the same intellectual movement. During the English Renaissance (which overlapped with the Elizabethan era) writers such as William Shakespeare and Christopher Marlowe composed works of lasting influence. The Renaissance was brought to Poland directly from Italy by artists from Florence and the Low Countries, starting the Polish Renaissance. In some areas the Northern Renaissance was distinct from the Italian Renaissance in its centralization of political power. While Italy and Germany were dominated by independent city-states, most of Europe began emerging as nation-states or even unions of countries. The Northern Renaissance was also closely linked to the Protestant Reformation with the resulting long series of internal and external conflicts between various Protestant groups and the Roman Catholic Church having lasting effects. Feudalism had dominated Europe for a thousand years, but was on the decline at the beginning of the Renaissance. The reasons for this decline include the post-plague environment, the increasing use of money rather than land as a medium of exchange, the growing number of serfs living as freemen, the formation of nation-states with monarchies interested in reducing the power of feudal lords, the increasing uselessness of feudal armies in the face of new military technology (such as gunpowder), and a general increase in agricultural productivity due to improving farming technology and methods. As in Italy, the decline of feudalism opened the way for the cultural, social, and economic changes associated with the Renaissance in Europe. Finally, the Renaissance in Europe would also be kindled by a weakening of the Roman Catholic Church. The slow demise of feudalism also weakened a long-established policy in which church officials helped keep the population of the manor under control in return for tribute. Consequently, the early 15th century saw the rise of many secular institutions and beliefs. Among the most significant of these, humanism, would lay the philosophical grounds for much of Renaissance art, music, and science. 
Desiderius Erasmus, for example, was important in spreading humanist ideas in the north, and was a central figure at the intersection of classical humanism and mounting religious questions. Forms of artistic expression which a century ago would have been banned by the church were now tolerated or even encouraged in certain circles. The velocity of transmission of the Renaissance throughout Europe can also be ascribed to the invention of the printing press. Its power to disseminate knowledge enhanced scientific research, spread political ideas and generally impacted the course of the Renaissance in northern Europe. As in Italy, the printing press increased the availability of books written in both vernacular languages and the publication of new and ancient classical texts in Greek and Latin. Furthermore, the Bible became widely available in translation, a factor often attributed to the spread of the Protestant Reformation. Age of Discovery One of the most important technological development of the Renaissance was the invention of the caravel. This combination of European and African ship building technologies for the first time made extensive trade and travel over the Atlantic feasible. While first introduced by the Italian states, and the early captains, such as Giovanni Caboto, who were Italian, the development would end Northern Italy's role as the trade crossroads of Europe, shifting wealth and power westwards to Spain, Portugal, France, England, and the Netherlands. These states all began to conduct extensive trade with Africa and Asia, and in the Americas began extensive colonisation activities. This period of exploration and expansion has become known as the Age of Discovery. Eventually European power spread around the globe. The detailed realism of Early Netherlandish painting was greatly respected in Italy, but there was little reciprocal influence on the North until nearly the end of the 15th century. Despite frequent cultural and artistic exchange, the Antwerp Mannerists (1500–1530)—chronologically overlapping with but unrelated to Italian Mannerism—were among the first artists in the Low Countries to clearly reflect Italian formal developments. Around the same time, Albrecht Dürer made his two trips to Italy, where he was greatly admired for his prints. Dürer, in turn, was influenced by the art he saw there. Other notable painters, such as Hans Holbein the Elder and Jean Fouquet, retained a Gothic influence that was still popular in the north, while highly individualistic artists such as Hieronymus Bosch and Pieter Bruegel the Elder developed styles that were imitated by many subsequent generations. Northern painters in the 16th century increasingly looked and travelled to Rome, becoming known as the Romanists. The High Renaissance art of Michelangelo and Raphael and the late Renaissance stylistic tendencies of Mannerism that were in vogue had a great impact on their work. Renaissance humanism and the large number of surviving classical artworks and monuments encouraged many Italian painters to explore Greco-Roman themes more prominently than northern artists, and likewise the famous 15th-century German and Dutch paintings tend to be religious. In the 16th century, mythological and other themes from history became more uniform amongst northern and Italian artists. Northern Renaissance painters, however, had new subject matter, such as landscape and genre painting. As Renaissance art styles moved through northern Europe, they changed and were adapted to local customs. 
In England and the northern Netherlands the Reformation brought religious painting almost completely to an end. Despite several very talented Artists of the Tudor Court in England, portrait painting was slow to spread from the elite. In France the School of Fontainebleau was begun by Italians such as Rosso Fiorentino in the latest Mannerist style, but succeeded in establishing a durable national style. By the end of the 16th century, artists such as Karel van Mander and Hendrik Goltzius collected in Haarlem in a brief but intense phase of Northern Mannerism that also spread to Flanders. The Renaissance is one of the most interesting and disputed periods of European history. Many scholars see it as a unique time with characteristics all its own. A second group views the Renaissance as the first two to three centuries of a larger era in European history usually called early modern Europe, which began in the late fifteenth century and ended on the eve of the French Revolution (1789) or with the close of the Napoleonic era (1815). Some social historians reject the concept of the Renaissance altogether. Historians also argue over how much the Renaissance differed from the Middle Ages and whether it was the beginning of the modern world, however defined. - Janson, H.W.; Anthony F. Janson (1997). History of Art (5th, rev. ed.). New York: Harry N. Abrams, Inc. ISBN 0-8109-3442-6. - Although the notion of a north to south-only direction of influence arose in the scholarship of Max Jakob Friedländer and was continued by Erwin Panofsky, art historians are increasingly questioning its validity: Lisa Deam, "Flemish versus Netherlandish: A Discourse of Nationalism," in Renaissance Quarterly, vol. 51, no. 1 (Spring, 1998), pp. 28–29. - Chipps Smith, Jeffrey (2004). The Northern Renaissance. Phaidon Press. ISBN 978-0-7148-3867-0. - Campbell, Gordon, ed. (2009). The Grove Encyclopedia of Northern Renaissance Art. Oxford University Press. ISBN 978-0-19-533466-1. - O'Neill, J, ed. (1987). The Renaissance in the North. New York: The Metropolitan Museum of Art.
https://en.wikipedia.org/wiki/Northern_Renaissance
4.125
Global warming, the phenomenon of increasing average air temperatures near the surface of Earth over the past one to two centuries. Climate scientists have since the mid-20th century gathered detailed observations of various weather phenomena (such as temperatures, precipitation, and storms) and of related influences on climate (such as ocean currents and the atmosphere’s chemical composition). These data indicate that Earth’s climate has changed over almost every conceivable timescale since the beginning of geologic time and that the influence of human activities since at least the beginning of the Industrial Revolution has been deeply woven into the very fabric of climate change. Giving voice to a growing conviction of most of the scientific community, the Intergovernmental Panel on Climate Change (IPCC) was formed in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Program (UNEP). In 2013 the IPCC reported that the interval between 1880 and 2012 saw an increase in global average surface temperature of approximately 0.9 °C (1.5 °F). The increase is closer to 1.1 °C (2.0 °F) when measured relative to the preindustrial (i.e., 1750–1800) mean temperature. The IPCC stated that most of the warming observed over the second half of the 20th century could be attributed to human activities. It predicted that by the end of the 21st century the global mean surface temperature would increase by 0.3 to 4.8 °C (0.5 to 8.6 °F) relative to the 1986–2005 average. The predicted rise in temperature was based on a range of possible scenarios that accounted for future greenhouse gas emissions and mitigation (severity reduction) measures and on uncertainties in the model projections. Some of the main uncertainties include the precise role of feedback processes and the impacts of industrial pollutants known as aerosols which may offset some warming. Many climate scientists agree that significant societal, economic, and ecological damage would result if global average temperatures rose by more than 2 °C (3.6 °F) in such a short time. Such damage would include increased extinction of many plant and animal species, shifts in patterns of agriculture, and rising sea levels. The IPCC reported that the global average sea level rose by some 19–21 cm (7.5–8.3 inches) between 1901 and 2010 and that sea levels rose faster in the second half of the 20th century than in the first half. It also predicted, again depending on a wide range of scenarios, that by the end of the 21st century the global average sea level could rise by another 26–82 cm (10.2–32.3 inches) relative to the 1986–2005 average and that a rise of well over 1 metre (3 feet) could not be ruled out. The scenarios referred to above depend mainly on future concentrations of certain trace gases, called greenhouse gases, that have been injected into the lower atmosphere in increasing amounts through the burning of fossil fuels for industry, transportation, and residential uses. Modern global warming is the result of an increase in magnitude of the so-called greenhouse effect, a warming of Earth’s surface and lower atmosphere caused by the presence of water vapour, carbon dioxide, methane, nitrous oxides, and other greenhouse gases. In 2014 the IPCC reported that concentrations of carbon dioxide, methane, and nitrous oxides in the atmosphere surpassed those found in ice cores dating back 800,000 years. 
Of all these gases, carbon dioxide is the most important, both for its role in the greenhouse effect and for its role in the human economy. It has been estimated that, at the beginning of the industrial age in the mid-18th century, carbon dioxide concentrations in the atmosphere were roughly 280 parts per million (ppm). By the middle of 2014, carbon dioxide concentrations had briefly reached 400 ppm, and, if fossil fuels continue to be burned at current rates, they are projected to reach 560 ppm by the mid-21st century—essentially, a doubling of carbon dioxide concentrations in 300 years. A vigorous debate is in progress over the extent and seriousness of rising surface temperatures, the effects of past and future warming on human life, and the need for action to reduce future warming and deal with its consequences. This article provides an overview of the scientific background and public policy debate related to the subject of global warming. It considers the causes of rising near-surface air temperatures, the influencing factors, the process of climate research and forecasting, the possible ecological and social impacts of rising temperatures, and the public policy developments since the mid-20th century. For a detailed description of Earth’s climate, its processes, and the responses of living things to its changing nature, see climate. For additional background on how Earth’s climate has changed throughout geologic time, see climatic variation and change. For a full description of Earth’s gaseous envelope, within which climate change and global warming occur, see atmosphere. Climatic variation since the last glaciation Global warming is related to the more general phenomenon of climate change, which refers to changes in the totality of attributes that define climate. In addition to changes in air temperature, climate change involves changes to precipitation patterns, winds, ocean currents, and other measures of Earth’s climate. Normally, climate change can be viewed as the combination of various natural forces occurring over diverse timescales. Since the advent of human civilization, climate change has involved an “anthropogenic,” or exclusively human-caused, element, and this anthropogenic element has become more important in the industrial period of the past two centuries. The term global warming is used specifically to refer to any warming of near-surface air during the past two centuries that can be traced to anthropogenic causes. To define the concepts of global warming and climate change properly, it is first necessary to recognize that the climate of Earth has varied across many timescales, ranging from an individual human life span to billions of years. This variable climate history is typically classified in terms of “regimes” or “epochs.” For instance, the Pleistocene glacial epoch (about 2,600,000 to 11,700 years ago) was marked by substantial variations in the global extent of glaciers and ice sheets. These variations took place on timescales of tens to hundreds of millennia and were driven by changes in the distribution of solar radiation across Earth’s surface. The distribution of solar radiation is known as the insolation pattern, and it is strongly affected by the geometry of Earth’s orbit around the Sun and by the orientation, or tilt, of Earth’s axis relative to the direct rays of the Sun. Worldwide, the most recent glacial period, or ice age, culminated about 21,000 years ago in what is often called the Last Glacial Maximum. 
During this time, continental ice sheets extended well into the middle latitude regions of Europe and North America, reaching as far south as present-day London and New York City. Global annual mean temperature appears to have been about 4–5 °C (7–9 °F) colder than in the mid-20th century. It is important to remember that these figures are a global average. In fact, during the height of this last ice age, Earth’s climate was characterized by greater cooling at higher latitudes (that is, toward the poles) and relatively little cooling over large parts of the tropical oceans (near the Equator). This glacial interval terminated abruptly about 11,700 years ago and was followed by the subsequent relatively ice-free period known as the Holocene Epoch. The modern period of Earth’s history is conventionally defined as residing within the Holocene. However, some scientists have argued that the Holocene Epoch terminated in the relatively recent past and that Earth currently resides in a climatic interval that could justly be called the Anthropocene Epoch—that is, a period during which humans have exerted a dominant influence over climate. Though less dramatic than the climate changes that occurred during the Pleistocene Epoch, significant variations in global climate have nonetheless taken place over the course of the Holocene. During the early Holocene, roughly 9,000 years ago, atmospheric circulation and precipitation patterns appear to have been substantially different from those of today. For example, there is evidence for relatively wet conditions in what is now the Sahara Desert. The change from one climatic regime to another was caused by only modest changes in the pattern of insolation within the Holocene interval as well as the interaction of these patterns with large-scale climate phenomena such as monsoons and El Niño/Southern Oscillation (ENSO). During the middle Holocene, some 5,000–7,000 years ago, conditions appear to have been relatively warm—indeed, perhaps warmer than today in some parts of the world and during certain seasons. For this reason, this interval is sometimes referred to as the Mid-Holocene Climatic Optimum. The relative warmth of average near-surface air temperatures at this time, however, is somewhat unclear. Changes in the pattern of insolation favoured warmer summers at higher latitudes in the Northern Hemisphere, but these changes also produced cooler winters in the Northern Hemisphere and relatively cool conditions year-round in the tropics. Any overall hemispheric or global mean temperature changes thus reflected a balance between competing seasonal and regional changes. In fact, recent theoretical climate model studies suggest that global mean temperatures during the middle Holocene were probably 0.2–0.3 °C (0.4–0.5 °F) colder than average late 20th-century conditions. Over subsequent millennia, conditions appear to have cooled relative to middle Holocene levels. This period has sometimes been referred to as the “Neoglacial.” In the middle latitudes this cooling trend was associated with intermittent periods of advancing and retreating mountain glaciers reminiscent of (though far more modest than) the more substantial advance and retreat of the major continental ice sheets of the Pleistocene climate epoch.
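As a rough numerical aside on the concentration figures quoted earlier (roughly 280 ppm of CO2 before industrialization, about 400 ppm by 2014, and 560 ppm at a doubling), a commonly used simplified expression for the radiative forcing from CO2 is ΔF ≈ 5.35 ln(C/C0) W/m². That formula is not taken from the article above; the short Python sketch below is only an illustration of the arithmetic.

```python
import math

# Simplified CO2 radiative-forcing approximation (Myhre et al. 1998):
# delta_F = 5.35 * ln(C / C0), in watts per square metre.
C0 = 280.0  # approximate preindustrial CO2 concentration, ppm

for c in (400.0, 560.0):
    forcing = 5.35 * math.log(c / C0)
    print(f"CO2 at {c:.0f} ppm -> radiative forcing of roughly {forcing:.2f} W/m^2")
```

For a doubling of CO2 this gives about 3.7 W/m², the figure that underlies many of the warming scenarios discussed above.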
http://www.britannica.com/science/global-warming
4.125
Ecliptic coordinate system The ecliptic coordinate system is a celestial coordinate system commonly used for representing the positions and orbits of Solar System objects. Because most planets (except Mercury) and many small Solar System bodies have orbits with small inclinations to the ecliptic, it is convenient to use it as the fundamental plane. The system's origin can be either the center of the Sun or the center of the Earth, its primary direction is towards the vernal (northbound) equinox, and it has a right-handed convention. It may be implemented in spherical coordinates or rectangular coordinates. - 1 Primary direction - 2 Spherical coordinates - 3 Rectangular coordinates - 4 Conversion between celestial coordinate systems - 5 See also - 6 External links - 7 Notes and references The celestial equator and the ecliptic are slowly moving due to perturbing forces on the Earth; therefore, the orientation of the primary direction, their intersection at the Northern Hemisphere vernal equinox, is not quite fixed. A slow motion of Earth's axis, precession, causes a slow, continuous turning of the coordinate system westward about the poles of the ecliptic, completing one circuit in about 26,000 years. Superimposed on this is a smaller motion of the ecliptic, and a small oscillation of the Earth's axis, nutation. In order to reference a coordinate system which can be considered as fixed in space, these motions require specification of the equinox of a particular date, known as an epoch, when giving a position in ecliptic coordinates. The three most commonly used are: - Mean equinox of a standard epoch (usually J2000.0, but may include B1950.0, B1900.0, etc.) - is a fixed standard direction, allowing positions established at various dates to be compared directly. - Mean equinox of date - is the intersection of the ecliptic of "date" (that is, the ecliptic in its position at "date") with the mean equator (that is, the equator rotated by precession to its position at "date", but free from the small periodic oscillations of nutation). Commonly used in planetary orbit calculation. - True equinox of date - is the intersection of the ecliptic of "date" with the true equator (that is, the mean equator plus nutation). This is the actual intersection of the two planes at any particular moment, with all motions accounted for. A position in the ecliptic coordinate system is thus typically specified true equinox and ecliptic of date, mean equinox and ecliptic of J2000.0, or similar. Note that there is no "mean ecliptic", as the ecliptic is not subject to small periodic oscillations.
| ||Longitude||Latitude||Distance||Rectangular coordinates|
|Heliocentric||l||b||r||x, y, z[note 1]|
|Geocentric||λ||β||Δ|| |
Ecliptic longitude or celestial longitude (symbols: heliocentric l, geocentric λ) measures the angular distance of an object along the ecliptic from the primary direction. Like right ascension in the equatorial coordinate system, the primary direction (0° ecliptic longitude) points from the Earth towards the Sun at the vernal equinox of the Northern Hemisphere. Because it is a right-handed system, ecliptic longitude is measured positive eastwards in the fundamental plane (the ecliptic) from 0° to 360°. Ecliptic latitude or celestial latitude (symbols: heliocentric b, geocentric β) measures the angular distance of an object from the ecliptic towards the north (positive) or south (negative) ecliptic pole. For example, the north ecliptic pole has a celestial latitude of +90°. Distance is also necessary for a complete spherical position (symbols: heliocentric r, geocentric Δ).
Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near the Earth, Earth radii or kilometers are used. From antiquity through the 18th century, ecliptic longitude was commonly measured using twelve zodiacal signs, each of 30° longitude, a usage that continues in modern astrology. The signs approximately corresponded to the constellations crossed by the ecliptic. Longitudes were specified in signs, degrees, minutes, and seconds. For example, a longitude of 19° 55′ 58″ is 19.933° east of the start of the sign Leo. Since Leo begins 120° from the vernal equinox, the longitude in modern form is 139° 55′ 58″. In China, ecliptic longitude is measured using 24 Solar terms, each of 15° longitude, which are used by Chinese lunisolar calendars to stay synchronized with the seasons; this is crucial for agrarian societies. A rectangular variant of ecliptic coordinates is often used in orbital calculations and simulations. It has its origin at the center of the Sun (or at the barycenter of the solar system), its fundamental plane in the plane of the ecliptic, and the x axis toward the vernal equinox. The coordinates have a right-handed convention, that is, if one extends their right thumb upward, it simulates the z-axis, their extended index finger the x-axis, and the curl of the other fingers points generally in the direction of the y-axis. These rectangular coordinates are related to the corresponding spherical coordinates by x = r cos b cos l, y = r cos b sin l, z = r sin b. Conversion between celestial coordinate systems Converting Cartesian vectors Conversion from ecliptic coordinates to equatorial coordinates is a rotation about the common x-axis by the obliquity of the ecliptic: x_eq = x_ecl, y_eq = y_ecl cos ε - z_ecl sin ε, z_eq = y_ecl sin ε + z_ecl cos ε. Conversion from equatorial coordinates to ecliptic coordinates is the inverse rotation: x_ecl = x_eq, y_ecl = y_eq cos ε + z_eq sin ε, z_ecl = -y_eq sin ε + z_eq cos ε, where ε is the obliquity of the ecliptic. - The Ecliptic: the Sun's Annual Path on the Celestial Sphere, Durham University Department of Physics - Measuring the Sky: A Quick Guide to the Celestial Sphere, James B. Kaler, University of Illinois Notes and references - Nautical Almanac Office, U.S. Naval Observatory; H.M. Nautical Almanac Office, Royal Greenwich Observatory (1961). Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac. H.M. Stationery Office, London. pp. 24–27. - Explanatory Supplement (1961), pp. 20, 28 - U.S. Naval Observatory, Nautical Almanac Office (1992). P. Kenneth Seidelmann, ed. Explanatory Supplement to the Astronomical Almanac. University Science Books, Mill Valley, CA. pp. 11–13. ISBN 0-935702-68-7. - Meeus, Jean (1991). Astronomical Algorithms. Willmann-Bell, Inc., Richmond, VA. p. 137. ISBN 0-943396-35-2. - Explanatory Supplement (1961), sec. 1G - Leadbetter, Charles (1742). A Compleat System of Astronomy. J. Wilcox, London. p. 94, at Google Books; numerous examples of this notation appear throughout the book. - Explanatory Supplement (1961), pp. 20, 27 - Explanatory Supplement (1992), pp. 555–558
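To make the relations above concrete, here is a short Python sketch that converts heliocentric ecliptic spherical coordinates (l, b, r) to rectangular form and then rotates them into equatorial coordinates. The J2000.0 obliquity value used (about 23.4393°) is an assumed constant for illustration and is not taken from the article.

```python
import math

OBLIQUITY_J2000_DEG = 23.4392911  # assumed mean obliquity of the ecliptic at J2000.0

def ecliptic_spherical_to_rectangular(lon_deg, lat_deg, r):
    """Convert ecliptic longitude l, latitude b (degrees) and distance r
    to rectangular ecliptic coordinates (x, y, z)."""
    l = math.radians(lon_deg)
    b = math.radians(lat_deg)
    return (r * math.cos(b) * math.cos(l),
            r * math.cos(b) * math.sin(l),
            r * math.sin(b))

def ecliptic_to_equatorial(x, y, z, eps_deg=OBLIQUITY_J2000_DEG):
    """Rotate rectangular ecliptic coordinates about the x axis (toward the
    vernal equinox) by the obliquity eps to obtain equatorial coordinates."""
    e = math.radians(eps_deg)
    return (x,
            y * math.cos(e) - z * math.sin(e),
            y * math.sin(e) + z * math.cos(e))

# Example: a point on the ecliptic at longitude 90 degrees, 1 au from the Sun.
x, y, z = ecliptic_spherical_to_rectangular(90.0, 0.0, 1.0)
print(ecliptic_to_equatorial(x, y, z))  # z component ~ sin(23.44 deg) ~ 0.398
```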
https://en.wikipedia.org/wiki/Celestial_latitude
4.28125
Demo Case Study FOR SECONDARY STUDENTS How have Indigenous people's citizenship rights changed over time? Case Study Overview Students explore the evidence to critically discuss the issue of Australians’ attitudes to Indigenous rights and racial equality. They explore how the apparent racism revealed by the 1965 Freedom Ride in places such as Walgett and Moree can be reconciled with the overwhelmingly positive example of the 1967 referendum. Or how the apparent hostility of many towards the Aboriginal Tent Embassy in 1972 can be reconciled with the awarding of equal pay to Aboriginal pastoral workers in 1966 and the adoption of the Racial Discrimination Act in 1975. The case study also compares the Yirrkala people’s claim to legal ownership of their land in 1971 and the Mabo case in 1992. An interactive entitled, The 1967 Referendum, is also available for this case study. Case Study unit of work inquiry structure (pdf) - Teacher’s Guide - Activity 1: Focusing on rights Understanding the main concept(s) raised in the case study - Activity 2: Video visit Looking at the video segment of this case study and answering questions about it - Activity 3: Decision makers How have Indigenous people’s rights developed over time? Decision maker - Activity 4: Case Study 1: 1967 Referendum An historical case study for analysis and discussion by the whole class - Activity 4: Case Study 2: Mutual obligation Two contrasting case studies About the Interactive The 1967 Referendum — What do they tell us about Australian attitudes? Students decide whether the 1967 Referendum should be included in a ‘human rights hall of fame’ by looking at a range of evidence and a range of different views. Does it deserve a place along with such events as women obtaining the right to vote in Australia in 1902 and the Mabo High Court decision of 1992? HISTORY YEAR 6 • Australia as a nation HISTORY YEAR 10 • Rights and freedoms (1945 - present)
https://www.australianhistorymysteries.info/demo/secondary.php
4.21875
Tool use is so rare in the animal kingdom that it was once believed to be a uniquely human trait. While it is now known that some non-human animal species can use tools for foraging, the rarity of this behaviour remains a puzzle. It is generally assumed that tool use played a key role in human evolution, so understanding this behaviour's ecological context, and its evolutionary roots, is of major scientific interest. A project led by researchers from the Universities of Oxford and Exeter examined the ecological significance of tool use in New Caledonian crows, a species renowned for its sophisticated tool-use behaviour. The scientists found that a substantial amount of the crows' energy intake comes from tool-derived food, highlighting the nutritional significance of their remarkable tool-use skills. A report of the research appears in this week's Science. To trace the evolutionary origins of specific behaviours, scientists usually compare the ecologies and life histories of those species that exhibit the trait of interest, searching for common patterns and themes. "Unfortunately, this powerful technique cannot be used for studying the evolution of tool use, because there are simply too few species that are known to show this behaviour in the wild," says Dr Christian Rutz from Oxford University's Department of Zoology, who led the project. But, as he explains further, some light can still be shed on this intriguing question. "Examining the ecological context, and adaptive significance, of a species' tool-use behaviour under contemporary conditions can uncover the selection pressures that currently maintain the behaviour, and may even point to those that fostered its evolution in the past. This was the rationale of our study on New Caledonian crows." Observing New Caledonian crows in the wild, on their home island in the South Pacific, is extremely difficult, because they are easily disturbed and live in densely forested, mountainous terrain. To gather quantitative data on the foraging behaviour and diet composition of individual crows, the scientists came up with an unconventional study approach. New Caledonian crows consume a range of foods, but require tools to extract wood-boring longhorn beetle larvae from their burrows. These larvae, with their unusual diet, have a distinct chemical fingerprint--their stable isotope profile--that can be traced in the crows' feathers and blood, enabling efficient sample collection with little or no harm to the birds. "By comparing the stable isotope profiles of the crows' tissues with those of their putative food sources, we could estimate the proportion of larvae in crow diet, providing a powerful proxy for individual tool-use dependence," explains Dr Rutz. The analysis of the samples presented further challenges. Dr Stuart Bearhop from Exeter University's School of Biosciences, who led the stable-isotope analyses, points out: "These crows are opportunistic foragers, and eat a range of different foods. The approach we used is very similar to that employed by forensic scientists trying to solve crimes, and has even appeared on CSI. We have developed very powerful statistical models that enabled us to use the unique fingerprints, or stable isotope profiles, of each food type to estimate the amount of beetle larvae consumed by individual New Caledonian crows." 
The scientists found that beetle larvae are so energy rich, and full of fat, that just a few specimens can satisfy a crow's daily energy requirements, demonstrating that competent tool users can enjoy substantial rewards. "Our results show that tool use provides New Caledonian crows with access to an extremely profitable food source that is not easily exploited by beak alone," says Dr Rutz. And, Dr Bearhop adds: "This suggests that unusual foraging opportunities on the remote, tropical island of New Caledonia selected for, and currently maintain, these crows' sophisticated tool technology. Other factors have probably played a role, too, but at least we now have a much better understanding of the dietary significance of this remarkable behaviour." The scientists believe that their novel methodological approach could prove key to investigating in the future whether particularly proficient tool users, with their privileged access to larvae, produce offspring of superior body condition, and whether a larva-rich diet has lasting effects on future survival and reproduction. "The fact that we can estimate the importance of tool use from a small tissue sample opens up exciting possibilities. This approach may even be suitable for studying other animal tool users, like chimpanzees," speculates Dr Rutz. For more information contact Dr Christian Rutz (phone: +44 (0)1865 271179 or +44 (0)7792851538) or Dr Stuart Bearhop (phone: +44 (0)1326 371835 or +44 (0)7881818150). Alternatively, contact the press offices of the University of Oxford (phone: +44 (0)1865 283877) or the University of Exeter (phone: +44 (0)1392 722062). NOTES TO EDITORS A report of the research, entitled 'The ecological significance of tool use in New Caledonian crows' is to be published in Science on Friday, 17 September 2010 (authors: Christian Rutz, Lucas A. Bluff, Nicola Reed, Jolyon Troscianko, Jason Newton, Richard Inger, Alex Kacelnik, Stuart Bearhop). The researchers studied the New Caledonian crow (Corvus moneduloides), a species that has attracted attention with its unusually sophisticated use of tools for extracting invertebrates from holes and crevices. The species is endemic to the tropical island of New Caledonia in the South Pacific, where fieldwork was conducted. New Caledonian crows use stick tools to probe for longhorn beetle larvae (Agrianome fairmairei) in decaying trunks of candlenut trees (Aleurites moluccana). The larva-extraction technique of crows relies on exploiting defensive responses of their prey, similar to the well-known 'termite fishing' of chimpanzees. Crows insert a twig or leaf stem into a burrow, 'teasing' the larva by repeatedly poking it with the tool until it bites the tip of the tool with its powerful mandibles, and can be levered out. The use of stable isotopes to examine the diets of wild animals is a well-established research technique. It relies on the premise "you are what you eat". Thus, the unique stable isotope profile of a food source can often be traced in the tissues of a consumer. Using relatively simple conversion factors (and some assumptions), it is possible to use this information to calculate the amount of any given food type in the diet of an animal.
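As a toy illustration of the "you are what you eat" premise, a simple two-source linear mixing model can estimate the proportion of one food in a consumer's diet from a single isotope ratio. The sketch below is generic and hedged: the delta-13C-style values and the trophic discrimination factor are made-up numbers, not data from the crow study, which used far richer Bayesian models.

```python
def two_source_mixing(consumer, source_a, source_b, discrimination=0.0):
    """Estimate the fraction of the diet coming from source_a, using a
    one-isotope, two-source linear mixing model:
        consumer - discrimination = p * source_a + (1 - p) * source_b
    All arguments are isotope ratios (e.g. delta-13C in per mil)."""
    corrected = consumer - discrimination
    p = (corrected - source_b) / (source_a - source_b)
    return min(1.0, max(0.0, p))  # clamp to the physically meaningful range

# Hypothetical numbers only (not from the study): larvae vs. other foods.
fraction_larvae = two_source_mixing(consumer=-22.0,
                                    source_a=-25.0,   # beetle larvae signature
                                    source_b=-19.0,   # mixed other foods
                                    discrimination=1.0)
print(f"Estimated proportion of larvae in diet: {fraction_larvae:.2f}")
```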
The Exeter-based research group has recently been involved in developing powerful Bayesian analysis techniques that are suitable for estimating animal diets in more complex situations, for example when consumers are known to eat many different food types. This advance was key to their collaboration with the Oxford-based scientists, who study the ecology and behaviour of the New Caledonian crow - a species that, like many other crows and ravens, is an opportunistic, generalist forager. Previous studies on New Caledonian crows have shown that: wild crows manufacture and use at least three distinct tool types (including the most sophisticated animal tool yet discovered); the species has a strong genetic predisposition for basic stick-tool use (tool-related behaviour emerges in juvenile crows that had no opportunity to learn from others); crows have a preferred way of holding their tools (comparable to the way that humans are either left- or right-handed); adult crows can make or select tools of the appropriate length or diameter for experimental tasks; at least some birds can 'creatively' solve novel problems; and wild crows may socially transmit certain aspects of their tool-use behaviour (but claims for 'crow tool cultures' are still contentious). An earlier paper in Science by Dr Christian Rutz's team (published in 2007) described the use of miniaturized, animal-borne video cameras to study the undisturbed foraging behaviour of wild, free-ranging New Caledonian crows. This work was funded by the UK's Biotechnology and Biological Sciences Research Council (BBSRC) and Natural Environment Research Council (NERC). Dr Christian Rutz is a BBSRC David Phillips Fellow at the Department of Zoology, University of Oxford, and Dr Stuart Bearhop is a Senior Lecturer in the School of Biosciences, University of Exeter. Stable isotope measurements were carried out by Dr Jason Newton, Senior Research Fellow and Manager of the NERC Life Science Mass Spectrometry Facility in East Kilbride. The Facility exists to provide access for UK scientists in the biological, environmental and other sciences to training and research facilities, offering an integrated and comprehensive suite of stable isotope techniques and expertise.
http://www.eurekalert.org/pub_releases/2010-09/uoe-fff091410.php
4.0625
How to Think Like a Computer Scientist: Learning with Python 2nd Edition/Case Study: Catch - 1 Case Study: Catch - 1.1 Getting started - 1.2 Using while to move a ball - 1.3 Varying the pitches - 1.4 Making the ball bounce - 1.5 The break statement - 1.6 Responding to the keyboard - 1.7 Checking for collisions - 1.8 Putting the pieces together - 1.9 Displaying text - 1.10 Abstraction - 1.11 Glossary - 1.12 Exercises - 1.13 Project: pong.py Case Study: Catch In our first case study we will build a small video game using the facilities in the GASP package. The game will shoot a ball across a window from left to right and you will manipulate a mitt at the right side of the window to catch it. Using while to move a ball while statements can be used with gasp to add motion to a program. The following program moves a black ball across an 800 x 600 pixel graphics canvas. Add this program to a file named pitch.py. As the ball moves across the screen, you will see a graphics window that looks like this: GASP ball on yellow background Trace the first few iterations of this program to be sure you see what is happening to the variables x and y. Some new things to learn about GASP from this example: - begin_graphics can take arguments for width, height, title, and background color of the graphics canvas. - set_speed takes a frame rate in frames per second. - Adding filled=True to Circle(...) makes the resulting circle solid. - ball = Circle stores the circle (we will talk later about what a circle actually is) in a variable named ball so that it can be referenced later. - The move_to function in GASP allows a programmer to pass in a shape (the ball in this case) and a location, and moves the shape to that location. - The update_when function is used to delay the action in a gasp program until a specified event occurs. The event 'next_tick' waits until the next frame, for an amount of time determined by the frame rate set with set_speed. Other valid arguments for update_when are 'key_pressed' and 'mouse_clicked'. Varying the pitches To make our game more interesting, we want to be able to vary the speed and direction of the ball. GASP has a function, random_between(low, high), that returns a random integer between low and high. To see how this works, run the following program: Each time the function is called a more or less random integer is chosen between -5 and 5. When we ran this program we got: -2 -1 -4 1 -2 3 -5 -3 4 -5 You will probably get a different sequence of numbers. Let's use random_between to vary the direction of the ball. Replace the line in pitch.py that assigns 1 to y with an assignment to a random number between -4 and 4. Making the ball bounce Running this new version of the program, you will notice that the ball frequently goes off either the top or bottom edge of the screen before it completes its journey. To prevent this, let's make the ball bounce off the edges by changing the sign of dy and sending the ball back in the opposite vertical direction. Add the following as the first line of the body of the while loop in pitch.py: Run the program several times to see how it behaves. The break statement The break statement is used to immediately leave the body of a loop.
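Before looking at break in detail, here is a rough sketch of what the pitch.py program described in the sections above might look like, with the random vertical speed and the bounce included. The GASP calls follow the descriptions given above, but the exact argument order and the color constant are assumptions, so treat this as a sketch rather than the book's exact listing.

```python
from gasp import *   # assumes the GASP package described above is installed

begin_graphics(800, 600, title="Pitch", background=color.YELLOW)  # signature assumed
set_speed(120)

ball_x = 10
ball_y = 300
ball = Circle((ball_x, ball_y), 10, filled=True)
dy = random_between(-4, 4)       # vertical speed chosen at random

while ball_x < 810:
    if ball_y >= 590 or ball_y <= 10:
        dy = -dy                 # bounce off the top and bottom edges
    ball_x += 1
    ball_y += dy
    move_to(ball, (ball_x, ball_y))
    update_when('next_tick')     # wait for the next frame

end_graphics()
```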
Returning to the break statement, the following program implements a simple guessing game: Using a break statement, we can rewrite this program to eliminate the duplication of the input statement: This program makes use of the mathematical law of trichotomy (given real numbers a and b, exactly one of a > b, a < b, or a = b is true). While both versions of the program are 15 lines long, it could be argued that the logic in the second version is clearer. Put this program in a file named guess.py. Responding to the keyboard The following program creates a circle (or mitt) which responds to keyboard input. Pressing the j or k keys moves the mitt up and down, respectively. Add this to a file named mitt.py: Run mitt.py, pressing j and k to move up and down the screen. Checking for collisions The following program moves two balls toward each other from opposite sides of the screen. When they collide, both balls disappear and the program ends: Put this program in a file named collide.py and run it. Putting the pieces together In order to combine the moving ball, moving mitt, and collision detection, we need a single while loop that does each of these things in turn: Put this program in a file named catch.py and run it several times. Be sure to catch the ball on some runs and miss it on others. Displaying text This program displays scores for both a player and the computer on the graphics screen. It generates a random number of 0 or 1 (like flipping a coin) and adds a point to the player if the value is 1 and to the computer if it is not. It then updates the display on the screen. Put this program in a file named scores.py and run it. We can now modify catch.py to display the winner. Immediately after the if ball_x > 810: conditional, add the following: It is left as an exercise to display when the player wins. Abstraction Our program is getting a bit complex. To make matters worse, we are about to increase its complexity. The next stage of development requires a nested loop. The outer loop will handle repeating rounds of play until either the player or the computer reaches a winning score. The inner loop will be the one we already have, which plays a single round, moving the ball and mitt, and determining if a catch or a miss has occurred. Research suggests there are clear limits to our ability to process cognitive tasks (see George A. Miller's The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information). The more complex a program becomes, the more difficult it is for even an experienced programmer to develop and maintain. To handle increasing complexity, we can wrap groups of related statements in functions, using abstraction to hide program details. This allows us to mentally treat a group of programming statements as a single concept, freeing up mental bandwidth for further tasks. The ability to use abstraction is one of the most powerful ideas in computer programming. Here is a completed version of catch.py: Some new things to learn from this example: Following good organizational practices makes programs easier to read. Use the following organization in your programs: - global constants - function definitions - main body of the program - Symbolic constants like COMPUTER_WINS, PLAYER_WINS, and QUIT can be used to enhance readability of the program. It is customary to name constants with all capital letters. In Python it is up to the programmer to never assign a new value to a constant, since the language does not provide an easy way to enforce this (many other programming languages do).
- We took the version of the program developed in section 8.8 and wrapped it in a function named play_round(). play_round makes use of the constants defined at the top of the program. It is much easier to remember COMPUTER_WINS than it is the arbitrary numeric value assigned to it. - A new function, play_game(), creates variables for player_score and comp_score. Using a while loop, it repeatedly calls play_round, checking the result of each call and updating the score appropriately. Finally, when either the player or the computer reaches 5 points, play_game returns the winner to the main body of the program, which then displays the winner and quits. There are two variables named result: one in the play_game function and one in the main body of the program. While they have the same name, they are in different namespaces, and bear no relation to each other. Each function creates its own namespace, and names defined within the body of the function are not visible to code outside the function body. Namespaces will be discussed in greater detail in the next chapter. Exercises - What happens when you press the key while running mitt.py? List the two lines from the program that produce this behavior and explain how they work. - What is the name of the counter variable in guess.py? With a proper strategy, the maximum number of guesses required to arrive at the correct number should be 11. What is this strategy? - What happens when the mitt in mitt.py gets to the top or bottom of the graphics window? List the lines from the program that control this behavior and explain in detail how they work. - Change the value of ball1_dx in collide.py to 2. How does the program behave differently? Now change ball1_dx back to 4 and set ball2_dx to -2. Explain in detail how these changes affect the behavior of the program. - Comment out (put a # in front of the statement) the break statement in collide.py. Do you notice any change in the behavior of the program? Now also comment out the remove_from_screen(ball1) statement. What happens now? Experiment with commenting and uncommenting the two remove_from_screen statements and the break statement until you can describe specifically how these statements work together to produce the desired behavior in the program. - Where can you add the lines to the version of catch.py in section 8.8 so that the program displays this message when the ball is caught? - Trace the flow of execution in the final version of catch.py when you press the escape key during the execution of play_round. What happens when you press this key? Why? - List the main body of the final version of catch.py. Describe in detail what each line of code does. Which statement calls the function that starts the game? - Identify the function responsible for displaying the ball and the mitt. What other operations are provided by this function? Which function keeps track of the score? Is this also the function that displays the score? Justify your answer by discussing specific parts of the code which implement these operations. Project: pong.py Pong was one of the first commercial video games. With a capital P it is a registered trademark, but pong is used to refer to any of the table-tennis-like paddle and ball video games. catch.py already contains all the programming tools we need to develop our own version of pong.
Incrementally changing catch.py into pong.py is the goal of this project, which you will accomplish by completing the following series of exercises: - Copy catch.py to pong1.py and change the ball into a paddle by using Box instead of the Circle. You can look at Appendix A for more information on Box. Make the adjustments needed to keep the paddle on the screen. - Copy pong1.py to pong2.py. Replace the distance function with a boolean function hit(bx, by, r, px, py, h) that returns True when the vertical coordinate of the ball (by) is between the bottom and top of the paddle, and the horizontal location of the ball (bx) is less than or equal to the radius (r) away from the front of the paddle. Use hit to determine when the ball hits the paddle, and make the ball bounce back in the opposite horizontal direction when hit returns True. Your completed function should pass its doctests (a sketch of hit with illustrative doctests is given below). Finally, change the scoring logic to give the player a point when the ball goes off the screen on the left. - Copy pong2.py to pong3.py. Add a new paddle on the left side of the screen which moves up when 'a' is pressed and down when 's' is pressed. Change the starting point for the ball to the center of the screen, (400, 300), and make it randomly move to the left or right at the start of each round.
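A sketch of the hit function described in the pong2.py exercise might look like the following. The doctest cases are illustrative ones written for this sketch (they assume py is the bottom of the paddle and px is its front face), not the book's original doctests.

```python
def hit(bx, by, r, px, py, h):
    """Return True when the ball at (bx, by) with radius r touches the front
    of a right-hand paddle whose bottom edge is at (px, py) and whose height is h.

    >>> hit(760, 100, 10, 780, 80, 60)    # level with the paddle, but 20 px away
    False
    >>> hit(771, 100, 10, 780, 80, 60)    # within one radius of the paddle front
    True
    >>> hit(771, 200, 10, 780, 80, 60)    # close enough horizontally, but above the paddle
    False
    """
    return py <= by <= py + h and px - bx <= r

if __name__ == '__main__':
    import doctest
    doctest.testmod()
```

Running the file directly executes the doctests; a silent exit means all of them passed.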
https://en.m.wikibooks.org/wiki/How_to_Think_Like_a_Computer_Scientist:_Learning_with_Python_2nd_Edition/Case_Study:_Catch
4.03125
Respect - a way of life respect; regard; caring; working; together; honesty; rights; tolerance; discrimination; honesty ; What is respect? At the start of the school year, did you spend some time in your class talking about how the class would work? If you did, I guess that you talked about some really great values like honesty, sharing and helping, responsibility, collaborating (or working together), organisation, and respect. When you think about it, respect is probably the most important. Respect has several meanings. - Having regard for others. That means accepting that other people are different but just as important as you feel you are. Some people may call this tolerance (say tol-er-ans) - Having a proper respect for yourself. That means that you stand up for yourself and don't let yourself be talked into doing stuff that you know is wrong or makes you feel uncomfortable. - Not interfering with others (or their property.) - To consider something worthy of high regard. That really means taking all those other values and living them. Home is the place where you first learn about respect. - You learn about using good manners, like saying 'please' and 'thank you'. - You learn to share things like toys, games and food with other people in your family. - You learn to look after your own things and take care of other things in the house (eg. not jumping on furniture, and wiping your feet etc, so that the house is a good place for everyone to be). - You learn to wait your turn in talking. - You learn to listen. - You learn to understand that you will not always get what you want. - You learn to respect others by helping with chores and not letting the family down. - You learn to respect others in the community where you live. - You learn how to talk to different adults in a way they expect to be spoken to eg grandma and her friends may not like to be called by their first name. When you go to school you will have to learn some different ways to respect others and yourself. - You will learn how to be a member of a class. - You will learn how to behave with teachers and other 'school adults'. - You learn to respect and keep school rules, which help to make your school a safe and caring place for everyone. - You will learn to respect the property of classmates and the school. - You will meet with people from different backgrounds, maybe different countries, cultures and religions. - Some people will look very different to you and your family. - Some people will behave very differently to you and your family. - You can respect their differences and expect that they will respect yours. If people are behaving badly towards you and hurting you or your feelings, then you cannot, and must not, respect their unkind behaviour. Bullying and harassment should never be tolerated. And of course you will not behave in an unkind way towards others, including spreading nasty rumours or gossip. See our topic Dealing with bullies for some ideas on how to deal with this behaviour. respect for yourself Earning respect from yourself is probably harder than earning respect from others. Remember those values again? - If you aim to be an honest, caring person who accepts that everyone is different, always tries hard and is willing to share and help others, then living up to your aims can be very difficult. - Don't give yourself too hard a time if you sometimes make mistakes. Mistakes are what we learn from. 
- Earning respect from others is easy if you live by the values we talked about at the beginning of this topic. People will soon know that you are the kind of person who can be trusted to do the right thing, behave in a caring way and respect others' rights to be themselves. Equity for everyone Say sorry, please and thank you People deserve respect Ensure that everyone's rights are respected Carry respect into all of your life Take time to respect yourself Kim and Kate say "Make respect part of your life. As you grow older and move out more into the world you will meet lots of different people. We live in a very diverse society and if you have learned to respect others then you will be able to fit in well with that society." Check out the Related topics list under the Feedback button to find out more about why Respect should always be a way of life. Outside everyone is different Inside we're just the same. Everyone has feelings. How many can you name? The way that you treat others Is the way that they'll treat you. So respect each other's differences And they'll respect yours too. We've provided this information to help you to understand important things about staying healthy and happy. However, if you feel sick or unhappy, it is important to tell your mum or dad, a teacher or another grown-up.
http://www.cyh.com/HealthTopics/HealthTopicDetailsKids.aspx?p=335&np=287&id=2356
4.15625
Definition - What does Transport Layer mean? The transport layer is the layer in the open system interconnection (OSI) model responsible for end-to-end communication over a network. It provides logical communication between application processes running on different hosts within a layered architecture of protocols and other network components. The transport layer is also responsible for the management of error correction, providing quality and reliability to the end user. This layer enables the host to send and receive error corrected data, packets or messages over a network and is the network component that allows multiplexing. In the OSI model, the transport layer is the fourth layer of this network structure. Techopedia explains Transport Layer Transport layers work transparently within the layers above to deliver and receive data without errors. The send side breaks application messages into segments and passes them on to the network layer. The receiving side then reassembles segments into messages and passes them to the application layer. The transport layer can provide some or all of the following services: - Connection-Oriented Communication: Devices at the end-points of a network communication establish a handshake protocol to ensure a connection is robust before data is exchanged. The weakness of this method is that for each delivered message, there is a requirement for an acknowledgment, adding considerable network load compared to self-error-correcting packets. The repeated requests cause significant slowdown of network speed when defective byte streams or datagrams are sent. - Same Order Delivery: Ensures that packets are always delivered in strict sequence. Although the network layer is responsible, the transport layer can fix any discrepancies in sequence caused by packet drops or device interruption. - Data Integrity: Using checksums, the data integrity across all the delivery layers can be ensured. These checksums guarantee that the data transmitted is the same as the data received through repeated attempts made by other layers to have missing data resent. - Flow Control: Devices at each end of a network connection often have no way of knowing each other's capabilities in terms of data throughput and can therefore send data faster than the receiving device is able to buffer or process it. In these cases, buffer overruns can cause complete communication breakdowns. Conversely, if the receiving device is not receiving data fast enough, this causes a buffer underrun, which may well cause an unnecessary reduction in network performance. - Traffic Control: Digital communications networks are subject to bandwidth and processing speed restrictions, which can mean a huge amount of potential for data congestion on the network. This network congestion can affect almost every part of a network. The transport layer can identify the symptoms of overloaded nodes and reduced flow rates. - Multiplexing: The transmission of multiple packet streams from unrelated applications or other sources (multiplexing) across a network requires some very dedicated control mechanisms, which are found in the transport layer. This multiplexing allows the use of simultaneous applications over a network such as when different internet browsers are opened on the same computer. In the OSI model, multiplexing is handled in the service layer. 
- Byte orientation: Some applications prefer to receive byte streams instead of packets; the transport layer allows for the transmission of byte-oriented data streams if required.
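As a concrete illustration of these transport-layer services in use, the short Python sketch below opens a TCP connection to itself and echoes a few bytes; TCP is the classic transport protocol offering the connection-oriented, reliable, in-order byte-stream delivery described above. The loopback address and port number are arbitrary choices for the example.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007          # arbitrary loopback address and port

# TCP (SOCK_STREAM) provides the connection-oriented, reliable,
# in-order byte-stream service described in the bullets above.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

def echo_once():
    conn, _addr = server.accept()        # completes the client's handshake
    with conn:
        conn.sendall(conn.recv(1024))    # echo the bytes back, intact and in order

threading.Thread(target=echo_once, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))         # TCP three-way handshake happens here
    client.sendall(b"hello, transport layer")
    print(client.recv(1024))             # b'hello, transport layer'

server.close()
```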
https://www.techopedia.com/definition/9760/transport-layer
4.03125
Daily in March and April 11:15 am and 2:15 pm Children are natural mathematicians. They eagerly embrace math as they sort, count, measure, compare, match and problem solve in their everyday play. And that free-form fun translates into an early grasp of math concepts that builds lifelong math success! Throughout March and April, we're doing up math BIG—and small. Join us every day for different math activities calculated to inspire a love of math in every child. Here are some highlights: Size, shape, direction and position are basic concepts of geometry. In this activity, kids use geometric thinking to solve a puzzle or form a cat, sailboat, square or other "picture" using the same seven shapes. Skill Sets: Geometry, spatial thinking Spatial thinking is essential not only for school success, particularly in the STEM areas, but also for everyday life—assembling a model airplane, navigating a new town, remembering where the car is parked. Kids can transform a 2D greeting card into a 3D, lidded box, and see math magic at work! Skill Sets: 2D, 3D, diagonal, center, size, shape Make Your Own Play dough Recipes are ideal for introducing children to early math concepts, such as measurement, counting and sequence. Measure, measure, mix, mix—voila, you've just made play dough! Skill Sets: Counting, measuring, sequence Children can sort by color, create patterns and become familiar with basic shapes as they engage in a simple hammering activity. They’ll build spatial awareness and muscles at the same time! Skill Sets: Sets, attributes, shapes, patterns Early math concepts lay the groundwork for scientific inquiry. Children can estimate, test, measure and compare which shapes fly the farthest using our powerful wind machine. Skill Sets: Shape, size, measurement, farther, closer, comparison, estimation Numbers gain meaning when they're represented by real objects. Kids can measure the circumference of their heads, and then see how many quarters it takes to make it all the way around! Skill Sets: Number sense, measurement, comparing, adding Make a Pattern Patterns are sequences with an underlying rule. Children can use stamps to make a pattern. See if you or a sibling can crack the code to figure out what comes next. Then switch places! Skill Sets: Making patterns, identifying and creating rules Silly Pets in a Pen Children incorporate geometric, sequencing, and counting principles to connect all four sides of a square to create a pen for a pretend pet. Skill Sets: Shapes, counting, comparing, sets Fraction Action Pizzeria An early understanding of fractions paves the way for more complex math functions. The best way to communicate fraction sense to young children is visually, using concrete, familiar objects. Enter the pizza! Kids will learn the basics of fractions with the aid of our giant pizza pie. Skill Sets: Fractions, matching, whole, part Family Math Day Thursday, March 31, 1-7 pm Join us for an afternoon of fun, hands-on math activities designed for the whole family! Visitors will have the opportunity to play some tantalizingly tricky math games, centered on ancient mathematical games from around the world. This all-ages program is free with museum admission Family Math Day is made possible by MIND Research Institue's MathMINDS Initiative. Enhancing Math Talk in Classrooms and Homes Exploring the Math in Play Early Math Guide for Parents of Preschoolers Math at Play Videos
http://www.chicagochildrensmuseum.org/index.php/experience/cardboard-adventures-in-cardboard
4.375
Relationships can be any association between sets of numbers while functions have only one output for a given input. This tutorial works through a bunch of examples of testing whether something is a valid function. As always, we really encourage you to pause the videos and try the problems before Sal does! Common Core Standard: 8.F.A.1 Testing if a relationship is a function Relations and Functions Sal checks whether a given set of points can represent a function. For the set to represent a function, each domain element must have one corresponding range element at most. Sal checks whether a table of people and their heights can represent a function that assigns a height to a name. Sal checks whether y can be described as a function of x if y is always three more than twice x. Sal explains why a vertical line *doesn't* represent a function. Determine whether a table of values of a relationship represents a function. Sal checks whether a description of the price of an order can be represented as a function of the shipping cost. Determine whether a given graph represents a function.
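In the same spirit as the exercises listed above, here is a small Python sketch (not part of Khan Academy's materials) that tests whether a set of (input, output) pairs represents a function, i.e. whether any input appears with more than one output.

```python
def is_function(pairs):
    """Return True if the relation given as (input, output) pairs is a function:
    each input maps to at most one output."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False   # the same input appears with two different outputs
        seen[x] = y
    return True

print(is_function([(1, 2), (2, 4), (3, 6)]))   # True
print(is_function([(1, 2), (1, 3), (2, 4)]))   # False: input 1 has two outputs
```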
https://www.khanacademy.org/math/cc-eighth-grade-math/cc-8th-linear-equations-functions/cc-8th-function-intro
4.34375
The three-dimensional coordinate system has three mutually perpendicular axes, x, y and z. Here, a point needs three coordinates to be defined completely. The coordinates of the point P are expressed as P(x, y, z). Plotting P in 3D is somewhat similar to plotting P in two dimensions, only an extra axis and its coordinate have to be kept in mind. Consider the point P'(2,-3,4). Draw the three coordinate axes (refer to attached image), and note the positive and negative directions of all the axes. To plot the point (2,-3,4), notice that x=2, y=-3 and z=4. To help visualize the point, first locate the point (2,-3) in the xy-plane. It is represented by a cross in the attached image. The point P'(2,-3,4) will be 4 units above the cross, along the z-axis (represented by the bold circle in the image).
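As a companion to the description above, the following matplotlib sketch (an illustration added here, not part of the original answer) plots the point P'(2, -3, 4) together with the cross at (2, -3) in the xy-plane and the 4-unit rise along the z-axis.

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# The point P'(2, -3, 4) and its projection (2, -3, 0) in the xy-plane.
ax.scatter(2, -3, 4, color="black", s=60)
ax.scatter(2, -3, 0, color="red", marker="x", s=60)
ax.plot([2, 2], [-3, -3], [0, 4], linestyle="--")   # rise 4 units along z

ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_xlim(0, 5)
ax.set_ylim(-5, 0)
ax.set_zlim(0, 5)
plt.show()
```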
http://www.enotes.com/homework-help/how-do-you-pot-point-3d-plane-441851
4.03125
Eye Level Math helps improve problem-solving skills by enabling them to master concepts through a small step approach. Development of Mathematical Thinking Basic Thinking Math enables students to complete the foundation of mathematics and covers the following study areas: Numbers, Arithmetic, Measurement, and Equations. Critical Thinking Math enables students to develop depth perception, problem solving, reasoning skills, and covers the following study areas: Patterns and Relationships, Geometry, Measurement, Problem Solving, and Reasoning. - Mastery learning with BTM repetition - Maximization of motivation by online program - Maximization of learning effect by using auto scoring and instant feedback system - Arithmetic game activity - Easy access on accumulated records - Simultaneous learning of BTM & CTM - Learning of new concepts - Improvement of problem solving skill with various supplementary materials - Well systemized assessment What are the benefits of Eye Level Math? 1. Systematic study materials for all levels Eye Level Math uses a systematic curriculum which is divided into various levels according to student abilities. This allows students to fully understand and master the required mathematical concepts in a progressive manner. 2. Study materials that develop the ability to solve problems independently The Eye Level curriculum is progressive. Subtle increases in difficulty in each level makes it easy for all students to learn. This allows students to become comfortable with all necessary concepts before proceeding to the next level. Students will be able to solve questions that are presented as variations of similar concepts. 3. An interactive teaching methodology that incorporates proactive feedback Eye Level is a proactive learning process. Students receive continual, ongoing feedback from our instructors to enhance the student learning process. Instructors also work with parents to maximize feedback. Communication is an integral part of education. A positive environment makes learning optimal for all students. In this case, students are able to learn from both their parents and instructors. 4. Eye Level helps students develop their critical and analytical thinking skills. The active use of learning materials creates a learning environment where students will develop critical and analytical thinking skills. This is accomplished through developing depth perception, location and spatial relationship skills, by utilizing our learning materials such as Numerical Figures, Blocks and Shapes, Clear Paper, Colored Blocks, Mirror, and Wooden Blocks. Difficulty and question variations are introduced systematically throughout all levels. 5. Eye Level allows students to utilize their skills in all areas of study. Performing well in Eye Level not only helps students in mathematics, but is also helpful for applying their knowledge to other areas of academic studies. Skills that students will develop in Eye Level are broad. In most cases, students will be ahead of their class and their peers. Ideally, students will advance faster in all areas of academic studies and thus become more confident in all areas of study.
http://www.myeyelevel.com/America/programs/Math.aspx
4.1875
Appendix:Latin cardinal numerals When someone counts items, that person uses cardinal values. In grammatical terms, a cardinal numeral is a word used to represent such a countable quantity. The English words one, two, three, four, etc. are all examples of cardinal numerals. In Latin, most cardinal numerals behave as indeclinable adjectives. They are usually associated with a noun that is counted, but do not change their endings to agree grammatically with that noun. The exceptions are ūnus (“one”), duo (“two”), trēs (“three”), and multiples of centum (“hundred”), all of which decline. Additionally, although mīlle (“thousand”) is an indeclinable adjective in the singular, it becomes a declinable noun in the plural. These exceptions are further explained in later sections. |1||I||ūnus, ūna, ūnum||11||XI||ūndecim||10||X||decem||100||C||centum| |2||II||duo, duae, duo||12||XII||duodecim||20||XX||vīgintī||200||CC||ducentī, -ae, -a| |3||III||trēs, tria||13||XIII||tredecim||30||XXX||trīgintā||300||CCC||trecentī, -ae, -a| |4||IV||quattuor||14||XIV||quattuordecim||40||XL||quadrāgintā||400||CD||quadringentī, -ae, -a| |5||V||quīnque||15||XV||quīndecim||50||L||quīnquāgintā||500||D||quīngentī, -ae, -a| |6||VI||sex||16||XVI||sēdecim||60||LX||sexāgintā||600||DC||sescentī, -ae, -a| |7||VII||septem||17||XVII||septendecim||70||LXX||septuāgintā||700||DCC||septingentī, -ae, -a| |8||VIII||octō||18||XVIII||duodēvīgintī||80||LXXX||octōgintā||800||DCCC||octingentī, -ae, -a| |9||IX||novem||19||XIX||ūndēvīgintī||90||XC||nōnāgintā||900||CM||nōngentī, -ae, -a| The smaller cardinal numerals, from ūnus (“one”) to vīgintī (“twenty”), have spellings and forms that are not easily predictable and therefore must be learned by students of Latin. Larger cardinal numerals follow more regular patterns of assembly. Inflection : The Latin ūnus (“one”) inflects like an irregular first and second declension adjective. The irregularities occur in the singular genitive, which ends in -īus instead of the usual -ī or -ae, and in the singular dative, which ends in -ī instead of the usual -ō or -ae. The choice of ending will agree with the gender of the associated noun: ūnus equus ("one horse"), ūna clāvis ("one key"), ūnum saxum ("one stone"). The ending will also agree with the grammatical case of the associated noun: ūnīus equī (genitive), ūnam clāvem (accusative), ūnī saxō (dative). Plural : Although it may seem strange at first sight, ūnus does have a set of plural forms. These forms are used when the associated noun has a plural form, but an inherently singular meaning. For example, the Latin noun castra (“camp”) occurs only as a plural neuter form and takes plural endings, even though it identifies one object, hence: ūnōrum castrōrum ("of one camp"). Compounds : When ūnus is used to form compound numerals, such as ūnus et vīgintī ("twenty-one"), the case and gender agree with the associated noun, although the singular is used: vīgintī et ūnam fēminās vīdī . Unlike duo and trēs, the word ūnus is almost never used with mīlle (“thousand”) to indicate how many thousand. |gen||duōrum (duûm)||duārum||duōrum (duûm)| |acc||duōs / duo||duās||duo| Inflection : The Latin duo (“two”) has a highly irregular inflection, derived in part from the old Indo-European dual number. While some of the endings resemble those of a first and second declension adjective, others resemble those of a third declension adjective. 
The choice of ending will agree with the gender of the associated noun, which will necessarily be plural: duo equī ("two horses"), duae clāvēs ("two keys"), duo saxa ("two stones"). The ending will also agree with the grammatical case of the associated noun: duōs equōs (accusative), duārum clāvum (genitive), duōbus saxīs (dative). Compounds : When duo is used to form compound numerals, such as duo et vīgintī or vīgintī duo ("twenty-two"), the case and gender agree with the associated noun. This is also the case when used with the plural of mīlle (“thousand”) to indicate how many thousands: duo mīlia ("two thousands"), duōrum mīlium ("of two thousands"). The choice of ending will agree with the gender of the associated noun, which will necessarily be plural: trēs equī ("three horses"), trēs clāvēs ("three keys"), tria saxa ("three stones"). The ending will also agree with the grammatical case of the associated noun: trēs equōs (accusative), trium clāvum (genitive), tribus saxīs (dative). Compounds : When trēs is used to form compound numerals, such as trēs et vīgintī or vīgintī trēs ("twenty-three"), the case and gender agree with the associated noun. This is also the case when used with the plural of mīlle (“thousand”) to indicate how many thousands: tria mīlia ("three thousands"), trium mīlium ("of three thousands"). IV to XX |1||I||ūnus, ūna, ūnum||11||XI||ūndecim| |2||II||duo, duae, duo||12||XII||duodecim| Many of these numerals are mirrored in English words (such as quadrangle, quintuplet, sextuple, octopus). The numerals for 7 through 10 appear in the English names of months (September, October, November, and December). These months were the seventh through tenth of the Roman calendar, since the Roman year began with mārtius (“March”). Teens : Latin cardinals larger than decem (“ten”) but less than vīgintī (“twenty”) are constructed by addition. The ending -decim (a form of decem) is attached to the numerals ūnūs through novem. The resultant compound carries the same value as the mathematical sum of the components. For example quattuordecim (“fourteen”) is quattuor (“four”) + decem (“ten”). English does much the same by attaching -teen (a form of ten) to smaller numerals, such as the numeral fourteen which is four + ten. In some of these compounds, a spelling and pronunciation change occurs during the attachment, so that sex + decem drops the -x and lengthens the e to yield sēdecim. This kind of change also occurs in English, as in five + ten which softens the sound of the v and drops the e to yield fifteen. Exceptions : There are two exceptions to the general pattern for forming the teens. In Classical Latin, the numerals for 18 and 19 are more frequently written as subtractive compounds. So, although 18 may be written as octōdecim, it is more often written as duodēvīgintī (literally "two from twenty"). Likewise, the numeral for 19 may be written as novemdecim, but is more often encountered as ūndēvīgintī (“one from twenty”). For more information about the subtractive pattern of construction, see the section on "counting backwards". |Multiples of ten| |Multiples of one hundred| |100||C||centum 1||600||DC||sescentī, -ae, -a| |200||CC||ducentī, -ae, -a||700||DCC||septingentī, -ae, -a| |300||CCC||trecentī, -ae, -a||800||DCCC||octingentī, -ae, -a| |400||CD||quadringentī, -ae, -a||900||CM||nōngentī, -ae, -a| |500||D||quīngentī, -ae, -a||1000||M||mīlle, mīlia (mīllia) 2| |1 centum does not inflect. 2 see the following section on mīlle. 
|C (adj.)||NN (noun)| The Latin mīlle (“thousand”) is irregular in that it has two forms. In the singular, it is an indeclinable adjective, but in the plural it is a noun that declines like a third declension neuter i-stem. Notice that the genitive plural ending is -ium. Singular : In the singular, mīlle (“thousand”) functions as an adjective. This singular form is indeclinable, so its ending will remain the same rather than agree with the case or gender of the associated noun. However, the associated noun will necessarily be plural: mīlle equī ("thousand horses"), mīlle clāvēs ("thousand keys"), mīlle saxa ("thousand stones"). This is true regardless of the case or gender of the associated noun. Plural : In the plural, mīlia functions as a noun, and will inflect according to how it is used in the sentence (subject, direct object, etc.). The associated noun being counted will necessarily be in the genitive plural, and so will not agree with the grammatical case of mīlia. Note that, if the numeral before mīlia is duo or trēs, then it will take a neuter form in the same grammatical case as mīlia : octō mīlia equōrum (nominative, "eight thousand of horses"), cum tribus mīlibus clāvum (ablative, "with three thousand of keys"), duōrum mīlium saxōrum (genitive, "of two thousand of stones"). Latin cardinal numerals larger than vīgintī (“twenty”), that are not multiples of ten, are assembled as compound words. The components of these compounds are the numerals ūnus (“one”) through novem (“nine”) and the multiples of decem (“10”), the multiples of centum (“100”), and mīlle (“1000”). Compound numerals in Latin are assembled by one of two basic methods: additive or subtractive. Most compound numerals are additive, meaning that the value of the compound numeral is calculated by adding the values of the component words. However, a few Latin compound numerals are subtractive, meaning that the value of the compound numeral is calculated by subtracting the values of the component words. A large-valued compound numeral may incorporate both additive and subtractive components. |Tens +8 ( or –2 )||Tens +9 ( or –1 )| Of the Latin compound numerals less than centum (“100”), seventeen are normally subtractive. All of these special cases represent values that are one or two less than a multiple of ten, and have names that subtract from a starting value rather than adding to that value. These seventeen exceptions are displayed in the table at right. Note that the compound numeral for 98 is not among the special cases, but instead is formed in the usual additive way. Subtractive compounds normally are written as single words (with no spaces) and are indeclinable. Numerals representing cardinal values that are eight more (two less) than a multiple of ten are constructed literally as: Thus, the numeral for 38 is normally written as duodēquadrāgintā (“two from forty”), rather than as the expected trīgintā octō (“thirty-eight”) or octō et trīgintā (“eight and thirty”). The latter two additive forms are possible, but are not found in Classical Latin as frequently as the subtractive form. Numerals representing cardinal values that are nine more (one less) than a multiple of ten are constructed literally as: Thus, the numeral for 39 is normally written as ūndēquadrāgintā (“one from forty”), rather than as the expected trīgintā novem (“thirty-nine”) or novem et trīgintā (“nine and thirty”). The latter two additive forms are possible, but are not found in Classical Latin as frequently as the subtractive form. 
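The additive and subtractive patterns described above can be summarized as a short, purely illustrative sketch. The Python function below is not part of the appendix: it uses the tens and units from the tables, omits macrons and declension entirely, covers only 18–99, and writes numbers one or two below a multiple of ten subtractively (ūndē-/duodē- plus the next multiple of ten), treating 98 additively as noted above; the form it produces for 99 (undecentum) is inferred from the appendix's count of seventeen subtractive compounds.

```python
# Illustrative only: additive/subtractive construction of Latin cardinals 18-99,
# without macrons and without declension. Vocabulary follows the tables above.
ONES = ["", "unus", "duo", "tres", "quattuor", "quinque",
        "sex", "septem", "octo", "novem"]
TENS = ["", "decem", "viginti", "triginta", "quadraginta", "quinquaginta",
        "sexaginta", "septuaginta", "octoginta", "nonaginta", "centum"]

def latin_cardinal(n: int) -> str:
    """Return an unmacronned Latin cardinal for 18 <= n <= 99 (sketch only)."""
    tens, ones = divmod(n, 10)
    if ones == 0:
        return TENS[tens]                  # exact multiple of ten, e.g. 40 -> quadraginta
    if ones == 9:                          # one less than the next ten: 39 -> undequadraginta
        return "unde" + TENS[tens + 1]
    if ones == 8 and n != 98:              # two less than the next ten: 38 -> duodequadraginta
        return "duode" + TENS[tens + 1]    # (98 is excluded and handled additively below)
    return TENS[tens] + " " + ONES[ones]   # additive form, e.g. 34 -> triginta quattuor

# Examples: latin_cardinal(38) -> "duodequadraginta"
#           latin_cardinal(98) -> "nonaginta octo"
```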
Numbers are almost always treated as adjectives and often come before the noun. They may be used alone as substantive nouns, but as most are indeclinable, this tends to be ambiguous. Mīlle behaves differently: in the plural, as mīlia, the noun being counted must be in the genitive plural. For example, "two thousand soldiers" would be "duo mīlia mīlitum" (literally, "two thousands of soldiers"). Thus a mile is mīlle passūs (literally, "a thousand paces"), but two miles is duo mīlia passuum (literally, "two thousands of paces"). To denote one's age, which in English is expressed with the construction I am ... years old, in Latin one would most commonly say Habeō ... annōs (literally, "I have ... years"). The numeral is in the accusative plural if it declines. See also: Roman numerals on Wikipedia.
https://en.wiktionary.org/wiki/Appendix:Latin_cardinal_numerals
4.15625
How Solar Arrays Are Made A new lab is inventing alternative ways to package and install solar cells. by Kevin Bullis June 21, 2011 Once the cells are sorted by power output, another researcher, Adam Stokes, strings them together with a tool that solders flat strips of metal called busbars to electrical contacts on their front and back. The lab can test different ways to connect the cells, varying factors such as the number and type of busbars and then measuring the resulting performance to determine whether any extra costs are worthwhile. Researchers sandwich a short string of solar cells between glass and a protective film, a process designed to keep the cells dry. This panel will be small enough to fit in one of the specialized chambers the lab uses to test new materials being considered for adoption by the solar industry. A large laminating machine operated by Dan Doble, group leader for the PV Modules Group at Fraunhofer, seals solar cells inside a protective package. To earn back their cost, solar panels must perform well for decades, often under extreme conditions. If even a small amount of water vapor enters a panel, it can corrode contacts and degrade the panel's performance. This chamber can subject solar panels to a wide range of temperatures and humidity levels. It includes a device invented at the Fraunhofer lab that presses on the surface of a panel by inflating a rubber bladder, simulating the pressure of a load of snow. Solar power may be associated with warm, sunny climates, but some of the biggest markets are in snowy places such as Germany. Researchers Dan Doble and Carola Völker lower a solar panel into a tank of water to test how well the circuitry within it is sealed. A voltage of at least 500 volts is applied to the circuits, and an electrical lead in the water detects any current leakage. The test can help determine whether the panels are likely to survive exposure to extreme temperatures and mechanical pressure. The researchers also study micrographs to detect damage. In most solar panels, a hole is cut in the protective envelope surrounding the solar cells to allow a connection to an outside circuit. To speed manufacturing and avoid letting water leak in, the lab is developing a device that can be installed before the cells are encapsulated. The yellow tabs can be inserted between a sheet of encapsulant and the cells and sealed in place during a standard lamination step. The cables sticking out of the device are connected to similar cables in neighboring solar panels on a roof before the panels are connected to an inverter and the power grid. In the current design, this is done by hand, but in a future design, the devices will snap together, allowing the panels to be installed quickly and cheaply.
https://www.technologyreview.com/s/424417/how-solar-arrays-are-made/
4.0625
NPS Photo/Sarah Falzarano Acoustic Technician Laura Levy samples sounds from Colorado River rapids. IN FEBRUARY 1919, THE FIRST AIR TOUR over the Grand Canyon was recorded; that fall the area was officially designated as Grand Canyon National Park. Fifty-six years later, the 1975 Grand Canyon National Park Enlargement Act established that where impacts from aviation occur, natural quiet should be protected as both a resource and a value in the park. Following the National Parks Overflights Act of 1987, the Federal Aviation Administration established a special flight rules area for the park. In an effort to restore natural quiet at Grand Canyon and to improve aviation safety, flights were restricted below 14,500 feet, flight-free zones were established, and special routes for commercial sightseeing tours were created. After another 20 years of interim regulations, congressional interest, departmental reports, negotiations and consultation, and the establishment of a National Park Service-Federal Aviation Administration Grand Canyon Working Group, Grand Canyon National Park is finally on the verge of completing an environmental impact statement to achieve substantial restoration of natural quiet at the park. The 1975 Grand Canyon National Park Enlargement Act established that … natural quiet should be protected as both a resource and a value in the park. So where’s the science? In 2003, the park’s Science and Resources Management Program recognized the critical need to establish a soundscape program to collect and analyze local acoustic data. The Grand Canyon Soundscape Program has since played an active support role in park planning to better steward park soundscapes. In support of overflights planning, Grand Canyon staff recorded 12 months of continuous audio data and measured decibel levels under air tour corridors (see photo). These data allowed park managers to determine natural sound levels for winter and summer seasons in four vegetation zones. Because NPS Management Policies states that the natural ambient sound level is the baseline condition or standard for determining impacts to soundscapes, these data provide park managers with essential information needed for soundscape planning in the park. Data were used to compare noise models, assess developed and transitional area soundscapes, and create visual spectrograms for aircraft audibility analysis. In order to assess impacts to the threatened Mexican spotted owl, acoustic data were collected adjacent to breeding sites; data are currently being analyzed using sound analysis software such as Raven (http://www.birds.cornell.edu/brp/raven/RavenOverview.html) to look for correlations between aircraft noise and the disturbance of birds. In addition to overflights monitoring and management, the park has been interested in a variety of other planning and stewardship activities relating to soundscapes. Activities included collection and analysis of acoustic data from river rapids (see photo), fire-fighting equipment (see photo), and popular visitor use areas. Recently, a sound system was deployed at Tusayan Ruins, located near Desert View, to quantify noise from air tours interfering with ranger programs (using the U.S. Environmental Protection Agency criterion for speech interference for interpretive programs). In 2008 and 2009, soundscape staff collaborated with the Grand Canyon Youth program to develop a soundscape-themed science project for visually impaired teenagers. 
Outdoor recreation planning staff also used acoustic data to determine if helicopters exchanging river trip passengers are complying with Colorado River Management Plan guidelines. Finally, in an effort to support our neighboring parks, Grand Canyon National Park staff established 2007 baseline sound levels at Walnut Canyon National Monument prior to runway expansion at Flagstaff’s Pulliam Airport. While the current focus of the park’s soundscape work relates to overflights planning, park staff hopes to broaden the program across all cultural and natural soundscape issues. Future efforts will include the development of a parkwide soundscape management plan and implementation of the overflights environmental impact statement. For more information and copies of all park reports and publications, please visit our Web site at http://www.nps.gov/grca/naturescience/soundscape.htm. Rodgers, J. 2010. Case Study: Soundscape management at Grand Canyon National Park. Park Science 26(3):46–47. Accessed 11 February 2016 from http://www.nature.nps.gov/ParkScience/index.cfm?ArticleID=351.
http://www.nature.nps.gov/ParkScience/index.cfm?ArticleID=351
4
Epilepsy surgery is a procedure that either removes or isolates the area of your brain where seizures begin. It is a treatment option for people whose seizures are not well controlled with medication. About 30% of people with epilepsy have seizures that are "medically intractable," meaning the seizures continue to happen despite trying 3 or more antiepileptic drugs . People who are considered for surgery undergo extensive testing to locate the source of their seizures and to ensure that removing that region of the brain will not impact their speech, mobility or quality of life [2,3]. What is epilepsy surgery? Epilepsy surgery is a procedure to 1) remove the seizure-producing area of the brain or 2) limit the spread of seizure activity. Surgical results can be considered curative (stopping the seizures) or palliative (restricting the spread of the seizure). The type of surgery performed depends on the type of seizures and where they begin in the brain (Fig 1). Curative procedures, such as lobectomy, cortical excision, or hemispherectomy aim to remove the area of the brain (seizure focus) causing seizures. The goal is to remove all of the seizure focus area without causing loss of brain function. Palliative procedures, such as corpus callosotomy or vagus nerve stimulation (VNS), aim to reduce seizure frequency or severity. Types of epilepsy surgery Curative procedures are performed when tests consistently point to a specific area of the brain where the seizures begin. - Temporal lobectomy is the most common type of surgery for people with temporal lobe epilepsy. It removes a part of the anterior temporal lobe along with the amygdala and hippocampus. A temporal lobectomy leads to a significant reduction or complete seizure control about 70% to 80% of the time [4, 5]. However, memory and language can be affected if this procedure is performed on the dominant hemisphere. - Cortical excision is the second most common type of epilepsy surgery. It removes the outer layer (cortex) of the brain at the seizure focus area. About 40% to 50% of patients have better seizure control. - Hemispherectomy involves the removal of the brain's outer layer (cortex) and anterior temporal lobe on one half of the brain. It is usually performed in children who suffer intractable seizures, have a damaged hemisphere, and experience weakness on one side of the body. Surgery may control seizures for nearly 80% of these patients. Patients often improve in cognitive functioning, attention span, and behavior. Palliative procedures are performed when a seizure focus cannot be determined or it overlaps brain areas critical for movement, speech, or vision. - Corpus callosotomy prevents the spread of generalized seizures from one side of the brain to the other by disconnecting the nerve fibers across the corpus callosum. During surgery the anterior two thirds of the corpus callosum is sectioned. On occasion, a second surgery is performed to cut the posterior one third if the patient does not improve. This surgery is not curative. Rather, it prevents the spread and reduces seizure severity. Some patients experience disconnection syndrome after a complete callosotomy. They may have right-left confusion with motor problems, apathy, or mutism. - Multiple subpial transections create small incisions in the brain to interfere with the spread of seizure impulses. This technique is used when the seizure focus is located in a vital area that cannot be removed. It may be used alone or in combination with a lobectomy. 
- Vagus nerve stimulation (VNS) involves implantation of a device that produces electrical signals to prevent seizures. VNS is similar to a heart pacemaker. A wire (lead) is wrapped around the vagus nerve in the neck. The wire is connected to a generator-battery implanted under the skin near the collarbone. The generator is programmed to produce intermittent electrical signals that travel along the vagus nerve to the brain. In addition, some patients may turn on the device with a magnet when feeling a warning (aura) that a seizure is about to start. VNS is not a cure for epilepsy, it does not work for everyone, and it does not replace the need for anti-epileptic drugs. This procedure is reserved for those who are not candidates for potentially curative brain surgery. VNS reduces seizure frequency by about 30% (similar to the results of the newer AEDs) . Common side effects are a tingling sensation in the neck and mild hoarseness in the voice, both of which occur only during stimulation. Who is a candidate? Epilepsy surgery may be an option if you have: - seizures that are uncontrolled with medications (intractable) or you have severe side effects to the medications - partial seizures that always start in one area of the brain (localized seizure focus) - seizures that significantly affect your quality of life - seizures caused by a lesion such as scar tissue, a brain tumor, arteriovenous malformation (AVM), or birth defect - seizure discharge that spreads to the whole brain (secondary generalization) Most experts recommend that a patient who continues to have seizures after trials of 2 or 3 different medications should have an evaluation at a comprehensive epilepsy treatment program. likelihood of seizure freedom after failure of 3 different medications is less than 5% [1, 2]. The epilepsy team typically consists of epileptologists (neurologists with special expertise in epilepsy), neurosurgeons, neuropsychologists, epilepsy nurse clinicians, and EEG technicians. Patients are initially evaluated by an epileptologist. A complete medical history and physical exam helps identify critical information, such as age of onset and type of seizures (including frequency, severity, and duration) (see Seizures). A patient’s physical exam is usually normal. However, some asymmetries may be seen related to early development when the structural brain lesions formed. For example, a difference in the size of one hand or foot compared to the other may correlate with atrophy of one of the brain’s hemispheres. The following diagnostic studies may be used during an evaluation for epilepsy surgery. Not all tests are required. The epilepsy team will decide which tests are appropriate. - Continuous video-EEG monitoring requires a hospital stay in an epilepsy monitoring unit. For the EEG, a technician glues electrodes onto your scalp to record the electrical activity of the brain. With safe and continuous monitoring, movement/behavior and EEG activity are captured during a seizure with simultaneous recordings by video camera and electroencephalogram (EEG). Careful analysis of activity and brain waves both during and between seizures can provide critical information about where the seizure starts and spreads. Certain behaviors during seizures, such as abnormal posturing of an arm, or specific speech problems during or after a seizure, help your physician to identify where in the brain the seizure begins. - Magnetic Resonance Imaging (MRI) helps identify structural brain abnormalities that can cause epilepsy. 
These include hippocampal atrophy, cavernous angiomas, cortical dysplasias, and tumors. - Positron Emission Tomography (PET) allows the doctor to study brain function by observing how glucose (sugar) is metabolized in the brain. A small amount of radioactive glucose is injected into your bloodstream. The PET scanner takes pictures of the brain that are interpreted by a computer to examine glucose metabolism. Glucose use can increase (hypermetabolism) during a seizure and decrease (hypometabolism) when not having a seizure. These results may help locate areas of dysfunctional brain or other abnormalities, which could correspond to EEG localization of epileptogenic activity. - Single-Photon Emission Computed Tomography (SPECT) provides information about blood flow to brain tissue. Analyzing blood flow to the brain may help determine how specific areas are functioning. Blood flow to an area of the brain during a seizure increases, while blood flow to an area of the brain can decrease when a person is not having a seizure. - Neuropsychological testing evaluates your current level of brain functioning, including memory and language. This test might correlate with diagnostic imaging and EEG. - Wada Test (Intracarotid Amytal test) is used to determine which side of your brain is dominant for language and memory function. Identifying the dominant side, the surgeon plans the operation to avoid affecting these functions. The Wada test, which is performed as part of an angiogram, can show any vascular or blood flow problems (see Angiogram). Sodium amytal is a short-acting barbiturate that is injected into the carotid artery on the right or left side. For a short time, the drug puts one half of the brain (hemisphere) to sleep. You cannot move one side of your body and may be unable to speak. Next, you are asked to identify pictures, words, objects, or numbers. After 5 to 10 minutes when the drug wears off, you are asked if you remember what was shown. The Wada test is then repeated on the other side. Used with neuropsychological testing, results of the Wada test help identify memory and language deficits and predict surgical outcome. - Functional MRI (fMRI) is used to determine the location of brain abnormalities in relation to areas of the brain responsible for speech, memory, and movement. FMRI also helps doctors predict the functional outcome of surgical treatment. FMRI is sometimes used instead of a Wada test. Electrical brain mapping, or electrocorticography, are diagnostic tests that may be necessary if the seizure focus is believed to lie close to important functional areas or if the exact location of the seizure focus remains unclear despite standard EEG and other tests. During a craniotomy operation for these diagnostic tests, subdural or depth electrodes are placed directly on or in the brain through a hole in the skull (craniotomy). - Subdural electrodes aligned on a plastic grid, are placed directly on the brain’s surface (Fig 2). Subdural electrodes allow for a wide area of EEG recording as well as cortical mapping of functional areas. - Intracerebral depth electrodes look like a banded stick. These are placed stereotactically deep into the brain tissue, usually the amygdala and hippocampus of the temporal lobe. Depth electrodes are indicated for patients with bitemporal, bifrontal, or frontal temporal seizures. After the electrodes are placed, the wound is completely closed and bandaged. The patient is then moved to the epilepsy monitoring unit (EMU). 
The EEG technician will connect the electrodes (via wires that pass through small incisions in the skin) to an EEG machine that shows the brain waves and seizure activity. The patient remains in the hospital until sufficient information has been gathered to guide further treatment (typically 5-10 days). If the seizure focus is found and is not in an area of the brain involved in communication, a second surgery may be recommended to remove that brain area. If the seizure focus is not found, the electrodes are removed; follow up consultation with the epileptologist and neurosurgeon will follow. Risks associated with electrical brain mapping include infection and hemorrhage in about 2% to 5% of cases. The surgical decision The epilepsy team meets to review all testing performed to decide if surgery is the best treatment option. All tests should point to a single region in the brain as the source for seizures. If this is the case, and the region of seizure onset is a safe distance away from areas of the brain that control language, movement, and vision, then surgery can be recommend to reduce or eliminate seizures. Who performs epilepsy surgery? Epilepsy surgery is done by a neurosurgeon specifically trained in this field. A patient should have a presurgical evaluation at a comprehensive epilepsy treatment program by a multidisciplinary team of specialists (neurologists, neurosurgeons, neuropsychologists, and nurse clinicians). What happens before surgery? First, in consultation during the office visit, the neurosurgeon will explain the procedure, its risks and benefits, and answer any questions. Next, you will sign consent forms and complete paperwork to inform the surgeon about your medical history (i.e., allergies, medicines, vitamins, bleeding history, anesthesia reactions, previous surgeries). Discuss all medications (prescription, over-the-counter, and herbal supplements) you are taking with your health care provider. Some medications need to be continued or stopped the day of surgery. You may be scheduled for presurgical tests (e.g., blood test, electrocardiogram, chest X-ray) several days before surgery. Stop taking all non-steroidal anti-inflammatory medicines (Naprosyn, Advil, Motrin, Nuprin, Aleve) and blood thinners (coumadin, Plavix, aspirin) 1 week before surgery. Additionally, stop smoking and chewing tobacco 1 week before and 2 weeks after surgery as these activities can cause bleeding problems. No food or drink is permitted past midnight the night before surgery. Morning of surgery - Shower using antibacterial soap. Dress in freshly washed, loose-fitting clothing. - Wear flat-heeled shoes with closed backs. - If you have instructions to take regular medication the morning of surgery, do so with small sips of water. - Remove make-up, hairpins, contacts, body piercings, nail polish, etc. - Leave all valuables and jewelry at home (including wedding bands). - Bring a list of medications (prescriptions, over-the-counter, and herbal supplements) with dosages and the times of day usually taken. - Bring a list of allergies to medication or foods. - Take your AED medication as usual. Arrive at the hospital 2 hours before your scheduled surgery time to complete the necessary paperwork and pre-procedure work-ups. You will meet with a nurse who will ask your name, date of birth, and what procedure you’re having. They will explain the pre-operative process and discuss any questions you may have. An intravenous (IV) line will be placed in your arm. 
An anesthesiologist will talk with you and explain the effects of anesthesia and its risks. What happens during surgery? There are five main steps to the anterior temporal lobectomy. The surgery generally takes 3 to 4 hours. Step 1: prepare the patient You will lie on your back on the operative table and be given anesthesia. Once asleep, your head is placed in a skull fixation device attached to the table that holds your head in position during the surgery. Depending on where the incision will be made, your hair may be shaved. Step 2: perform a craniotomy After your scalp is prepped, the surgeon will make a skin incision to expose the skull. A circular opening in the skull, called a craniotomy, is drilled (see Craniotomy) (Fig 3). This bony opening exposes the protective covering of the brain, called the dura mater, which is opened with scissors. Step 3: perform brain mapping Depending on your specific case, intraoperative EEG recording and stimulation with subdural electrodes may be performed to map brain areas (Fig. 4), or reconfirm the epileptic zone, particularly how much of the lateral temporal cortex is involved. Using a small electrical probe, the surgeon tests locations on the brain’s surface one after another to create a map of functions. During mapping, areas involved with movement can be identified electrically even if the patient is under anesthesia. However, to map areas such as language, sensation, or vision, the patient is awakened to be able to communicate with the surgeon. Local anesthesia and numbing agents are given so you won’t feel any pain. Step 4: remove the seizure focus area Looking through an operative microscope, the surgeon gently retracts the brain and opens a corridor to the seizure focus area. The surgeon then removes that area of brain where seizures occur. Step 5: close the craniotomy The retractors are removed and the dura is closed with sutures. The bone flap is replaced and secured to the skull with titanium plates and screws. The muscles and skin are sutured back together. What happens after surgery? After surgery you'll be taken to the recovery room, where vital signs are monitored as you awake from anesthesia. You'll be transferred to the neuroscience intensive care unit (NSICU) for overnight observation and monitoring. Pain medication will be given as needed. If you experience nausea and headache after surgery, medication can be given to control these symptoms. Once your condition is stable, you will be moved to a room on the Neuroscience floor where you will stay for about 1 to 3 days.If you had a VNS implanted, you may go home after recovery from anesthesia. It is important to work with your neurologist to adjust your medications and refine the programming of the neurostimulator. - After surgery, headache pain is managed with narcotic medication. Because narcotic pain pills are addictive, they are used for a limited period (2 to 4 weeks). Their regular use may also cause constipation, so drink lots of water and eat high fiber foods. Laxatives (e.g., Dulcolax, Senokot, Milk of Magnesia) may be bought without a prescription. Thereafter, pain is managed with acetaminophen (e.g., Tylenol) and nonsteroidal anti-inflammatory drugs (NSAIDs) (e.g., aspirin; ibuprofen, Advil, Motrin, Nuprin; naproxen sodium, Aleve). - A medicine (anticonvulsant) may be prescribed temporarily to prevent seizures. Common anticonvulsants include Dilantin (phenytoin), Tegretol (carbamazepine), and Neurontin (gabapentin). 
Some patients develop side effects (e.g., drowsiness, balance problems, rashes) caused by these anticonvulsants; in these cases, blood samples are taken to monitor the drug levels and manage the side effects. - Do not drive after surgery until discussed with your surgeon and avoid sitting for long periods of time. - Do not lift anything heavier than 5 pounds (e.g., 2-liter bottle of soda), including children. - Housework and yardwork are not permitted until the first follow-up office visit. This includes gardening, mowing, vacuuming, ironing, and loading/unloading the dishwasher, washer, or dryer. - Do not drink alcoholic beverages. - Gradually return to your normal activities. Fatigue is common. - An early exercise program to gently stretch the neck and back may be advised. - Walking is encouraged; start with short walks and gradually increase the distance. Wait to participate in other forms of exercise until discussed with your surgeon. - You may shower and shampoo 3 to 4 days after surgery unless otherwise directed by your surgeon. - Sutures or staples, which remain in place when you go home, will need to be removed 7 to 14 days after surgery. Ask your surgeon or call the office to find out when. When to Call Your Doctor If you experience any of the following: - A temperature that exceeds 101º F - An incision that shows signs of infection, such as redness, swelling, pain, or drainage. - If you are taking an anticonvulsant, and notice drowsiness, balance problems, or rashes. - Decreased alertness, increased drowsiness, weakness of arms or legs, increased headaches, vomiting, or severe neck pain that prevents lowering your chin toward the chest. Patients usually can resume their normal activities after 3 to 4 weeks. However, you cannot drive an automobile until you have approval from your neurologist. Doctors usually recommend that surgical patients stay on AEDs for up to two years after the operation. Some people may have to continue with medication indefinitely for seizure control. If language or memory problems continue past the recovery period, your doctor may recommend speech or physical therapy. What are the risks? No surgery is without risks. General complications of any surgery include bleeding, infection, blood clots, and reactions to anesthesia. Specific complications related to a craniotomy may include: - swelling of the brain, which may require a second craniotomy - nerve damage, which may cause muscle paralysis or weakness - CSF leak, which may require repair - loss of mental functions - permanent brain damage with associated disabilities Specific complications may include: - Memory and language problems after temporal lobectomy. - Temporary double vision after temporal lobectomy. - Increased number of seizures after corpus callosotomy, but the seizures should be less severe. - Reduced visual field after a hemispherectomy. - Partial, one-sided paralysis after a hemispherectomy. Intense rehabilitation often brings back nearly normal abilities. Sources & links If you have more questions, please contact the Mayfield Clinic at 800-325-7787 or 513-221-1100. For information about the University of Cincinnati Neuroscience Institute’s Epilepsy Center, call 866-941-8264. - Wiebe S, Blume WT, Girvin JP, Eliasziw M: A randomized, controlled trial of surgery for temporal-lobe epilepsy. N Engl J Med 345:311-8, 2001 - Yasuda CL, Tedeschi H, Oliveira EL, et al: Comparison of short-term outcome between surgical and clinical treatment in temporal lobe epilepsy: a prospective study. 
Seizure 15(1):35-40, 2006. - Engel J Jr, Wiebe S, French J, Sperling M, et al: Practice parameter: temporal lobe and localized neocortical resections for epilepsy. Epilepsia 44(6):741-751, 2003. - Sperling MR, O'Connor MJ, Saykin: Temporal lobectomy for refractory epilepsy. JAMA 276(6):470-475, 1996. - Dupont S, Tanguy ML, Clemenceau S, et al: Long-term prognosis and psychosocial outcomes after surgery for MTLE. Epilepsia 47(12):2115-24, 2006. - Schachter SC: Vagus nerve stimulation therapy summary. Neurology 59:S15-20, 2002. antiepileptic drug (AED): a medication used to control epileptic seizures. cortical mapping: direct brain recording or stimulation to identify language, motor, and sensory areas of the cortex. cortex: the outer layer of the brain containing nerve cell bodies. disconnection syndrome: the interruption of information transferred from one brain region to another. generalized seizure: a seizure involving the entire brain. hippocampal atrophy: a wasting or decrease in size of the hippocampus. hypermetabolism: faster than normal metabolism. hypometabolism: slower than normal metabolism. ictal: that which happens during a seizure. interictal: that which happens between seizures. intractable: difficult to control. localization: finding the location in the brain where epileptic seizures start. lobectomy: surgical removal of a lobe of the brain. seizure focus: a specific area of the brain where seizures begin. palliative: to alleviate without curing. partial seizure: a seizure involving only a portion of the brain. video EEG monitoring: simultaneous monitoring of a patient’s behavior with a video camera and the patient’s brain activity by EEG reviewed by: Ellen Air, MD, PhD, David Ficker, MD Mayfield Certified Health Info materials are written and developed by the Mayfield Clinic & Spine Institute in association with the University of Cincinnati Neuroscience Institute. This information is not intended to replace the medical advice of your health care provider.
http://www.mayfieldclinic.com/PE-EpilepsySurg.htm
4.03125
History of Baden-Württemberg In the 1st century AD, Württemberg was occupied by the Romans, who defended their control of the territory by constructing a limes (fortified boundary zone). Early in the 3rd century, the Alemanni drove the Romans beyond the Rhine and the Danube, but they in turn succumbed to the Franks under Clovis I, the decisive battle taking place in 496. The area later became part of the Holy Roman Empire. The history of Baden as a state began in the 12th century, as a fief of the Holy Roman Empire. As a fairly inconsequential margraviate that was divided between various branches of the ruling family for much of its history, it gained both status and territory during the Napoleonic era, when it was also raised to the status of grand duchy. In 1871, it became one of the founder states of the German Empire. The monarchy came to an end with the end of the First World War, but Baden itself continued in existence as a state of Germany until the end of the Second World War. Württemberg, often spelled "Wirtemberg" or "Wurtemberg" in English, developed as a political entity in southwest Germany, with the core established around Stuttgart by Count Conrad (died 1110). His descendants expanded Württemberg while surviving Germany's religious wars, changes in imperial policy, and invasions from France. The state had a basic parliamentary system that changed to absolutism in the 18th century. Recognised as a kingdom in 1806–1918, its territory now forms part of the modern German state of Baden-Württemberg, one of the 16 states of Germany, a relatively young federal state that has only existed since 1952. The coat of arms represents the state's several historical component parts, of which Baden and Württemberg are the most important. - 1 Celts, Romans and Alemani - 2 Duchy of Swabia - 3 Hohenstaufen, Welf and Zähringen - 4 Further Austria and the Palatinate - 5 Baden and Württemberg before the Reformation - 6 Reformation period - 7 Peasants' War - 8 Thirty Years' War - 9 Swabian Circle until the French Revolution - 10 Southwest Germany up to 1918 - 11 German southwest up to World War II - 12 Southwest Germany after the war - 13 State of Baden-Württemberg from 1952 to the present - 14 See also - 15 References Celts, Romans and Alemani The origin of the name "Württemberg" remains obscure. Scholars have universally rejected the once-popular derivation from "Wirth am Berg". Some authorities derive it from a proper name: "Wiruto" or "Wirtino," others from a Celtic place-name, "Virolunum" or "Verdunum". In any event, from serving as the name of a castle near the Stuttgart city district of Rotenberg, the name extended over the surrounding country and, as the lords of this district increased their possessions, so the name covered an ever-widening area, until it reached its present extent. Early forms included Wirtenberg, Wirtembenc and Wirtenberc. Wirtemberg was long accepted, and in the latter part of the 16th century Würtemberg and Wurttemberg appeared. In 1806, Württemberg became the official spelling, though Wurtemberg also appears frequently and occurs sometimes in official documents, and even on coins issued after that date. Württemberg's first known inhabitants, the Celts, preceded the arrival of the Suebi. In the first century AD, the Romans conquered the land and defended their position there by constructing a rampart (limes). 
Early in the third century, the Alemanni drove the Romans beyond the Rhine and the Danube, but they in turn succumbed to the Franks under Clovis, the decisive battle taking place in 496. For about 400 years, the district was part of the Frankish empire and was administered by counts until it was subsumed in the ninth century by the German Duchy of Swabia. Duchy of Swabia The Duchy of Swabia is to a large degree comparable to the territory of the Alemanni. The Suevi (Sueben or Swabians) belonged to the tribe of the Alemanni, reshaped in the 3rd century. The name of Swabia is also derived from them. From the 9th century on, in place of the area designation "Alemania," came the name "Schwaben" (Swabia). Swabia was one of the five stem duchies of the medieval Kingdom of the East Franks, and its dukes were thus among the most powerful magnates of Germany. The most notable family to hold Swabia were the Hohenstaufen, who held it, with a brief interruption, from 1079 until 1268. For much of this period, the Hohenstaufen were also Holy Roman Emperors. With the death of Conradin, the last Hohenstaufen duke, the duchy itself disintegrated although King Rudolf I attempted to revive it for his Habsburg family in the late 13th century. With the decline of East Francia power, the House of Zähringen appeared to be ready as the local successor of the power in southwestern Germany and in the northwest in the Kingdom of Arles. With the founding of the city of Bern in 1191, Berthold V, Duke of Zähringen, shows one of the House of Zähringen power centers. East of the Jura Mountains and west of the Reuss was described as Upper Burgundy, and Bern was part of the Landgraviate of Burgundy, which was situated on both sides of the Aar, between Thun and Solothurn. However, Berthold died without an heir. Bern was declared a Free imperial city by Frederick II, Holy Roman Emperor, in 1218. Berthold's death without heirs meant the complete disintegration of southwest Germany and led to the development of the Old Swiss Confederacy and the Duchy of Burgundy. Bern joined Switzerland in the year 1353. Swabia takes its name from the tribe of the Suebi, and the name was often used interchangeably with Alemannia during the existence of the stem-duchy in the High Middle Ages. Even Alsace belonged to it. Swabia was otherwise of great importance in securing the pass route to Italy. After the fall of the Staufers there was never again a Duchy of Swabia. The Habsburgs and the Württembergers endeavored in vain to resurrect it. Hohenstaufen, Welf and Zähringen Three of the noble families of the southwest attained a special importance: the Hohenstaufen, the Welf and the Zähringen. The most successful appear from the view of that time to be the Hohenstaufen, who, as dukes of Swabia from 1079 and as Frankish kings and emperors from 1138 to 1268, attained the greatest influence in Swabia. During the Middle Ages, various counts ruled the territory that now forms Baden, among whom the counts and duchy of Zähringen figure prominently. In 1112, Hermann, son of Hermann, Margrave of Verona (died 1074) and grandson of Berthold II, Duke of Carinthia and the Count of Zähringen, having inherited some of the German estates of his family, called himself Margrave of Baden. The separate history of Baden dates from this time. Hermann appears to have called himself "margrave" rather than "count," because of the family connection to the margrave of Verona. 
His son and grandson, both called Hermann, added to their territories, which were then divided, and the lines of Baden-Baden and Baden-Hochberg were founded, the latter of which divided about a century later into Baden-Hochberg and Baden-Sausenberg. The family of Baden-Baden was very successful in increasing the area of its holdings. The Hohenstaufen family controlled the duchy of Swabia until the death of Conradin in 1268, when a considerable part of its lands fell to the representative of a family first mentioned in about 1080, the count of Württemberg, Conrad von Beutelsbach, who took the name from his ancestral castle of Württemberg. The earliest historical details of a Count of Württemberg relate to one Ulrich I, Count of Württemberg, who ruled from 1241 to 1265. He served as marshal of Swabia and advocate of the town of Ulm, had large possessions in the valleys of the Neckar and the Rems, and acquired Urach in 1260. Under his sons, Ulrich II and Eberhard I, and their successors, the power of the family grew steadily. The charcoal-burner gave him some of his treasure, and was elevated to Duke of Zähringen. To the Zähringer sphere of influence originally belonged Freiburg and Offenburg, Rottweil and Villingen, and, in modern Switzerland, Zürich and Bern. The three prominent noble families were in vigorous competition with one another, even though they were linked by kinship. The mother of the Stauffer King Friedrich Barbarossa (Red beard) was Judith Welfen. The Staufers, as well as the Zähringers, based their claims of rule on ties with the family of the Frankish kings from the House of Salier. Further Austria and the Palatinate Other than the Margraviate of Baden and the Duchy of Württemberg, Further Austria and the Palatinate lay on the edge of the southwestern area. Further Austria (in German: Vorderösterreich or die Vorlande) was the collective name for the old possessions of the Habsburgs in south-western Germany (Swabia), the Alsace, and in Vorarlberg after the focus of the Habsburgs had moved to Austria. Further Austria comprised the Sundgau (southern Alsace) and the Breisgau east of the Rhine (including Freiburg im Breisgau after 1386) and included some scattered territories throughout Swabia, the largest being the margravate Burgau in the area of Augsburg and Ulm. Some territories in Vorarlberg that belonged to the Habsburgs were also considered part of Further Austria. The original homelands of the Habsburgs, the Aargau and much of the other original Habsburg possessions south of the Rhine and Lake Constance were lost in the 14th century to the expanding Old Swiss Confederacy after the battles of Morgarten (1315) and Sempach (1386) and were never considered part of Further Austria, except the Fricktal, which remained a Habsburg property until 1805. The Palatinate arose as the County Palatine of the Rhine, a large feudal state lying on both banks of the Rhine, which seems to have come into existence in the 10th century. The territory fell to the Wittelsbach Dukes of Bavaria in the early 13th century, and during a later division of territory among one of the heirs of Duke Louis II of Upper Bavaria in 1294, the elder branch of the Wittelsbachs came into possession not only of the Rhenish Palatinate, but also of that part of Upper Bavaria itself which was north of the Danube, and which came to be called the Upper Palatinate (Oberpfalz), in contrast to the Lower Palatinate along the Rhine. 
In the Golden Bull of 1356, the Palatinate was made one of the secular electorates, and given the hereditary offices of Archsteward of the Empire and Imperial Vicar of the western half of Germany. From this time forth, the Count Palatine of the Rhine was usually known as the Elector Palatine. Due to the practice of division of territories among different branches of the family, by the early 16th century junior lines of the Palatine Wittelsbachs came to rule in Simmern, Kaiserslautern, and Zweibrücken in the Lower Palatinate, and in Neuburg and Sulzbach in the Upper Palatinate. The Elector Palatine, now based in Heidelberg, converted to Lutheranism in the 1530s. When the senior branch of the family died out in 1559, the Electorate passed to Frederick III of Simmern, a staunch Calvinist, and the Palatinate became one of the major centers of Calvinism in Europe, supporting Calvinist rebellions in both the Netherlands and France. Frederick III's grandson, Frederick IV, and his adviser, Christian of Anhalt, founded the Evangelical Union of Protestant states in 1608, and in 1619 Elector Frederick V (the son-in-law of King James I of England) accepted the throne of Bohemia from rebellious Protestant noblemen. He was soon defeated by the forces of Emperor Ferdinand II at the Battle of White Mountain in 1620, and Spanish and Bavarian troops soon occupied the Palatinate itself. In 1623, Frederick was put under the ban of the Empire, and his territories and Electoral dignity granted to the Duke (now Elector) of Bavaria, Maximilian I. At the Treaty of Westphalia in 1648, the Sundgau became part of France, and in the 18th century, the Habsburgs acquired a few minor new territories in southern Germany such as Tettnang. In the Peace of Pressburg of 1805, Further Austria was dissolved and the formerly Habsburg territories were assigned to Bavaria, Baden, and Württemberg, and the Fricktal to Switzerland. By the Peace of Westphalia in 1648, Frederick V's son, Charles Louis, was restored to the Lower Palatinate, and given a new electoral title, but the Upper Palatinate and the senior electoral title remained with the Bavarian line. In 1685, the Simmern line died out, and the Palatinate was inherited by the Count Palatine of Neuburg (who was also Duke of Jülich and Berg), a Catholic. The Neuburg line, which moved the capital to Mannheim, lasted until 1742, when it, too, became extinct, and the Palatinate was inherited by the Duke Karl Theodor of Sulzbach. The childless Karl Theodor also inherited Bavaria when its electoral line became extinct in 1777, and all the Wittelsbach lands save Zweibrücken on the French border (whose Duke was, in fact, Karl Theodor's presumptive heir) were now under a single ruler. The Palatinate was destroyed in the Wars of the French Revolution – first its left bank territories were occupied, and then annexed, by France starting in 1795, and then, in 1803, its right bank territories were taken by the Margrave of Baden. The provincial government in Alsace was alternately administered by the Palatinate (1408–1504, 1530–1558) and by the Habsburgs (13th and 14th centuries, 1504–1530). Only the margraves of Baden and the counts and dukes of Württemberg included both homelands within their territories. With the political reordering of the southwest after 1800, Further Austria and the Electorate Palatine disappeared from history. Baden and Württemberg before the Reformation The lords of Württemberg were first named in 1092. 
Supposedly a Lord of Virdeberg by Luxembourg had married an heiress of the lords of Beutelsbach. The new Wirtemberg Castle (castle chapel dedicated in 1083) was the central point of a rule that extended from the Neckar and Rems valleys in all directions over the centuries. The family of Baden-Baden was very successful in increasing the area of its holdings, which after several divisions were united by the margrave Bernard I in 1391. Bernard, a soldier of some renown, continued the work of his predecessors and obtained other districts, including Baden-Hochberg, the ruling family of which died out in 1418. During the 15th century, a war with the Count Palatine of the Rhine deprived the Margrave Charles I (died 1475) of a part of his territories, but these losses were more than recovered by his son and successor, Christoph I of Baden (illustration, right). In 1503, the family Baden-Sausenberg became extinct, and the whole of Baden was united by Christophe. Under his sons, Ulrich II and Eberhard I, and their successors, the power of the family grew steadily. Eberhard I (died 1325) opposed, sometimes successfully, three Holy Roman emperors. He doubled the area of his county and transferred his residence from Württemberg Castle to the "Old Castle" in today's city centre of Stuttgart. His successors were not as prominent, but all added something to the land area of Württemberg. In 1381, the Duchy of Teck was bought, and marriage to an heiress added Montbéliard in 1397. The family divided its lands amongst collateral branches several times but, in 1482, the Treaty of Münsingen reunited the territory, declared it indivisible, and united it under Count Eberhard V, called im Bart (The Bearded). This arrangement received the sanction of the Holy Roman Emperor, Maximilian I, and of the Imperial Diet, in 1495. Eberhard V proved one of the most energetic rulers that Württemberg ever had, and, in 1495, his county became a duchy. Eberhard was now Duke Eberhard I, Duke of Württemberg. Württemberg, after the partition from 1442 to 1482, had no further land partitions to endure and remained a relatively closed country. In Baden, however, a partitioning occurred that lasted from 1515 to 1771. Moreover, the various parts of Baden were always physically separated one from the other. Martin Luther's theses and his writings left no one in Germany untouched after 1517. In 1503, the family Baden-Sausenberg became extinct, and the whole of Baden was united by Christoph, who, before his death in 1527, divided it among his three sons. Religious differences increased the family's rivalry. During the period of the Reformation some of the rulers of Baden remained Catholic and some became Protestants. One of Christoph's sons died childless in 1533. In 1535, his remaining sons Bernard and Ernest, having shared their brother's territories, made a fresh division and founded the lines of Baden-Baden and Baden-Pforzheim, called Baden-Durlach after 1565. Further divisions followed, and the weakness caused by these partitions was accentuated by a rivalry between the two main branches of the family, culminating in open warfare. The long reign (1498–1550) of Duke Ulrich, who succeeded to the duchy while still a child, proved a most eventful period for the country, and many traditions cluster round the name of this gifted, unscrupulous and ambitious man. Duke Ulrich of Württemberg had been living in his County of Mömpelgard since 1519. 
He had been exiled from his duchy by his own fault and controversial encroachments into non-Württembergish possessions. In Basel, Duke Ulrich came into contact with the Reformation. Aided by Philip, landgrave of Hesse, and other Protestant princes, he fought a victorious battle against Ferdinand's troops at Lauffen in May 1534. Then, by the treaty of Cadan, he again became duke, but perforce duke of the duchy as an Austrian fief. He subsequently introduced the reformed religious doctrines, endowed Protestant churches and schools throughout his land, and founded the Tübinger Stift seminary in 1536. Ulrich's connection with the Schmalkaldic League led to another expulsion but, in 1547, Charles V reinstated him, albeit on somewhat onerous terms. The total population during the 16th century was between 300,000 and 400,000. Ulrich's son and successor, Christoph (1515–1568), completed the work of converting his subjects to the reformed faith. He introduced a system of church government, the Grosse Kirchenordnung, which endured in part into the 20th century. In this reign, a standing commission started to superintend the finances, and the members of this body, all of whom belonged to the upper classes, gained considerable power in the state, mainly at the expense of the towns. Christopher's son Louis, the founder of the Collegium illustre in Tübingen, died childless in 1593. A kinsman, Frederick I (1557–1608) succeeded to the duchy. This energetic prince disregarded the limits placed on his authority by the rudimentary constitution. By paying a large sum of money, he induced the emperor Rudolph II in 1599 to free the duchy from the suzerainty of Austria. Austria still controlled large areas around the duchy, known as "Further Austria". Thus, once again, Württemberg became a direct fief of the empire, securing its independence. Even the Margraviate of Baden-Baden went over to Lutheranism that same year, but indeed only for a short time. Likewise, after the Peace of Augsburg the Reformation was carried out in the County of Hohenlohe. At the same time, however, the Counter-Reformation began. It was persistently supported by the Emperor and the clerical princes. The living conditions of the peasants in the German southwest at the beginning of the 16th century were quite modest, but an increase in taxes and several bad harvests, with no improvement in sight, led to crisis. Under the sign of the sandal (Bundschuh), that is, the farmer's shoe that tied up with laces, rebellions broke out on the Upper Rhine, in the bishopric of Speyer, in the Black Forest and in the upper Neckar valley at the end of the 15th century. The extortions by which he sought to raise money for his extravagant pleasures excited an uprising known as the arme Konrad (Poor Conrad), not unlike the rebellion in England led by Wat Tyler. The authorities soon restored order, and, in 1514, by the Treaty of Tübingen, the people undertook to pay the duke's debts in return for various political privileges, which in effect laid the foundation of the constitutional liberties of the country. A few years later, Ulrich quarrelled with the Swabian League, and its forces (helped by William IV, Duke of Bavaria, angered by the treatment meted out by Ulrich to his wife Sabina, a Bavarian princess), invaded Württemberg, expelled the duke and sold his duchy to Charles V, Holy Roman Emperor, for 220,000 gulden. Charles handed Württemberg over to his brother, the Holy Roman Emperor Ferdinand I, who served as nominal ruler for a few years. 
Soon, however, the discontent caused by the oppressive Austrian rule, the disturbances in Germany leading to the German Peasants' War and the commotions aroused by the Reformation gave Ulrich an opportunity to recover his duchy. Thus Marx Sittich of Hohenems went against the Hegenau and Klettgau rebels. On 4 November 1525 he struck down a last attempt by the peasants in that same countryside where the peasants' unrest had begun a year before. Emperor Karl V and even Pope Clement VII thanked the Swabian Union for its restraint in the Peasants' War. Thirty Years' War The longest war in German history became, with the intervention of major powers, a global war. The cause was mainly the conflict of religious denominations as a result of the Reformation. Thus, in the southwest of the empire, Catholic and Protestant princes faced one another as enemies – the Catholics (Emperor, Bavaria) united in the League, and the Protestants (Electorate Palatine, Baden-Durlach, Württemberg) in the Union. Unlike his predecessor, the next duke, Johann Frederick (1582–1628), failed to become an absolute ruler, and perforce recognised the checks on his power. During his reign, which ended in July 1628, Württemberg suffered severely from the Thirty Years' War although the duke himself took no part in it. His son and successor Eberhard III (1628–1674), however, plunged into it as an ally of France and Sweden as soon as he came of age in 1633, but after the battle of Nordlingen in 1634, Imperial troops occupied the duchy and the duke himself went into exile for some years. The Peace of Westphalia restored him, but to a depopulated and impoverished country, and he spent his remaining years in efforts to repair the disasters of the lengthy war. Württemberg was a central battlefield of the war. Its population fell by 57% between 1634 and 1655, primarily because of death and disease, declining birthrates, and the mass migration of terrified peasants. From 1584 to 1622, Baden-Baden was in the possession of one of the princes of Baden-Durlach. The house was similarly divided during the Thirty Years' War. Baden suffered severely during this struggle, and both branches of the family were exiled in turn. The Peace of Westphalia in 1648 restored the status quo, and the family rivalry gradually died out. For one part of the southwest, a peace of 150 years began. On the Middle Neckar, in the whole Upper Rhine area and especially in the Electorate Palatine, the wars waged by the French King Louis XIV from 1674 to 1714 caused further terrible destruction. The Kingdom of France penetrated through acquired possessions in Alsace to the Rhine border. Switzerland separated from the Holy Roman Empire. Swabian Circle until the French Revolution The dukedom survived mainly because it was larger than its immediate neighbours. However, it was often under pressure during the Reformation from the Catholic Holy Roman Empire, and from repeated French invasions in the 17th and 18th centuries. Württemberg happened to be in the path of French and Austrian armies engaged in the long rivalry between the Bourbon and Habsburg dynasties. During the wars of the reign of Louis XIV of France, the margravate was ravaged by French troops and the towns of Pforzheim, Durlach, and Baden were destroyed. Louis William, Margrave of Baden-Baden (died 1707), figured prominently among the soldiers who resisted the aggressions of France. It was the life's work of Charles Frederick of Baden-Durlach to give territorial unity to his country. 
Beginning his reign in 1738, and coming of age in 1746, this prince is the most notable of the rulers of Baden. He was interested in the development of agriculture and commerce, sought to improve education and the administration of justice, and proved in general to be a wise and liberal ruler in the Age of Enlightenment. In 1771, Augustus George of Baden-Baden died without sons, and his territories passed to Charles Frederick, who thus finally became ruler of the whole of Baden. Although Baden was united under a single ruler, the territory was not united in its customs and tolls, tax structure, laws or government. Baden did not form a compact territory. Rather, a number of separate districts lay on both banks of the upper Rhine. His opportunity for territorial aggrandisement came during the Napoleonic wars. During the reign of Eberhard Louis (1676–1733), who succeeded as a one-year-old when his father Duke William Louis died in 1677, Württemberg had to face another destructive enemy, Louis XIV of France. In 1688, 1703 and 1707, the French entered the duchy and inflicted brutalities and suffering upon the inhabitants. The sparsely populated country afforded a welcome to fugitive Waldenses, who did something to restore it to prosperity, but the extravagance of the duke, anxious to provide for the expensive tastes of his mistress, Christiana Wilhelmina von Grävenitz, undermined this benefit. Charles Alexander, who became duke in 1733, had become a Roman Catholic while an officer in the Austrian service. His favourite adviser was the Jew Joseph Süß Oppenheimer, and suspicions arose that master and servant were aiming at the suppression of the diet (the local parliament) and the introduction of Roman Catholicism. However, the sudden death of Charles Alexander in March 1737 put an abrupt end to any such plans, and the regent, Carl Rudolf, Duke of Württemberg-Neuenstadt, had Oppenheimer hanged. Charles Eugene (1728–1793), who came of age in 1744, appeared gifted, but proved to be vicious and extravagant, and he soon fell into the hands of unworthy favourites. He spent a great deal of money in building the "New Castle" in Stuttgart and elsewhere, and sided against Prussia during the Seven Years' War of 1756–1763, which was unpopular with his Protestant subjects. His whole reign featured dissension between ruler and ruled, the duke's irregular and arbitrary methods of raising money arousing great discontent. The intervention of the emperor and even of foreign powers ensued and, in 1770, a formal arrangement removed some of the grievances of the people. Charles Eugene did not keep his promises, but later, in his old age, he made a few further concessions. Charles Eugene left no legitimate heirs, and was succeeded by his brother, Louis Eugene (died 1795), who was childless, and then by another brother, Frederick Eugene (died 1797). This latter prince, who had served in the army of Frederick the Great, to whom he was related by marriage, and then managed his family's estates around Montbéliard, educated his children in the Protestant faith as francophones. All of the subsequent Württemberg royal family were descended from him. Thus, when his son Frederick II became duke in 1797, Protestantism returned to the ducal household, and the royal house adhered to this faith thereafter. Nevertheless, the district legislatures as well as the imperial diets offered a possibility of regulating matters in dispute. Much was left over from the trials before the imperial courts, which often lasted decades. 
Southwest Germany up to 1918
In the wars after the French Revolution of 1789, Napoleon, the emperor of the French, rose to be the ruler of the European continent. An enduring result of his policy was a new order of the southwestern German political world. When the French Revolution threatened to be exported throughout Europe in 1792, Baden joined forces against France, and its countryside was devastated once more. In 1796, the margrave was compelled to pay an indemnity and to cede his territories on the left bank of the Rhine to France. Fortune, however, soon returned to his side. In 1803, largely owing to the good offices of Alexander I, emperor of Russia, he received the Bishopric of Konstanz, part of the Rhenish Palatinate, and other smaller districts, together with the dignity of a prince-elector. Changing sides in 1805, he fought for Napoleon, with the result that, by the peace of Pressburg in that year, he obtained the Breisgau and other territories at the expense of the Habsburgs (see Further Austria). In 1806, he joined the Confederation of the Rhine, declared himself a sovereign prince, became a grand duke, and received additional territory.
In Württemberg, on January 1, 1806, Duke Frederick II assumed the title of King Frederick I, abrogated the constitution, and united old and new Württemberg. Subsequently, he placed church lands under the control of the state and received some formerly self-governing areas under the "mediatisation" process. In 1806, he joined the Confederation of the Rhine and received further additions of territory containing 160,000 inhabitants. A little later, by the peace of Vienna in October 1809, about 110,000 more persons came under his rule. In return for these favours, Frederick joined Napoleon Bonaparte in his campaigns against Prussia, Austria and Russia, and of the 16,000 of his subjects who marched to Moscow only a few hundred returned. Then, after the Battle of Leipzig in October 1813, King Frederick deserted the waning fortunes of the French emperor and, by a treaty made with Metternich at Fulda in November 1813, he secured the confirmation of his royal title and of his recent acquisitions of territory, while his troops marched with those of the allies into France. In 1815, the king joined the German Confederation, but the Congress of Vienna made no change in the extent of his lands. In the same year, he laid before the representatives of his people the outline of a new constitution, but they rejected it and, in the midst of the commotion, Frederick died on October 30, 1816.
The new king, William I (reigned 1816–1864), at once took up the constitutional question and, after much discussion, granted a new constitution in September 1819. This constitution, with subsequent modifications, remained in force until 1918 (see Württemberg). A period of quiet now set in, and the condition of the kingdom, its education, agriculture, trade and manufactures, began to receive earnest attention, while by frugality, both in public and in private matters, King William I helped to repair the shattered finances of the country. But the desire for greater political freedom did not entirely fade away under the constitution of 1819 and, after 1830, a certain amount of unrest occurred. This, however, soon died down, while the inclusion of Württemberg in the German Zollverein and the construction of railways fostered trade. The revolutionary movement of 1848 did not leave Württemberg untouched, although no actual violence took place within the kingdom.
King William had to dismiss Johannes Schlayer (1792–1860) and his other ministers, calling to power men with more liberal ideas and the exponents of the idea of a united Germany. King William did proclaim a democratic constitution but, as soon as the movement had spent its force, he dismissed the liberal ministers. In October 1849, Schlayer and his associates returned to power. In Baden, by contrast, there was a serious uprising that had to be put down by force. By interfering with popular electoral rights, the king and his ministers succeeded in assembling a servile diet in 1851, surrendering all the privileges gained since 1848. In this way, the authorities restored the constitution of 1819, and power passed into the hands of a bureaucracy. A concordat with the Papacy proved almost the last act of William's long reign, but the diet repudiated the agreement, preferring to regulate relations between church and state in its own way. In July 1864, Charles (1823–1891, reigned 1864–91) succeeded his father William I as king. Almost at once, he was faced with considerable difficulties. In the duel between Austria and Prussia for supremacy in Germany, William I had consistently taken the Austrian side, and this policy was equally acceptable to the new king and his advisers. In 1866, Württemberg took up arms on behalf of Austria in the Austro-Prussian War, but three weeks after the Battle of Königgrätz on July 3, 1866, her troops suffered a comprehensive defeat at Tauberbischofsheim, and the country lay at the mercy of Prussia. The Prussians occupied the northern part of Württemberg and negotiated a peace in August 1866. By this, Württemberg paid an indemnity of 8,000,000 gulden, but she at once concluded a secret offensive and defensive treaty with her conqueror. Württemberg was a party to the Saint Petersburg Declaration of 1868. The end of the struggle against Prussia allowed a renewal of democratic agitation in Württemberg, but this achieved no tangible results when the great war between France and Prussia broke out in 1870. Although the policy of Württemberg had continued to be antagonistic to Prussia, the kingdom shared in the national enthusiasm which swept over Germany, and its troops took a creditable part in the Battle of Wörth and in other operations of the war. In 1871, Württemberg became a member of the new German Empire, but retained control of her own post office, telegraphs and railways. She had also certain special privileges with regard to taxation and the army and, for the next 10 years, Württemberg's policy enthusiastically supported the new order. Many important reforms, especially in the area of finance, ensued, but a proposal for a union of the railway system with that of the rest of Germany failed. After reductions in taxation in 1889, the reform of the constitution became the question of the hour. King Charles and his ministers wished to strengthen the conservative element in the chambers, but the laws of 1874, 1876 and 1879 only effected slight reforms pending a more thorough settlement. On 6 October 1891, King Charles died suddenly. His cousin William II (1848–1921, reigned 1891–1918) succeeded and continued the policy of his predecessor. Discussions on the reform of the constitution continued, and the election of 1895 memorably returned a powerful party of democrats. King William had no sons, nor had his only Protestant kinsman, Duke Nicholas (1833–1903). 
Consequently, the succession would ultimately pass to a Roman Catholic branch of the family, and this prospect raised certain difficulties about the relations between church and state. The heir to the throne in 1910 was the Roman Catholic Duke Albert (b. 1865). Between 1900 and 1910, the political history of Württemberg centred round the settlement of the constitutional and the educational questions. The constitution underwent revision in 1906, and a settlement of the education difficulty occurred in 1909. In 1904, the railway system was integrated with that of the rest of Germany. The population in 1905 was 2,302,179, of whom 69% were Protestant, 30% Catholic and 0.5% Jewish. Protestants largely preponderated in the Neckar district, and Roman Catholics in that of the Danube. In 1910, an estimated 506,061 people worked in the agricultural sector, 432,114 in industrial occupations and 100,109 in trade and commerce. (see Demographics of Württemberg)
In Baden, amid the confusion at the end of World War I, Grand Duke Frederick II abdicated on 22 November 1918; a republic had already been declared on 14 November. In Württemberg, in the course of the revolutionary activities at the close of World War I, King William II abdicated on November 30, 1918, and a republican government ensued. Württemberg became a state (Land) in the new Weimar Republic. Baden named itself a "democratic republic," Württemberg a "free popular state." Instead of monarchs, state presidents were in charge. They were elected by the state legislatures, in Baden by an annual change, in Württemberg after each legislative election.
German southwest up to World War II
Efforts between 1918 and 1919 towards a merger of Württemberg and Baden remained largely unsuccessful. After the excitements of the 1918–1919 revolution, the five election results between 1919 and 1932 show a decreasing vote for left-wing parties. After the seizure of power by the National Socialist German Workers Party (NSDAP) in 1933, the state borders initially remained unchanged. The state of Baden, the state of Württemberg and the Hohenzollern lands (the government district of Sigmaringen) continued to exist, albeit with much less autonomy with regard to the Reich. From 1934, the Province of Hohenzollern was included in the Gau of Württemberg-Hohenzollern. By 30 April 1945, all of Baden, Württemberg and Hohenzollern were completely occupied.
Southwest Germany after the war
After World War II was over, the states of Baden and Württemberg were split between the American occupation zone in the north and the French occupation zone in the south, which also took in Hohenzollern. The border between the occupation zones followed the district borders, but they were drawn purposely in such a way that the autobahn from Karlsruhe to Munich (today the Bundesautobahn 8) ended up inside the American occupation zone. In the American occupation zone, the state of Württemberg-Baden was founded; in the French occupation zone, the southern part of former Baden became the new state of Baden, while the southern part of Württemberg and Hohenzollern were fused into Württemberg-Hohenzollern. Article 29 of the Basic Law of Germany provided for a way to change the German states via a community vote; however, it could not enter into force due to a veto by the Allied forces. Instead, a separate Article 118 mandated the fusion of the three states in the southwest via a trilateral agreement. If the three affected states failed to agree, federal law would have to regulate the future of the three states.
This article was based on the results of a conference of the German states held in 1948, where the creation of a Southwest State was agreed upon. The alternative, generally favored in South Baden, was to recreate Baden and Württemberg (including Hohenzollern) in its old, pre-war borders. The trilateral agreement failed because the states couldn't agree on the voting system. As such, federal law decided on May 4, 1951 that the area be split into four electoral districts: North Württemberg, South Württemberg, North Baden and South Baden. Because it was clear that both districts in Württemberg as well as North Baden would support the merger, the voting system favored the supporters of the new Southwest State. The state of Baden brought the law to the German Constitutional Court to have it declared as unconstitutional, but failed. The plebiscite took place on December 9, 1951. In both parts of Württemberg, 93% were in favor of the merger, in North Baden 57% were in favor, but in South Baden only 38% were. Because three of four electoral districts voted in favor of the new Southwest State, the merger was decided upon. Had Baden as a whole formed a single electoral district, the vote would have failed. State of Baden-Württemberg from 1952 to the present The members of the constitutional convention were elected on March 9, 1952, and on April 25 the Prime Minister was elected. With this, the new state of Baden-Württemberg was founded. After the constitution of the new state entered force, the members of the constitutional convention formed the state parliament until the first election in 1956. The name Baden-Württemberg was only intended as a temporary name, but ended up the official name of the state because no other name could be agreed upon. In May 1954, the Baden-Württemberg landtag (legislature) decided on adoption of the following coat of arms: three black lions on a golden shield, framed by a deer and a griffin. This coat of arms once belonged to the Staufen family, emperors of the Holy Roman Empire and Dukes of Swabia. The golden deer stands for Württemberg, the griffin for Baden. Conversely the former Württemberg counties of Calw, Freudenstadt, Horb, Rottweil and Tuttlingen were incorporated into the Baden governmental districts of Karlsruhe and Freiburg. The last traces of Hohenzollern disappeared. Between county and district, regional associations were formed that are responsible for overlapping planning. The opponents of the merger did not give up. After the General Treaty gave Germany full sovereignty, the opponents applied for a community vote to restore Baden to its old borders by virtue of paragraph 2 of Article 29 of the Basic Law, which allowed a community vote in states which had been changed after the war without a community vote. The Federal Ministry of the Interior refused the application on the grounds that a community vote had already taken place. The opponents sued in front of the German Constitutional Court and won in 1956, with the court deciding that the plebiscite of 1951 had not been a community vote as defined by the law because the more populous state of Württemberg had had an unfair advantage over the less populous state of Baden. Because the court did not set a date for the community vote, the government simply did nothing. The opponents eventually sued again in 1969, which led to the decision that the vote had to take place before June 30, 1970. On June 7, the majority voted against the proposal to restore the state of Baden. 
https://en.wikipedia.org/wiki/History_of_Baden
4.125
Until about 140 million years ago, dinosaurs had been munching their way through a uniformly green plant world. What happened then is one of evolution's greatest success stories, heralding a new kind of ecological relationship that would transform the planet: The first flowers appeared, competing for the attention of animals to visit them and distribute their pollen to other flowers to ensure the plant's propagation. The myriad ways in which flowers attract pollinators have been studied since the beginning of biology, and few ecological relationships between organisms are as well understood as those between plants and their pollinators. Despite decades of research, a team led by Martin von Arx, a postdoctoral fellow in the lab of Goggy Davidowitz in the University of Arizona department of entomology, now has discovered a previously unknown sensory channel that is used in plant-animal interactions. The white-lined sphinx (Hyles lineata), the most common species of hawkmoth in North America, can detect minuscule differences in humidity when hovering near a flower, differences that tell it whether there is enough nectar inside to warrant a visit. The findings constitute the first documented case of a pollinator using humidity as a direct cue in its foraging behavior and are published in the journal Proceedings of the National Academy of Sciences. The study, "Floral humidity as a reliable sensory cue for profitability assessment by nectar-foraging hawkmoths," is co-authored by Davidowitz, Joaquín Goyret and Robert Raguso at the department of neurobiology and behavior at Cornell University in Ithaca, New York, where the work was carried out. "Traditionally, most research on plant-pollinator interactions has focused on static cues like floral scent, color or shape," von Arx said. "All this time, evaporation from nectar was right under our noses, but few people ever looked. We were able to show that the insects actually perceive this cue, and it allows them to directly assess the reward that they might get from the flower." Unlike previously recognized cues used by pollinators such as flower size, shape or color, which don't necessarily reveal anything about the actual nectar levels waiting inside, the humidity evaporating from the flower's nectar provides an "honest" signal to a potential visitor. Scent, for example, is independent of nectar, which is odorless in most plants; the fragrance usually is produced by the petals. "We were always intrigued by this question," von Arx said. "Given that the known cues like flower shape and color are independent of the abundance of nectar, we were wondering if there is some other cue the insects might use. You would expect natural selection to favor an ability to sense a cue that is directly linked to the nectar reward." To a hawkmoth setting out at dusk to search for nectar-bearing flowers of one of its favorite plants, the tufted evening primrose (Oenothera cespitosa), being able to quickly tell whether a flower is worth visiting can make the difference between life and death. Hovering in front of a flower while probing it with its long proboscis (the moth's "tongue") is one of the most energetically costly modes of flight, von Arx explained. And once the insect plunges its head deep inside to reach all of the nectar, it is very vulnerable to predators such as bats. "The metabolic cost of hovering in hawkmoths is more than 100 times that of a moth at rest," said Davidowitz. "This is the most costly mode of locomotion ever measured. 
An individual hawkmoth may spend 5-10 seconds evaluating whether a flower has nectar; multiply that by hundreds of flowers visited a night, and the moth is expending a huge amount of energy searching for nectar that may not be there. The energy saved by avoiding such behavior can go into making more eggs. For a moth that lives only about a week, that is a very big deal." Add to that the "Black Friday" effect: fierce competition for limited supplies while they last. "Imagine: As soon as the sun sets, all the hawkmoths fly around flower patches in the desert," von Arx said. "These flowers open within minutes of each other, and as soon as they do, the moths go there. A big flower patch or a plant with multiple flowers might attract many moths at the same time, so it's very important for an individual to pick the most profitable one very quickly." The research group first measured humidity levels around a nectar-bearing flower by enclosing primrose plants in a sealed container and scanning the air inside with highly sensitive humidity-measuring devices called hygrometers. They found that humidity just above the opening flower was slightly higher than ambient levels, caused partly by a plume of water vapor emanating from the flower's nectar tube. To study whether and how moths respond to the humidity evaporating from nectar stores, the research team put artificial flowers, designed to exclude any potential signal other than humidity, in a flight cage large enough for the moths to fly about freely. Even though none of the artificial flowers had nectar, the moths would preferentially hover and extend their proboscis into those that had slightly elevated humidity compared with those that matched the humidity around them. The animals were able to sense humidity near a flower elevated by as little as 4 percent above the ambient humidity in the flight cage, despite the turbulence generated by many moths hovering about. "It was really exciting to see their high sensitivity to humidity in that they can perceive such a minute amount of difference in such a dynamic environment," von Arx said. The results help researchers better understand the ecological relationships between flowers and their pollinators, especially in arid environments such as the Southwestern U.S. Even though most plant-pollinator relationships are mutually beneficial, with the plant rewarding the pollinator's help with food, their interests conflict. "Speaking in evolutionary terms, the flower wants to be visited by a pollinator, but it doesn't want to invest too much because sacrificing resources and energy to make nectar is expensive," von Arx explained. "Often, plants are dishonest in their advertising, by presenting attractive flowers with no nectar." But under certain circumstances, especially in desert environments, where water is scarce, it is beneficial for a flower to be honest, the researchers believe. "If you're one of only a few flowers and there are lots of pollinators out there, you don't have to be honest about how much nectar you have because they'll visit anyway," von Arx said. "But if you want the attention of just a few, you really have to go all out. So by saying, 'Hey, come here, I have lots of nectar,' you're giving a faithful signal about an actual benefit that the pollinators can perceive and evaluate." 
"I think in this case we showed that honesty makes sense in this system, because plants pollinated by hawkmoths are often pollinator-limited, and this signal, especially in the desert environment, is very potent." According to von Arx, relative humidity plays an important role in the insect world and has been associated with choosing a suitable habitat but never was studied in the context of foraging for nectar. For example, neurobiological experiments revealed that cockroaches are able to detect humidity changes of a fraction of a percent. "As creatures who use vision and olfaction, humans think in odors and shape, and color," von Arx said. "We are biased by what we can perceive. We know that moths have hygroreceptors on the tips of their antennae, but they remain a mystery for the most part. We know a lot about olfactory receptors, mechanoreceptors and vision. The insect eye has been studied in and out. But hygroreception? We still don't really know how that actually works." |Contact: Daniel Stolte| University of Arizona
http://www.bio-medicine.org/biology-news-1/Got-nectar-3F-To-hawkmoths--humidity-is-a-cue-25177-1/
4.0625
The first duty of love is to listen. — Paul Tillich
To explore the experience of empathy is to understand more deeply the first Unitarian Universalist principle: the inherent worth and dignity of all people (and all beings). The Merriam-Webster online dictionary's definition of "empathy" includes "the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another." Empathy is the necessary action behind love, forgiveness, compassion and caring, and the driving force of most good works in our world. Cures for disease, laws protecting the vulnerable, charitable contributions and even wars fought to end brutality are examples of the results of empathy. This session introduces empathy as a tool for discerning good and just action. It also guides children to recognize and respect multiple perspectives, and to understand that any given scenario can have multiple truths. In this session the children will hear a Scottish folk tale about a seal hunter who wounds a seal and then is given a chance to experience this wounding from the seal's perspective. Following the story the children will have further opportunities to look at situations from multiple perspectives. They will also participate in an exercise of empathetic listening with their peers to learn one of the basic skills of empathy that can be practiced on a daily basis. As Kevin Ryan and Karen Bohlin wrote in Building Character in Schools (San Francisco: Jossey-Bass Publishers, 1999), "Such experiences (as gaining empathy through hearing the stories of others) encourage students to resolve in the quiet of their hearts to stand up for the threatened and the vulnerable." The Faith in Action component of this session offers an activity for practicing empathy, justice, and goodness by card- or letter-writing to protect seals that are being hunted now. A longer-term Faith in Action project brings an awareness and/or fundraising project to the larger congregational community. In this session, the children add "Empathy" to the Moral Compass poster.
This session will:
- Give participants an opportunity to share acts of goodness that they have done (or witnessed)
- Provide a story and active experiences that demonstrate the meaning of the word "Empathy" and how empathy feels
- Teach that an important part of acting out of goodness is to look at things from other perspectives besides one's own
- Help participants learn to identify, respect and value the perspectives and experiences of others which differ from their own
- Strengthen participants' connection to and sense of responsibility to their faith community
Participants will:
- Take pride in sharing acts of goodness and justice they have done (or witnessed) in the "Gems of Goodness" project
- Hear and act out a story about how someone learns to see things from another perspective
- Learn to listen and speak empathetically
- Participate in clean-up together
- Optional: Practice using empathy as they write cards or letters to advocate protection of seals from hunting
http://www.uua.org/re/tapestry/children/tales/session4/123223.shtml
4.03125
Functions of Gerunds
Because gerunds are nouns, they can be used just as nouns are used. HOW ARE NOUNS USED?
Gerunds are subjects. Example: Skiing is a popular winter sport.
Gerunds are objects. Example: I love skiing.
Gerunds are objects of prepositions. Example: I can't live without skiing! I'm interested in skiing. Let's talk about skiing. I'm afraid of skiing.
Functions of Gerunds
Gerunds are also object complements: He spends time reading. She found him working in the kitchen. Don't waste your time studying this stuff. I caught the students cheating on the exam. Fahad found Abdulrahman smoking outside. Natsume and Paulina saw Mariana chatting with Rena and Celia.
Modifying Gerunds
Gerunds can be modified with possessives and negatives. Gerunds can be modified with a noun or object pronoun or the possessive noun or the possessive pronoun. I appreciate YOUR participating in the survey. Hilal doesn't mind MY reading her letter. I thanked Nayef for HIS coming early to help me. Would you mind NOT BEING late tomorrow? I'm unhappy about the students' interrupting the lecture.
Modifying Gerunds
Use the possessive noun or possessive pronoun with formal English. I'm unhappy with Paulina's missing class. I'm unhappy with her missing class. I'm upset about your missing class. I'm disappointed with his missing class.
Use the noun object or object pronoun with informal English. I'm unhappy with Paulina missing class. I'm unhappy with her missing class. I'm upset about you missing class. I'm disappointed with him missing class.
Gerunds follow certain verbs as objects
Enjoy, quit (give up), appreciate, mind, finish, stop (quit), avoid, postpone, delay, keep, keep on, consider, discuss, mention, suggest
Examples
Yichen doesn't enjoy traveling. Fahad wants Abdulaziz to quit smoking. I appreciate the students' coming on time to class. Would you mind opening the door for me? Would you mind helping me with these bags? You have to finish working by 5 p.m. tonight. Don't avoid taking the test. Holly suggested my reviewing the material before taking the test. Keep moving! Don't stop! I kept on speaking even after my mother hung up the phone on me. Would you consider staying for a while?
Go + Gerund
Did you go shopping? We went fishing yesterday. Have you ever gone camping? Do you like to go hunting? Let's go skating this weekend. My kids and I love to go swimming.
Special expressions followed by gerunds
Have fun: What do you have fun doing?
Have a good time: I had a good time hanging out with my friends.
Spend time: Many guys enjoy spending time playing video games.
Waste time: Let's not waste time reviewing material that you already know.
Have trouble: Some people have trouble adapting to new situations.
Special expressions followed by gerunds
Have difficulty: Are you having difficulty staying awake today?
Have a hard time: Is anyone having a hard time understanding what I'm saying here?
Have a difficult time: I hope that nobody has a difficult time passing level 5!
Special expressions followed by gerunds
Sit + expression of place + gerund: I was just sitting in my seat minding my own business when the teacher asked me to leave!
Stand + expression of place + gerund: I'm sorry I'm late. I was standing on line waiting for my name to be called.
Active and Passive Gerunds
Active Gerund: Inviting them to her wedding was a nice gesture on her part.
Passive Gerund: Being invited to her wedding was a great surprise to them.
Active and Passive Gerunds
Past Active Gerund: Having invited them to her wedding made her feel good.
(She was glad she had invited them to the wedding.)
I'm happy having been your teacher. (I'm happy now that I was your teacher before now.)
Past Passive Gerund: They were so happy having been invited to her wedding. (They were so happy that they had been invited.)
I hope you're happy having been taught by me. (I hope that you're happy now that I taught you in the past.)
Active and Passive Gerunds
You're probably wondering why you would ever need to use something as complicated as a past passive gerund... It helps us express something that happened one step back in the past (similar to past perfect).
EXAMPLE: I hate being ignored (in general). I'm so upset at having been ignored. (I was ignored yesterday at the meeting. It bothers me NOW that they ignored me THEN.)
http://www.slideshare.net/holly_cin/gerunds-15361651
4.09375
A close-up view of swirling clouds of gas and dust around young stars has given astronomers a new glimpse into how baby solar systems and their planets are formed. After stars are born they usually retain clouds of leftover material around them that condense into rings called protoplanetary disks. Over time, the gas and dust in these disks clump together under the pull of gravity to build planets. In the new study, astronomers peered into a group of nascent solar systems with unprecedented detail by combining the light collected by the two Keck telescopes on Mauna Kea in Hawaii. This allowed them to achieve the extremely fine resolution necessary to observe processes that occur at the border between a star 500 light-years from Earth and its surrounding disk of gas and dust. The view, scientists said, is comparable to standing on a rooftop in Tucson, Ariz., and trying to observe an ant nibbling on a grain of rice in New York's Central Park on the other side of the United States. "We were able to get really, really close to the star and look right at the interface between the gas-rich protoplanetary disk and the star," said lead researcher Joshua Eisner, an astronomer at the University of Arizona. The researchers looked at 15 young stars with protoplanetary disks in our Milky Way galaxy. All the stars weighed between half and 10 times the mass of the sun. The astronomers were able to distinguish between gas (mostly made of hydrogen atoms) and dust in the disks to parse out what was happening in these budding solar systems, which were roughly a few million years old. "These disks will be around for a few million years more," Eisner said. "By that time, the first planets, gas giants similar to Jupiter and Saturn, may form, using up a lot of the disk material." Scientists think gas giant planets can form quickly in only a few million years. After these giants use up most of the gas in the disk, the leftover dust and rock will cluster together to form the rocky terrestrial planets, such as Earth, Mars and Venus. The new observations could also help astronomers understand how stars grow in size by sucking up some of the gas from their surrounding disks. Scientists think stars accrete this matter in two ways. In one method, the gas washes up directly to the surface of the star, and then is incorporated into the star's body. In another mechanism, a star's powerful magnetic field can push away surrounding material, creating an empty envelope between the star and its disk. Atoms from the disk can then accelerate along the magnetic field lines into the star. "Once trapped in the star's magnetic field, the gas is being funneled along the field lines arching out high above and below the disk's plane," Eisner explained. "The material then crashes into the star's polar regions at high velocities." The researchers detail their findings in an upcoming issue of the Astrophysical Journal.
http://www.space.com/8605-solar-system-baby-photos-reveal-planets-form.html
4.03125
It's very interesting to wonder what life would have been like in a normal Aztec society family. The record is frustratingly sparse: record keepers were more interested in other aspects of society, and family life was considered the sphere of women. Still, there are many things we do know. Like other aspects of Aztec culture, life in an Aztec society family was permeated by religious beliefs, right from the start. Each decision was ruled by the laws of religion, and often tied to the sacred days in the Aztec calendar. The life of a new family began at marriage, typically in the early 20s for a man and mid-teens for the woman. Marriages were arranged by the relatives (though the children may have had input). The parents would have to talk to the religious leaders, and discuss the signs under which both of the children had been born. The wedding day, of course, was chosen for similar religious reasons. All this was full of ceremony and form. In an Aztec society family a husband may have had more than one wife - but it would be his primary wife that would go through all the ceremony. The man may have had many secondary wives, who would also be officially recognized. The children of the principal wife would be the inheritors - or, in the case of a ruler, only a child from the principal wife would be a successor. Still, the husband was supposed to treat all wives equally in daily life. As you may imagine, one family could grow very large. As a result, most of the husbands with numerous wives and children were the wealthy ones, with the poor more likely to have one wife. In one sense, society was dominated by the men. The man was considered the head of the home. However, women had a great deal of power as well. They may have had more power in earlier times, with men taking more power toward the end of the Aztec era. Women often were able to run businesses out of their homes, and had a lot of influence in the family and the raising of children. The older widows were much respected, and people listened to their advice. Adultery was a crime - death was the punishment. Divorce was allowed on certain grounds, presented by the man or woman; property was divided equally and both sides were free. (more on Aztec crime and punishment here) Marriage marked the entrance into Aztec adult and independent society. The family was given a piece of land, and they would have their own home. Depending on their situation, both the man and the woman may be involved in working the land. Of course, while a woman was involved in household tasks, a man would be more likely to become a warrior. Though there were many occupations (farmer, priest, doctor, etc), being an Aztec warrior was particularly glorified. War was even used as a symbol of childbirth. The baby was a "captive" in the womb, struggling to be victorious. The woman, too, was in a battle. In fact, in many ways a woman who died in childbirth was glorified in the same way as a warrior who died in battle, and honoured for her courage. A child was welcomed into the world and into the religious system. A hymn for the new child to the goddess of childbirth went like this:
Down there, where Ayopechcatl lives, the jewel is born, a child has come into the world.
It is down there, in her own place, that the children are born.
Come, come here, new-born child, come here.
Come, come here, jewel-child, come here.
(from the Codex Florentino)
[Illustration: Fathers taking their sons to school]
I've written a little more about occupations and education here.
Education, at least in the early years, was the responsibility of the parents. The father would teach the sons, and the mother the daughters. Work and education, then, would be a big part of the Aztec society family. Work could also break up the family - the father might travel, or, in the case of warriors, might die on the battlefield. As I mentioned, discipline was often harsh. Up until the age of 8, the preferred method of discipline was simply verbal. But harsh punishments would be in store for the older child, as he was prepared for the harsher realities of Aztec life. As children grew older, parents would still be in charge of education, but they would more often send the children to school. There were various branches of education that children would be involved in. If a family member escaped death on the battlefield or death from illness and so on... they would be among the ueuetque - the wise elders of society. They would offer advice, either informally or on a council. Of course, they were held in high regard in the family itself. The elderly were important in the Aztec society family, and their health care, aging and death were also matters of ritual and religion. An Aztec society family was ruled in many ways by religion, tradition, and structure. Life was ruled by fate - from beginning to end your family life, occupation, and success depended on the important dates in your life and the structure of the universe and the nature of the gods. At the same time, life was full of celebration, hard work, joy, sorrow, and love, much as it has been in societies around the world for all of history. For more on Aztec society family, I recommend Daily Life of the Aztecs by Jacques Soustelle.
http://www.aztec-history.com/aztec-society-family.html
4.1875
Rosa Parks held no elected office. She was not born into wealth or power. Yet sixty years ago today, Rosa Parks changed America. Refusing to give up a seat on a segregated bus was the simplest of gestures, but her grace, dignity, and refusal to tolerate injustice helped spark a Civil Rights Movement that spread across America. Just a few days after Rosa Parks' arrest in Montgomery, Alabama, a little-known, 26-year-old pastor named Martin Luther King Jr. stood by her side, along with thousands of her fellow citizens. Together, they began a boycott. Three hundred and eighty-five days later, the Montgomery buses were desegregated, and the entire foundation of Jim Crow began to crumble.
More Than Equals, co-authored by Chris Rice and the late Spencer Perkins, is considered one of the pivotal books in the Christian racial reconciliation movement that found its greatest momentum in the early and mid-1990s.
On Feb. 1, 1960, four African-American students sat down at the "whites-only" lunch counter at the F.W. Woolworth store in Greensboro, North Carolina. As a child, I was told by my late father that he took his youth group to participate in these sit-ins.
https://sojo.net/tags/montgomery-bus-boycott
4.15625
July 9, 2008
Scientists Find Evidence Of Water On The Moon
Scientists have concluded that evidence collected from the surface of the Moon almost 40 years ago shows that water has existed there since its infancy. Small green and orange pebble-like beads collected decades ago from the Moon's surface were used to analyze the lunar sand samples that are thought to be some 3 billion years of age. The researchers believe these samples could support evidence that water persists in the shadowed craters of the Moon's surface and that it is native to the moon as opposed to being carried there by comets. Alberto Saal, assistant professor of geological sciences at Brown University, believes that the water was contained in magmas erupted from fire fountains onto the surface of the Moon more than 3 billion years ago. About 95 percent of the water vapor from the magma was lost to space during this eruptive "degassing". He said that if the Moon's volcanoes released 95 percent of their water, it was possible that traces of water vapor may have drifted toward the cold poles of the Moon, where they may remain as ice in permanently shadowed craters. He noted that several lunar missions have found just such evidence. A technique called secondary ion mass spectrometry, or SIMS, can detect minute amounts of elements in samples. Erik Hauri of the Carnegie Institution for Science in Washington developed the technique along with his research team to find evidence of water in the Earth's molten mantle. "Then one day I said, 'Look, why don't we go and try it on the Moon glass?'" Saal said. It took them three years to convince NASA to fund a study of the samples brought back by astronauts during the Apollo missions in the 1970s. After careful analysis of 40 of the tiny glass beads, which were broken apart, they discovered evidence that overturned decades of conventional wisdom that the moon is dry. Saal, Hauri and colleagues did not find water directly, but they did measure hydrogen, and it resembled the measurements they have done to detect hydrogen, and eventually water, in samples from Earth's mantle. They found that the hydrogen in the sample vaporized during volcanic activity would be similar to lava spurts seen on Earth today. "We looked at many factors over a wide range of cooling rates that would affect all the volatiles simultaneously and came up with the right mix," said James Van Orman, a former Carnegie researcher now at Case Western Reserve University. Hauri said the findings suggest the possibility that the moon's interior might have had as much water as the Earth's upper mantle. "It suggests that water was present within the Earth before the giant collision that formed the Moon," Saal said. "That points to two possibilities: Water either was not completely vaporized in that collision or it was added a short time (less than 100 million years) afterward by volatiles introduced from the outside, such as with meteorites." NASA plans to send its Lunar Reconnaissance Orbiter later this year to search for evidence of water ice at the Moon's south pole. If water is found, the researchers may have figured out the origin. Saal and his research team's study was published in the July edition of the journal Nature.
Image Caption: Watery Glasses. Researchers led by Brown geologist Alberto Saal analyzed lunar volcanic glasses, such as these gathered by the Apollo 15 mission, and used a new analytic technique to detect water. 
The discovery strongly suggests that water has been a part of the Moon since its early existence, and perhaps since it was first created.
Credit: NASA
http://www.redorbit.com/news/space/1470218/scientists_find_evidence_of_water_on_the_moon/
4.0625
The right hand rule is a way to predict the direction of a force in a magnetic field. To predict the behavior of positive charges, use your right hand. To predict the behavior of negative charges, use your left hand. If your thumb points in the direction of the velocity and your fingers point in the direction of the magnetic field, your palm points in the direction of the force. So let's talk about the Right Hand Rule. This is one of the most major things that comes up when you're studying magnetic fields for the first time and really it first comes up when you do cross products maybe in pre-calculus but people kind of forget or maybe haven't taken pre-calc so let's talk about it because it's not difficult but its easy to kind of mess up if you're not used to how it works and I'll show you 3 different Right Hand Rules actually kind of 4 but really 3 all the same and then one is a little different. Let's just go through it and just see how it works. Alright so we start off with the Lorentz force law f equals qv cross b. Alright cross products work like this, you take your right hand, you put your thumb in the direction of the first vector your fingers in the direction of the second vector and your palm points in the direction of the cross product so when we're doing this with the Lorentz force law, first vector velocity so that means my thumb always has to play the role of the velocity. Second vector magnetic field, so that means my fingers have to play the role of the magnetic field the cross product gives the force so that means my palm is always in the direction of the force. Alright, so let's do a little bit of work with this but first of all I have to show you open major convention that you may or may not be aware of. Magnetic fields have to be in three dimensions but look I'm drawing everything on the board, that board only represents a two dimensional space so I can indicate over I can indicate up but how do I indicate out or in. The way that we do that is we have this convention we say look whenever you see a cross that means that you're talking about a vector that is pointing into the board okay? Basically you can think about it like you know when I put a vector like that it's a arrow what would it look like if the arrow was pointing into the board? Well you'd see the feathers and so that's what the cross is, the feathers. What if it's pointing out of the board? Well now I'm going to see the arrow tip so I just make a little dot now sometimes I'll circle that to indicate that its not just an errant dot that I just put on there but sometimes I'm not really that worried about it for example if I've got lot's of them, it's obvious that this represents the magnetic field so in this case I've got a positive charge moving downward in a magnetic field that's directed into the board. Alright here we go thumb is the velocity, fingers are the magnetic field and notice that my palm is now pointing to the right so that's the direction of the force on this charge to the right. Alright, let's do this one. What about if the magnetic field is down but the positive charge is going into the board? Alright, thumb, fingers and now I've got a force that's directed to the left alright. What about here? This is weird because now I don't have the velocity what I've got instead is the force and the magnetic field but that's still fun I can still do exactly the same thing. I don't have a velocity so I don't know what I'm doing with my thumb yet, but I do have a magnetic field so that's comming out right? 
I've got a force, so that means my palm has to point down, and look at that! My thumb is now pointing in that direction, so that must be the direction of a positive charge that feels a force down, alright? One more little twist: what if it's a negative charge? Now there's a really easy answer to that: you just pretend it's a positive charge and then do whatever is the opposite of that. But there's another way which is actually more useful in practice, because electrons have negative charge, so a lot of times on these exams you'll be asked about electrons, and you don't want to have to always do it as if it was positive and then take the opposite. So what you do instead is you use your left hand, alright? So for negative charges you use your left hand, for positive charges you use your right hand, and as soon as I recognize that I'm going to use my left hand, everything goes exactly the same way, and now the force is into the board, and that's the way that it goes. Now you might wonder what happens to the charge after it goes into the magnetic field. Well, it turns out that because the force is always perpendicular to the velocity, charges that are moving in magnetic fields always move in circles; that's called Larmor precession. So we can actually see that in each of the examples, so it's a really easy idea. If I've got a charge that's coming down and a force that's going to the right, boom, that's the Larmor circle, alright? What about here? Well, I've got a charge that's going in and a force to the left, so here it is, Larmor circle, alright? What about here? Now I'm going this way, force is down, Larmor circle. And how about this guy? Force is into the board, so it's going to be a Larmor circle I can't draw, right? But you see that it will always circle around the magnetic field lines. Alright, that's the first and probably most useful form of the right hand rule, but let's look at a couple of the other ones over here. Alright, the first one that I want to mention, and this one's really exactly the same, is what happens when I've got a current in a magnetic field. Well, currents are moving charges, so that means I've just got a lot of charges moving in this magnetic field. The current is going to be in the direction of the velocity, so I just say, okay, instead of velocity my thumb is the current, boom, boom, left, done. Very, very simple and basically the same; it's just that instead of velocity my thumb now represents current. Most of the time we take the convention that the arrow associated with the current is the direction of positive charge, so it's right hand all the time, unless they tell you explicitly that negative charges are moving in this direction, and then of course just left hand. Alright, now there's two other right hand rules, and these are associated with magnetic fields that come from currents, so this is associated with something called the Biot-Savart law, or something called Ampère's law. The idea is that whenever you've got a current like this, there will be a magnetic field associated with it. So if I've got a current that goes like that, there's going to be a magnetic field that circulates around this current. Alright, so this is a different physical situation; we can't expect the right hand rule to be exactly the same, but hopefully in this case it's almost the same. Thumb is the current, fingers again are the magnetic field, but rather than keeping them out like that, here's what we're going to do: we're going to act like we're grabbing the wire, alright?
So we're going to grab the wire, and our fingers are the magnetic field, so that means that in this case the magnetic field will circulate around just like that, in exactly the way that my fingers are circulating around it if I grab it. So that means that above the current the magnetic field is coming out of the board, and below it's going into the board. So there it is, I've got the magnetic field circulating around my wire in exactly that way. Alright, here's the last one, and this one is kind of the most different, alright, but it's also very useful. What if I have a current loop? Alright, well, I could play this game just like we just did and I could say "alright, well, let me grab the wire", okay? Well, if I grab the wire like that with my thumb in the direction of the current, then the magnetic field inside will be coming out of the board and outside will be going into the board. So this is exactly the same as we just had, no difference, so why am I saying it's different? Well, because we'll apply the right hand rule in a slightly different way here, okay? You don't have to do this, you can always do it this way, but sometimes it's more useful to instead put your fingers in the direction of the current, and then your thumb will point in the direction of the magnetic field at the center of the current loop: out. Of course it gives us the same answer that we got the other way, but this is associated with something called a magnetic moment, and so you might be asked to think about magnetic moments and these current loops, and it's easier when you're focusing on that to use the right hand rule where now the current is your fingers and the thumb is the magnetic field.
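The cross-product form of the Lorentz force in this lesson is easy to check numerically. The short sketch below is not from the original lesson; it simply evaluates F = q(v × B) with NumPy for the first example in the transcript (a positive charge moving down the board in a field pointing into the board), using an illustrative coordinate convention and made-up unit magnitudes.

```python
import numpy as np

# Coordinate convention (an assumption for this sketch, not from the lesson):
# +x = right, +y = up, +z = out of the board.
q = 1.0                            # positive test charge, arbitrary units
v = np.array([0.0, -1.0, 0.0])     # velocity: straight "down" the board
B = np.array([0.0, 0.0, -1.0])     # magnetic field: "into" the board

F = q * np.cross(v, B)             # Lorentz force F = q (v x B)
print(F)                           # [1. 0. 0.]  -> force points to the right

# A negative charge just flips the sign, which is why the transcript
# switches to the left hand for electrons.
print(-1.0 * np.cross(v, B))       # force points to the left
```

The printed result matches what the right hand rule gives by hand: thumb down, fingers into the board, palm (and force) to the right.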
https://www.brightstorm.com/science/physics/magnetism/right-hand-rule/
4.28125
Transuranium element, any of the chemical elements that lie beyond uranium in the periodic table—i.e., those with atomic numbers greater than 92. Twenty-six of these elements have been discovered and named or are awaiting confirmation of their discovery. Eleven of them, from neptunium through lawrencium, belong to the actinoid series. The others, which have atomic numbers higher than 103, are referred to as the transactinoids. All the transuranium elements are unstable, decaying radioactively, with half-lives that range from tens of millions of years to mere fractions of a second. Since only two of the transuranium elements have been found in nature (neptunium and plutonium) and those only in trace amounts, the synthesis of these elements through nuclear reactions has been an important source of knowledge about them. That knowledge has expanded scientific understanding of the fundamental structure of matter and makes it possible to predict the existence and basic properties of elements much heavier than any currently known. Present theory suggests that the maximum atomic number could lie somewhere between 170 and 210, if nuclear instability does not preclude the existence of such elements. All these still-unknown elements are included in the transuranium group.

Discovery of the first transuranium elements

The first attempt to prepare a transuranium element was made in 1934 in Rome, where a team of Italian physicists headed by Enrico Fermi and Emilio Segrè bombarded uranium nuclei with free neutrons. Although transuranium species may have been produced, the experiment resulted in the discovery of nuclear fission rather than new elements. (The German scientists Otto Hahn, Fritz Strassmann, and Lise Meitner showed that the products Fermi found were lighter, known elements formed by the splitting, or fission, of uranium.) Not until 1940 was a transuranium element first positively produced and identified, when two American physicists, Edwin Mattison McMillan and Philip Hauge Abelson, working at the University of California at Berkeley, exposed uranium oxide to neutrons from a cyclotron target. One of the resulting products was an element found to have an atomic number of 93. It was named neptunium. Transformations in atomic nuclei are represented by equations that balance all the particles of matter and the energy involved before and after the reaction. The above transformation of uranium into neptunium may be written as follows: ²³⁸₉₂U + ¹₀n → ²³⁹₉₂U + γ, followed by ²³⁹₉₂U → ²³⁹₉₃Np + β−. In the first equation the atomic symbol of the particular isotope reacted upon, in this case U for uranium, is given with its mass number at upper left and its atomic number at lower left: ²³⁸₉₂U. The uranium-238 isotope reacts with a neutron (symbolized n, with its mass number 1 at upper left and its neutral electrical charge shown as 0 at lower left) to produce uranium-239 (²³⁹₉₂U) and the quantum of energy called a gamma ray (γ). In the next equation the arrow represents a spontaneous loss of a negative beta particle (symbolized β−), an electron with very high velocity, from the nucleus of uranium-239. What has happened is that a neutron within the nucleus has been transformed into a proton, with the emission of a beta particle that carries off a single negative charge; the resulting nucleus now has one more positive charge than it had before the event and thus has an atomic number of 93. Because the beta particle has negligible mass, the mass number of the nucleus has not changed, however, and is still 239.
The nucleus resulting from these events is an isotope of the element neptunium, atomic number 93 and mass number 239. The above process is called negative beta-particle decay. A nucleus may also emit a positron, or positive electron, thus changing a proton into a neutron and reducing the positive charge by one (but without changing the mass number); this process is called positive beta-particle decay. In another type of beta decay a nuclear proton is transformed into a neutron when the nucleus, instead of emitting a beta particle, “captures,” or absorbs, one of the electrons orbiting the nucleus; this process of electron capture (EC decay) is preferred over positron emission in transuranium nuclei. The discovery of the next element after neptunium followed rapidly. In 1941 three American chemists, Glenn T. Seaborg, Joseph W. Kennedy, and Arthur C. Wahl, produced and chemically identified element 94, named plutonium (Pu). In 1944, after further discoveries, Seaborg hypothesized that a new series of elements called the actinoid series, akin to the lanthanoid series (elements 58–71), was being produced, and that this new series began with thorium (Th), atomic number 90. Thereafter, discoveries were sought, and made, in accordance with this hypothesis.

Synthesis of transuranium elements

The most abundant isotope of neptunium is neptunium-237. Neptunium-237 has a half-life of 2.1 × 10⁶ years and decays by the emission of alpha particles. (Alpha particles are composed of two neutrons and two protons and are actually the very stable nucleus of helium.) Neptunium-237 is formed in kilogram quantities as a by-product of the large-scale production of plutonium in nuclear reactors. This isotope is synthesized from the reactor fuel uranium-235 by successive neutron captures followed by beta decay, and from uranium-238 by an (n,2n) reaction followed by beta decay.

Because of its ability to undergo fission with neutrons of all energies, plutonium-239 has considerable practical applications as an energy source in nuclear weapons and as fuel in nuclear power reactors. The method of element production discussed thus far has been that of successive neutron capture resulting from the continuous intensive irradiation with slow (low-energy) neutrons of an actinoid target. The sequence of nuclides that can be synthesized in nuclear reactors by this process is shown in the figure, in which the light line indicates the principal path of neutron capture (horizontal arrows) and negative beta-particle decay (up arrows) that results in successively heavier elements and higher atomic numbers. (Down arrows represent electron-capture decay.) The heavier lines show subsidiary paths that augment the major path. The major path terminates at fermium-257, because the short half-life of the next fermium isotope (fermium-258)—for radioactive decay by spontaneous fission (370 microseconds)—precludes its production and the production of isotopes of elements beyond fermium by this means. Heavy isotopes of some transuranium elements are also produced in nuclear explosions. Typically, in such events, a uranium target is bombarded by a high number of fast (high-energy) neutrons for a small fraction of a second, a process known as rapid-neutron capture, or the r-process (in contrast to the slow-neutron capture, or s-process, described above). Underground detonations of nuclear explosive devices during the late 1960s resulted in the production of significant quantities of einsteinium and fermium isotopes, which were separated from rock debris by mining techniques and chemical processing.
Again, the heaviest isotope found was that of fermium-257. An important method of synthesizing transuranium isotopes is by bombarding heavy element targets not with neutrons but with light charged particles (such as the helium nuclei mentioned above as alpha particles) from accelerators. For the synthesis of elements heavier than mendelevium, so-called heavy ions (with atomic number greater than 2 and mass number greater than 5) have been used for the projectile nuclei. Targets and projectiles relatively rich in neutrons are required so that the resulting nuclei will have sufficiently high neutron numbers; too low a neutron number renders the nucleus extremely unstable and unobservable because of its resultantly short half-life. The elements from seaborgium to copernicium have been synthesized and identified (i.e., discovered) by the use of “cold,” or “soft,” fusion reactions. In this type of reaction, medium-weight projectiles are fused to target nuclei with protons numbering close to 82 and neutrons numbering about 126—i.e., near the doubly “magic” lead-208—resulting in a relatively “cold” compound system. The elements from 113 to 118 were made using “hot” fusion reactions, similar to those described above using alpha particles, in which a relatively light projectile collides with a heavier actinoid. Because the compound nuclei formed in cold fusion have lower excitation energies than those produced in hot fusion, they may emit only one or two neutrons and thus have a much higher probability of remaining intact instead of undergoing the competing prompt fission reaction. (Nuclei formed in hot fusion have higher excitation energy and emit three to five neutrons.) Cold fusion reactions were first recognized as a method for the synthesis of heavy elements by Yuri Oganessian of the Joint Institute for Nuclear Research at Dubna in the U.S.S.R. (now in Russia). Isotopes of the transuranium elements are radioactive in the usual ways: they decay by emitting alpha particles, beta particles, and gamma rays; and they also fission spontaneously. The table lists significant nuclear properties of certain isotopes that are useful for chemical studies. Only the principal mode of decay is given, though in many cases other modes of decay also are exhibited by the isotope. In particular, with the isotope californium-252, alpha-particle decay is important because it determines the half-life, but the expected applications of the isotope exploit its spontaneous fission decay that produces an enormous neutron output. Other isotopes, such as plutonium-238, are useful because of their relatively large thermal power output during decay (given in the table in watts per gram). Research on the chemical and solid-state properties of these elements and their compounds obviously requires that isotopes with long half-lives be used. Isotopes of plutonium and curium, for example, are particularly desirable from this point of view. Beyond element 100 the isotopes must be produced by charged-particle reactions using particle accelerators, with the result that only relatively few atoms can be made at any one time.

| name and mass | principal decay mode | half-life | disintegrations per minute per microgram | watts per gram* |
| plutonium-239 | alpha | 24,110 years | 138,000 | 1.91 × 10⁻³ |
| berkelium-249 | beta (minus) | 330 days | 3.6 × 10⁹ | 0.358 |
| mendelevium-256 | electron capture | 77 minutes | | |
| seaborgium-265 | spontaneous fission | 8 seconds | | |
*Thermal power output.
**Indicates an approximate value.
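The "disintegrations per minute per microgram" column in the table follows directly from the half-life through the decay law A = λN, with λ = ln 2 / t½. The sketch below is not part of the original article; it is a minimal Python check, assuming a pure one-microgram sample of the isotope and the standard value of Avogadro's number, that reproduces the figure quoted for plutonium-239.

```python
import math

AVOGADRO = 6.022e23   # atoms per mole

def dpm_per_microgram(mass_number, half_life_minutes):
    """Specific activity in disintegrations per minute per microgram,
    assuming a pure sample of the given isotope."""
    atoms = 1e-6 / mass_number * AVOGADRO        # atoms in 1 microgram
    decay_constant = math.log(2) / half_life_minutes
    return decay_constant * atoms                # A = lambda * N

# plutonium-239: half-life 24,110 years (from the table above)
minutes_per_year = 365.25 * 24 * 60
print(dpm_per_microgram(239, 24110 * minutes_per_year))   # ~1.4e5, close to 138,000
```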
Nuclear structure and stability

Although the decay properties of the transuranium elements are important with regard to the potential application of the elements, these elements have been studied largely to develop a fundamental understanding of nuclear reactions and nuclear and atomic structure. Study of the known transuranium elements also helps in predicting the properties of yet-undiscovered isotopes and elements as a guide to the researcher who can then design experiments to prepare and identify them. As shown in the figure, the known isotopes can be represented graphically with the number of nuclear protons (Z) plotted along the left-hand axis and the number of neutrons (N) plotted on the top axis. The relative stabilities of the isotopes are indicated by their relative heights. In this metaphoric representation, the known isotopes resemble a peninsula rising above a sea of instability. The most stable isotopes, appearing as mountaintops, occur at specific values called magic numbers. The magic numbers derive from calculations of the energy distribution based on the theoretical structure of the nucleus. According to theory, neutrons and protons (collectively, nucleons) are arranged within the nucleus in shells that are able to accommodate only fixed maximum numbers of them; when the shells are closed (i.e., unable to accept any more nucleons), the nucleus is much more stable than when the shells are only partially filled. The number of neutrons or protons in the closed shells yields the magic numbers. These are 2, 8, 20, 28, 50, 82, and 126. Doubly magic nuclei, such as helium-4, oxygen-16, calcium-40, calcium-48, and lead-208, which have both full proton shells and full neutron shells, are especially stable. As the proton and neutron numbers depart further and further from the magic numbers, the nuclei are relatively less stable. As the highest atomic numbers are reached, decay by alpha-particle emission and spontaneous fission sets in (see below). At some point the peninsula of relatively stable isotopes (i.e., with an overall half-life of at least one second) is terminated. There has been, however, considerable speculation, based on a number of theoretical calculations, that an island of stability might exist in the neighbourhood of Z = 114 and N = 184, both of which are thought to be magic numbers. The longest-lasting isotope of flerovium, element 114, has N = 175 and a half-life of 2.7 seconds; this long half-life could be the “shore” of the island of stability. Isotopes in this region have significantly longer half-lives than neighbouring isotopes with fewer neutrons. There is also evidence for subshells (regions of somewhat increased stability) at Z = 108 and N = 162.

Processes of nuclear decay

The correlation and prediction of nuclear properties in the transuranium region are based on systematics (that is, extensions of observed relationships) and on the development of theoretical models of nuclear structure. The development of structural theories of the nucleus has proceeded rather rapidly, in part because valid parallels with atomic and molecular theory can be drawn. A nucleus can decay to an alpha particle plus a daughter product if the mass of the nucleus is greater than the sum of the mass of the daughter product and the mass of the alpha particle—i.e., if some mass is lost during the transformation. The amount of matter defined by the difference between reacting mass and product mass is transformed into energy and is released mainly with the alpha particle.
The relationship is given by Einstein’s equation E = mc², in which the product of the mass (m) and the square of the velocity of light (c) equals the energy (E) produced by the transformation of that mass into energy. It can be shown that, because of the inequality between the mass of a nucleus and the masses of the products, most nuclei beyond about the middle of the periodic table are likely to be unstable because of the emission of alpha particles. In practice, however, because of the reaction rate, decay by ejection of an alpha particle is important only with the heavier elements. Indeed, beyond bismuth (element 83) the predominant mode of decay is by alpha-particle emission, and all the transuranium elements have isotopes that are alpha-unstable. The regularities in the alpha-particle decay energies that have been noted from experimental data can be plotted on a graph and, since the alpha-particle decay half-life depends in a regular way on the alpha-particle decay energy, the graph can be used to obtain the estimated half-lives of undiscovered elements and isotopes. Such predicted half-lives are essential for experiments designed to discover new elements and new isotopes, because the experiments must take the expected half-life into account. In elements lighter than lead, beta-particle decay—in which a neutron is transformed into a proton or vice versa by emission of either an electron or a positron or by electron capture—is the main type of decay observed. Beta-particle decay also occurs in the transuranium elements, but only by emission of electrons or by capture of orbital electrons; positron emission has not been observed in transuranium elements. When the beta-particle decay processes are absent in transuranium isotopes, the isotopes are said to be stable to beta decay.

Decay by spontaneous fission

The lighter actinoids such as uranium rarely decay by spontaneous fission, but at californium (element 98) spontaneous fission becomes more common (as a result of changes in energy balances) and begins to compete favourably with alpha-particle emission as a mode of decay. Regularities have been observed for this process in the very heavy element region. If the half-life of spontaneous fission is plotted against the ratio of the square of the number of protons (Z) in the nucleus divided by the mass of the nucleus (A)—i.e., the ratio Z²/A—then a regular pattern results for nuclei with even numbers of both neutrons and protons (even-even nuclei). Although this uniformity allows very rough predictions of half-lives for undiscovered isotopes, the methods actually employed are considerably more sophisticated. The results of study of half-life systematics for alpha-particle, negative beta-particle, and spontaneous-fission decay in the near region of undiscovered transuranium elements can be plotted in graphs for even-even nuclei, for nuclei with an odd number of protons or neutrons, and for odd-odd nuclei (those with odd numbers for both protons and neutrons). These predicted values are in the general range of experimentally determined half-lives and correctly indicate trends, but individual points may differ appreciably from known experimental data. Such graphs show that isotopes with odd numbers of neutrons or protons have longer half-lives for alpha-particle decay and for spontaneous fission than do neighbouring even-even isotopes.

Nuclear structure and shape

Several models have been used to describe nuclei and their properties.
In the liquid-drop model the nucleus is treated as a uniform, charged drop of liquid. This structure does not account for certain irregularities, however, such as the increased stability found for nuclei with particular magic numbers of protons or neutrons (see above). The shell model recognized that these magic numbers resulted from the filling, or closing, of nuclear shells. Nuclei with the exact number (or close to the exact number) of neutrons and protons dictated by closed shells have spherical shapes, and their properties are successfully described by the shell theory. However, the lanthanoid and actinoid nuclei, which do not have magic numbers of nucleons, are deformed into a prolate spheroid, or football, shape, and the spherical-shell model does not adequately explain their properties. The shell model nevertheless established the fact that the neutrons and protons within a nucleus are more likely to be found inside rather than outside certain nuclear shell regions and thus showed that the interior of the nucleus is inhomogeneous. A model incorporating the shell effects to correct the ordinary homogeneous liquid-drop model was developed. This hybrid model is used, in particular, to explain spontaneous-fission half-lives. Since many transuranium nuclei do not have magic numbers of neutrons and protons and thus are nonspherical, considerable theoretical work has been done to describe the motions of the nucleons in their orbitals outside the spherical closed shells. These orbitals are important in explaining and predicting some of the nuclear properties of the transuranium and heavy elements. The mutual interaction of fission theory and experiment brought about the discovery and interpretation of fission isomers. At Dubna, Russia, U.S.S.R., in 1962, americium-242 was produced in a new form that decayed with a spontaneous-fission half-life of 14 milliseconds, or about 10¹⁴ times shorter than the half-life of the ordinary form of that isotope. Subsequently, more than 30 other examples of this type of behaviour were found in the transuranium region. The nature of these new forms of spontaneously fissioning nuclei was believed to be explainable, in general terms at least, by the idea that the nuclei possess greatly distorted but quasi-stable nuclear shapes. The greatly distorted shapes are called isomeric states, and these new forms of nuclear matter are consequently called shape isomers. As mentioned above, calculations relating to spontaneous fission involve treating the nucleus as though it were an inhomogeneous liquid drop, and in practice this is done by incorporating a shell correction to the homogeneous liquid-drop model. In this case an apparently reasonable way to amalgamate the shell and liquid-drop energies was proposed, and the remarkable result obtained through the use of this method reveals that nuclei in the region of thorium through curium possess two energetically stable states with two different nuclear shapes. This theoretical result furnished a most natural explanation for the new form of fission, first discovered in americium-242. This interpretation of a new nuclear structure is of great importance, but it has significance far beyond itself because the theoretical method and other novel approaches to calculation of nuclear stability have been used to predict an island of stability beyond the point at which the peninsula in the figure disappears into the sea of instability.
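The mass-energy bookkeeping described earlier in the article (a nucleus can emit an alpha particle only if mass is lost, with the lost mass converted to energy through E = mc²) can be illustrated with a quick calculation. The following sketch is not from the article; it uses approximate published atomic masses (assumed values, in unified atomic mass units) for uranium-238, thorium-234, and helium-4, and the standard conversion 1 u ≈ 931.5 MeV, to estimate the energy released when uranium-238 decays by alpha emission.

```python
# Approximate atomic masses in unified atomic mass units
# (assumed values, rounded from standard tables).
M_U238  = 238.05079
M_TH234 = 234.04360
M_HE4   = 4.00260

U_TO_MEV = 931.5   # energy equivalent of 1 u, via E = mc^2

mass_lost = M_U238 - (M_TH234 + M_HE4)       # mass difference in u
q_value_mev = mass_lost * U_TO_MEV           # energy released, mostly carried by the alpha

print(round(mass_lost, 5), "u lost")         # ~0.00459 u
print(round(q_value_mev, 2), "MeV released") # ~4.3 MeV
```

Because the mass difference is positive, the decay is energetically allowed, which is exactly the criterion the article states for alpha instability.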
http://www.britannica.com/science/transuranium-element
4.21875
A-level Biology/Cells
- 1 Cell Structure
- 2 Analysis of Cell Compounds
- 3 Plasma Membranes
- 4 Cholera

Cell Structure
Organelles are parts of cells. Each organelle has a specific function.

Nucleus
- Largest organelle
- Surrounded by a nuclear envelope, which contains pores (holes)
- Contains chromatin and the nucleolus
- Stores the genetic material
- Controls the cell's activities
- Pores allow substances to move between the nucleus and the cytoplasm
- The nucleolus makes ribosomes (see below)

Mitochondria
- Oval shaped
- They have a double membrane - the inner one is folded to form structures called cristae
- Inside is the matrix, containing enzymes
- They are the site of aerobic respiration
- Make energy in the form of ATP (adenosine triphosphate) as a source of energy for the cell's activities
- Cristae give a bigger surface area so more enzymes can fit in

Endoplasmic reticulum
- Smooth endoplasmic reticulum is a system of membranes which enclose a fluid-filled space
- Rough endoplasmic reticulum is similar, but covered in ribosomes
- Smooth endoplasmic reticulum synthesises and processes lipids and carbohydrates
- Rough endoplasmic reticulum folds and processes proteins that have been made at the ribosomes, and transports proteins around the cell

Golgi apparatus
- A group of fluid-filled, flattened sacs
- Processes and packages new lipids and proteins
- Once finished, it makes vesicles which transport the molecules to the edge of the cell for ejection
- Makes lysosomes

Ribosomes
- Very small
- Either float free in the cytoplasm or are attached to rough endoplasmic reticulum
- The site where protein synthesis takes place

Lysosomes
- No clear internal structure
- Contain digestive enzymes which can be used to digest invading cells or break down worn-out organelles (autolysis)

Microvilli
- These are folds in the plasma membrane
- Found in cells involved in absorption
- Typically found on the villi in the small intestine
- Increase the surface area of the plasma membrane

Plasma membrane
Found on the surface of animal cells, it's mainly made of lipids and proteins. It controls the movement of substances in and out of the cell; further explanation can be found later in this book.

Chloroplasts
- Found in plant cells only
- Inner membrane is folded to form stacks of grana
- Molecules of chlorophyll are on the grana
- Chlorophyll captures photons of light used for photosynthesis

Refer to the below table for the differences between plant and animal cells.
| Typical animal cell | Typical plant cell |

Prokaryotes and Eukaryotes
Eukaryotic cells are complex, and include all animal and plant cells. Prokaryotic cells are smaller and simpler, like bacteria. The table below is a comparison of prokaryotic and eukaryotic cells:
| | Prokaryotic cells | Eukaryotic cells |
| Typical organisms | bacteria | fungi, plants, animals |
| Typical size | ~ 1-10 µm | ~ 10-100 µm (sperm cells, apart from the tail, are smaller) |
| Type of nucleus | none | nucleus with double membrane |
| Genetic material | ring of DNA, plasmids | chromosomes |
| Ribosomes | smaller (18 nm) | larger (22 nm) |
| Cytoplasmic structure | very few structures | highly structured by endomembranes and a cytoskeleton |
| Mitochondria | none | one to several thousand (though some lack mitochondria) |
| Chloroplasts | none | in algae and plants |
| Organization | usually single cells | single cells, colonies, higher multicellular organisms with specialized cells |
| Cell division | binary fission (simple division) | mitosis |

Analysis of Cell Compounds

Units of Size in Microscopy
- The basic biological unit of measure is the micrometer (µm).
- 1000 µm = 1 mm.
Calculations in Microscopy
Size in real life = Size in image ÷ Magnification.
Magnification = Size in image ÷ Size in real life.
- Measure the size of the image in millimetres.
- Convert to micrometres by multiplying by 1000.

Example: A micrograph shows a mitochondrion that measures 210 mm, magnified 2500x. What is its size in real life?
210 mm x 1000 = 210,000 µm
210,000 µm ÷ 2500 = 84 µm

Example: An object is 130 µm in real life and 52 mm in an image. What is the magnification?
52 mm x 1000 = 52,000 µm
52,000 µm ÷ 130 µm = 400x
(A short code sketch applying these two formulas appears at the end of this section.)

Light microscopes
- Light rays travel through the specimen and 2 lenses
- The objective lens provides the initial magnification of the image
- The eyepiece lens magnifies and focuses the image

Electron microscopes
- Transmission electron microscopes (T.E.M.) pass a beam of electrons through the specimen to produce an image on a fluorescent screen.
- Scanning electron microscopes (S.E.M.) scan a beam of electrons over the specimen.
- Electromagnets focus the image.
- Electrons are produced from a tungsten filament at the top of a column.
- The column is a vacuum. As a result, living specimens cannot be used.
- The preparation of specimens for microscopes can be drastic, and can produce artefacts. Artefacts are things you see under a microscope, but aren't actually there in real life. This could be due to something like an air bubble.
- Magnification refers to how much bigger the image is than the actual specimen.
- Resolution refers to how well a microscope distinguishes two different points that are close together. If a microscope can't separate two objects, then increasing the magnification won't help.

Below is a table comparing the different types of microscope.
| Depth of focus | Low | High | Medium |
| Field of view | Good | Good | Limited |
| Ease of specimen preparation | Easy | Fairly skilled | Skilled |
| Speed of specimen preparation | Rapid | Quite rapid | Slow |

Cell fractionation
- Cell fractionation breaks apart cells and separates their organelles.

Step 1: Homogenisation
- This breaks open the cells.
- Usually done by vibrating the cells, or grinding them up in a blender.
- It is done with a cold, isotonic buffer:
- cold to slow down and stop organelle activity, particularly the hydrolytic enzymes in lysosomes
- isotonic to prevent the movement of water in and out of organelles by osmosis
- a buffer to prevent changes in pH levels.

Step 2: Filtration
- Filter the solution through a gauze to remove debris e.g. large cell debris or tissue debris.

Step 3: Ultracentrifugation
- Spin the solution in a centrifuge at a low speed.
- The heaviest organelles (nuclei, chloroplasts) fall to the bottom.
- The rest of the organelles stay suspended in the fluid above this sediment. This is the supernatant.
- The supernatant is drained off, poured into another tube, and spun again at a higher speed.
- This time, organelles like mitochondria and lysosomes fall to the bottom.
- Again, the supernatant is drained off, poured into another tube and spun at a higher speed.
- Finally, the lightest organelles remain.

Plasma Membranes
Plasma membranes are located around the edge of animal cells and surround the cytoplasm and other organelles. They are made up of a phospholipid bilayer, which consists of two layers of phospholipids with the hydrophilic heads on the outer surfaces and the hydrophobic tails on the inside. Lipid-soluble molecules can diffuse straight through the bilayer, and water can move through by osmosis. The phospholipid bilayer contains intrinsic and extrinsic proteins.
The intrinsic proteins go all the way through the bilayer, whereas extrinsic proteins sit only in the outer phospholipid layer. Extrinsic proteins are used for recognition of the cell and usually have a carbohydrate chain attached (forming a glycoprotein) for the recognition. Intrinsic proteins allow molecules through the membrane; they do this in several ways: protein pumps, protein channels, and gated protein channels.

Cholera
Cholera is a disease caused by a bacterium commonly found in dirty water. Once ingested, the bacterium colonises the epithelial lining of the gut and sets up a cotransporter with Na+ ions. This lowers the water potential of the gut contents, which means that water moves into the gut down the water potential gradient, so the person becomes constantly dehydrated and produces wet, loose faeces. These effects can be countered with oral rehydration therapy.
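As a quick check of the magnification formulas given earlier in this section (size in real life = size in image ÷ magnification, and magnification = size in image ÷ size in real life), here is a minimal Python sketch, not part of the original wikibook, that reproduces the two worked examples.

```python
def real_size_um(image_size_mm, magnification):
    """Actual specimen size in micrometres from a measured image size in mm."""
    return image_size_mm * 1000 / magnification   # 1 mm = 1000 um

def magnification(image_size_mm, real_size_um):
    """Magnification from a measured image size in mm and the real size in um."""
    return image_size_mm * 1000 / real_size_um

print(real_size_um(210, 2500))    # 84.0 um, as in the first worked example
print(magnification(52, 130))     # 400.0 x, as in the second worked example
```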
https://en.m.wikibooks.org/wiki/A-level_Biology/Cells
4
There was little formal description of the corbicula before Carl Linnaeus explained the biological function of pollen in the mid-18th century. In English the first edition of Encyclopædia Britannica described the structure in 1771 without giving it any special name. The second edition, 1777, refers to the corbicula simply as the "basket". By 1802 William Kirby had introduced the Latin term corbicula into English. He had borrowed it, with acknowledgement, from Réaumur. This New Latin term, like many other Latin anatomical terms, had the advantages of specificity, international acceptability, and culture neutrality. By 1820 the term pollen-basket seems to have gained acceptance in beekeeping vernacular, though a century later a compendium of entomological terminology recognised pollen-plate and corbicula without including "pollen-basket". Yet another century on, authorities as eminent as the current authors of "Imms" included only the terms scopa and corbicula in the index, though they did include "pollen basket" in the text. The New Latin term corbicula is a diminutive of corbis, a basket or pannier. Corbula (not a term used in entomology) is given as the Late Latin diminutive, but at least one dictionary simply lists corbicula as a very small basket. Corbicula is the singular; its plural is corbiculae, reflecting the fact that in Latin the gender is feminine. A troublesome confusion has arisen ever since at least one author c. 1866 assumed corbicula to be the plural of (an actually non-existent neuter form) corbiculum. The error has propagated through successive textbooks and reference works and, troublesomely, it is still to be found as a minority misconception in modern publications.

Structure and function

Bees in four tribes of the family Apidae, subfamily Apinae (the honey bees, bumblebees, stingless bees, and orchid bees) have corbiculae. The corbicula is a polished cavity surrounded by a fringe of hairs, into which the bee collects the pollen; most other bees possess a structure called the scopa, which is similar in function, but is a dense mass of branched hairs into which pollen is pressed, with pollen grains held in place in the narrow spaces between the hairs. A honey bee moistens the forelegs with its protruding tongue and brushes the pollen that has collected on its head, body and forward appendages to the hind legs. The pollen is transferred to the pollen comb on the hind legs and then combed, pressed, compacted, and transferred to the corbicula on the outside surface of the tibia of the hind legs. In Apis species, a single hair functions as a pin that secures the middle of the pollen load. Either honey or nectar is used to moisten the dry pollen, producing the product known as bee pollen or bee bread. The mixing of the pollen with nectar or honey changes the color of the pollen. The color of the pollen can help identify the pollen source.

References
- Society of Gentlemen in Scotland (1771). Encyclopaedia Britannica: Or, A Dictionary of Arts and Sciences, Compiled Upon a New Plan in which the Different Sciences and Arts are Digested Into Distinct Treatises Or Systems; and the Various Technical Terms, Etc., are Explained as They Occur in the Order of the Alphabet. Encyclopaedia Britannica. p. 89.
- Bees. J. Balfour and Company, W. Gordon. 1778. p. 440.
- William Kirby (1802). Monographia Apum Angliae; Or, An Attempt to Divide Into Their Natural Genera and Families, Such Species of the Linnean Genus Apis as Have Been Discovered in England: with Descriptions and Observations.
To which are Prefixed Some Introductory Remarks Upon the Class Hymenoptera, and a Synoptical Table of the Nomenclature of the External Parts of These Insects. With Plates. Vol. 1–2. p. 200.
- Of bees (1820). p. 396.
- Smith, John B. (1906). Explanation of Terms Used in Entomology. Brooklyn Entomological Society.
- Richards, O. W.; Davies, R. G. (1977). Imms' General Textbook of Entomology: Volume 1: Structure, Physiology and Development; Volume 2: Classification and Biology. Berlin: Springer. ISBN 0-412-61390-5.
- Ainsworth, Robert; Morell, Thomas; Carey, John (eds.) (1834). An Abridgment of Ainsworth's Latin Dictionary. 13th ed. London.
- Jaeger, Edmund Carroll (1959). A Source-Book of Biological Names and Terms. Springfield, Ill.: Thomas. ISBN 0-398-06179-3.
- Young, William (1810). A New Latin-English Dictionary: To Which Is Prefixed an English-Latin Dictionary. A. Wilson. pp. 22–.
- Packard, Alpheus Spring (1868). Guide to the Study of Insects: And a Treatise on Those Injurious and Beneficial to Crops: for the Use of Colleges, Farm-schools, and Agriculturists. Naturalist's Agency. p. 116.
- Barnard, Frederick Augustus Porter; Guyot, Arnold (1890). Johnson's Universal Cyclopædia: A Scientific and Popular Treasury of Useful Knowledge. A. J. Johnson.
- McGavin, George (1992). Insects of the Northern Hemisphere. Dragon's World. ISBN 978-1-85028-151-1.
- Castner, James L. (2000). Photographic Atlas of Entomology and Guide to Insect Identification. Feline Press. ISBN 978-0-9625150-4-0.
- Gordh, George; Headrick, David (2003). A Dictionary of Entomology. p. 713.
- "Bees (Hymenoptera: Apoidea: Apiformes)". Encyclopedia of Entomology (2008). Vol. 2, pp. 419–434.
- Gillott, Cedric (1995). Entomology. Springer. p. 79.
- Hodges, Dorothy (1952). The Pollen Loads of the Honeybee. Bee Research Association.
https://en.wikipedia.org/wiki/Pollen_basket
4.09375
Activity 8-1: What Are the Issues?

Summary
This activity is a class discussion about abortion. Students use a combination of ready-made and self-made scenarios to examine the moral and ethical issues of this potentially emotional subject.
✓ clarify their personal views on abortion.
✓ listen respectfully to the views of others.
- Activity Report (one copy per group)
- None if discussion is open to whole class
- One copy of Activity Report per group if group work is selected

The abortion issue can elicit emotional responses. Decide if you will handle the discussion as a whole class or by dividing the class into groups. If you divide the class into smaller groups, consider the groupings carefully. If needed, define the term “moral issues.”

Estimated Time
40-45 minutes. This activity can be extended depending on class interest.

This activity has Guidance/Language Arts/Social Science/Science connections. It can be extended to include:
Language Arts or Social Studies: Allow 2-3 days for groups to prepare an oral, written, or visual presentation on “Abortion: Is it ever right?” Limit oral presentations to 5 minutes. Groups should use factual information when possible to avoid a purely emotional response.

Prerequisites and Background Information
Introduce Activity 8-1 by asking the class what they think makes an issue controversial. Can they list some controversial items that have been in the news the last few years? How should people discuss issues when they have strongly held differing views? Have they seen examples in the news of bad ways of disagreeing? Have they seen examples of positive ways? Review the ground rules for class discussions. Since these are sensitive issues, it is recommended that the class be reminded that everyone is entitled to their views and beliefs. People should be understanding and respectful of other people's views. Address the issues; do not attack the person.

Steps 1-2
Ask the students to respond to the following question: “Is it ever okay to have an abortion?” Then tell them to put their responses away. At the end of the class discussion they can refer back to their papers to see if they have changed their minds and why.

Step 3
Read the directions to the Activity Report with your students. The introduction to this activity gives these scenarios:
a. A woman is raped and later discovers she is pregnant.
b. A pregnant woman feels she is not yet prepared to give birth and raise a child.
c. The pregnant daughter of an abusive parent was told that she would be beaten “within an inch of your life” if she ever got pregnant.
d. A doctor warns a pregnant mother that giving birth again could be fatal to her.
e. A young, pregnant woman is told by her boyfriend that she must get an abortion or he will leave her.
Students are then asked to write at least one realistic scenario involving a pregnant woman. These should be collected and screened for inclusion in the discussions.

Step 4
Using any or all of these scenarios, lead the class in a discussion of the moral dilemma of abortion. For each scenario, bring the following questions into the discussion:
a. What are the moral issues involved? Is abortion “right” or “wrong”?
b. Who decides if something is “right” or “wrong”?
c. Where does freedom of personal choice stop and responsibility to others start?
d. Whose interests are at stake? The mother's? That of the fetus? Society? How do these conflicts get solved and by whom?
e. When does the fetus become a “person”?
Who decides? How do they decide?
f. Do developing fetuses have a right to be born?
g. Since we are all entitled to our religious and moral beliefs, how do we decide what law must apply to all people?

An alternative to a whole-class discussion is a group discussion. Give each group a scenario to discuss among themselves, each in turn, as the rest of the class listens. The class can then ask the group to respond to their questions at the end of the group discussion. This will extend the class time and focus the attention of the class on a variety of scenarios. Conclude Activity 8-1 by allowing the students about five minutes to refer back to the original question they answered. Did they change their minds? Why? This can be their private, quiet time to reflect, and to calm down before their next class period.

Use student responses during class discussions and the responses on the Activity Report to assess if students can
✓ identify the complexity of the issues surrounding abortion.
✓ explain the wide range of beliefs.
✓ explain why these questions have no “absolute” answers.
✓ identify the factors that affect an individual's opinion about abortion.

Write a letter to someone who is contemplating an abortion. In your letter, discuss your personal views about abortion.
- Sample answers to these questions will be provided upon request. Please send an email to [email protected] to request sample answers.
- What are two common side effects of the IUD?
- At what point in pregnancy do abortions become illegal in this country?
- What does pro-life mean? What does pro-choice mean?

Activity 8-1 Report: What Are the Issues? (Student Reproducible)
Your teacher will set the “ground rules” for your group discussion on abortion. Then, your teacher will assign your group a scenario involving a pregnant woman who must make a decision regarding abortion. Think about and discuss the following as they apply to the scenario you are given:
- What are the moral issues involved? By choosing or not choosing to have an abortion is she making the “right” or “wrong” choice?
- Who decides if this choice is “right” or “wrong”?
- Where does freedom of personal choice stop and responsibility to others begin?
- Whose interests are involved? The mother's? That of the fetus? Society? How do these conflicts get resolved?
- When does the fetus become a “person”? Who decides? How?
- Does this developing fetus have a right to be born?
- Since we are all entitled to our religious and moral beliefs, how do we decide what law must apply to all people?
- Is it ever OK to have an abortion?
There are no easy answers to these questions. Your generation, like the present generation of adults, has to face the challenging issues of abortion. So, give serious thought to these issues now.
http://www.ck12.org/tebook/Human-Biology-Reproduction-Teacher%2527s-Guide/section/9.3/
4.09375
The Continental Army was formed by the Second Continental Congress after the outbreak of the American Revolutionary War by the colonies that became the United States of America. Established by a resolution of the Continental Congress on June 14, 1775, it was created to coordinate the military efforts of the Thirteen Colonies in their revolt against the rule of Great Britain. The Continental Army was supplemented by local militias and troops that remained under control of the individual states or were otherwise independent. General George Washington was the commander-in-chief of the army throughout the war. Most of the Continental Army was disbanded in 1783 after the Treaty of Paris ended the war. The 1st and 2nd Regiments went on to form the nucleus of the Legion of the United States in 1792 under General Anthony Wayne. This became the foundation of the United States Army in 1796. The Continental Army consisted of soldiers from all 13 colonies, and after 1776, from all 13 states. When the American Revolutionary War began at the Battles of Lexington and Concord on April 19, 1775, the colonial revolutionaries did not have an army. Previously, each colony had relied upon the militia, made up of part-time citizen-soldiers, for local defense, or the raising of temporary "provincial regiments" during specific crises such as the French and Indian War of 1754-1763. As tensions with Great Britain increased in the years leading to the war, colonists began to reform their militias in preparation for the perceived potential conflict. Training of militiamen increased after the passage of the Intolerable Acts in 1774. Colonists such as Richard Henry Lee proposed forming a national militia force, but the First Continental Congress rejected the idea. The minimum enlistment age was 16 years of age, or 15 with parental consent. On April 23, 1775, the Massachusetts Provincial Congress authorized the raising of a colonial army consisting of 26 company regiments. New Hampshire, Rhode Island, and Connecticut soon raised similar but smaller forces. On June 14, 1775, the Second Continental Congress decided to proceed with the establishment of a Continental Army for purposes of common defense, adopting the forces already in place outside Boston (22,000 troops) and New York (5,000). It also raised the first ten companies of Continental troops on a one-year enlistment, riflemen from Pennsylvania, Maryland, Delaware and Virginia to be used as light infantry, who became the 1st Continental Regiment in 1776. On June 15, 1775, the Congress elected George Washington as Commander-in-Chief by unanimous vote; he accepted and served throughout the war without any compensation except for reimbursement of expenses. Four major-generals (Artemas Ward, Charles Lee, Philip Schuyler, and Israel Putnam) and eight brigadier-generals (Seth Pomeroy, Richard Montgomery, David Wooster, William Heath, Joseph Spencer, John Thomas, John Sullivan, and Nathanael Greene) were appointed by the Second Continental Congress in the course of a few days. When Pomeroy declined the appointment, John Thomas was named in his place. As the Continental Congress increasingly adopted the responsibilities and posture of a legislature for a sovereign state, the role of the Continental Army became the subject of considerable debate. Some Americans had a general aversion to maintaining a standing army; on the other hand, the war against the British required the discipline and organization of a modern military.
As a result, the army went through several distinct phases, characterized by official dissolution and reorganization of units. Soldiers in the Continental Army were citizens who had volunteered to serve in the army (but were paid), and at various times during the war, standard enlistment periods lasted from one to three years. Early in the war the enlistment periods were short, as the Continental Congress feared the possibility of the Continental Army evolving into a permanent army. The army never numbered more than 17,000 men. Turnover proved a constant problem, particularly in the winter of 1776-77, and longer enlistments were approved. Broadly speaking, Continental forces consisted of several successive armies, or establishments: - The Continental Army of 1775, comprising the initial New England Army, organized by Washington into three divisions, six brigades, and 38 regiments. Major General Philip Schuyler's ten regiments in New York were sent to invade Canada. - The Continental Army of 1776, reorganized after the initial enlistment period of the soldiers in the 1775 army had expired. Washington had submitted recommendations to the Continental Congress almost immediately after he had accepted the position of Commander-in-Chief, but the Congress took time to consider and implement these. Despite attempts to broaden the recruiting base beyond New England, the 1776 army remained skewed toward the Northeast both in terms of its composition and of its geographical focus. This army consisted of 36 regiments, most standardized to a single battalion of 768 men strong and formed into eight companies, with a rank-and-file strength of 640. - The Continental Army of 1777-80 evolved out of several critical reforms and political decisions that came about when it became apparent that the British were sending massive forces to put an end to the American Revolution. The Continental Congress passed the "Eighty-eight Battalion Resolve", ordering each state to contribute one-battalion regiments in proportion to their population, and Washington subsequently received authority to raise an additional 16 battalions. Enlistment terms extended to three years or to "the length of the war" to avoid the year-end crises that depleted forces (including the notable near-collapse of the army at the end of 1776, which could have ended the war in a Continental, or American, loss by forfeit). - The Continental Army of 1781-82 saw the greatest crisis on the American side in the war. Congress was bankrupt, making it very difficult to replenish the soldiers whose three-year terms had expired. Popular support for the war reached an all-time low, and Washington had to put down mutinies both in the Pennsylvania Line and in the New Jersey Line. Congress voted to cut funding for the Army, but Washington managed nevertheless to secure important strategic victories. - The Continental Army of 1783-84 was succeeded by the United States Army, which persists to this day. As peace was restored with the British, most of the regiments were disbanded in an orderly fashion, though several had already been diminished. In addition to the Continental Army regulars, local militia units, raised and funded by individual colonies/states, participated in battles throughout the war. Sometimes the militia units operated independently of the Continental Army, but often local militias were called out to support and augment the Continental Army regulars during campaigns. 
(The militia troops developed a reputation for being prone to premature retreats, a fact that Brigadier-General Daniel Morgan integrated into his strategy at the Battle of Cowpens in 1781.) The financial responsibility for providing pay, food, shelter, clothing, arms, and other equipment to specific units was assigned to states as part of the establishment of these units. States differed in how well they lived up to these obligations. There were constant funding issues and morale problems as the war continued. This led to the army offering low pay, often rotten food, hard work, cold, heat, poor clothing and shelter, harsh discipline, and a high chance of becoming a casualty. At the time of the Siege of Boston, the Continental Army at Cambridge, Massachusetts, in June 1775, is estimated to have numbered from 14-16,000 men from New England (though the actual number may have been as low as 11,000 because of desertions). Until Washington's arrival, it remained under the command of Artemas Ward, while John Thomas acted as executive officer and Richard Gridley commanded the artillery corps and was chief engineer. The British force in Boston was increasing by fresh arrivals. It numbered then about 10,000 men. Major Generals Howe, Clinton, and Burgoyne, had arrived late in May and joined General Gage in forming and executing plans for dispersing the rebels. Feeling strong with these veteran officers and soldiers around him—and the presence of several ships-of-war under Admiral Graves—the governor issued a proclamation, declaring martial law, branding the entire Continental Army and supporters as "rebels" and "parricides of the Constitution." Amnesty was offered to those who gave up their allegiance to the Continental Army and Congress in favor of the British authorities, though Samuel Adams and John Hancock were still wanted for high treason. This proclamation only served to strengthen the resolve of the Congress and Army. After the British evacuation of Boston (prompted by the placement of Continental artillery overlooking the city in March 1776), the Continental Army relocated to New York. For the next five years, the main bodies of the Continental and British armies campaigned against one another in New York, New Jersey, and Pennsylvania. These campaigns included the notable battles of Trenton, Princeton, Brandywine, Germantown, and Morristown, among many others. The Continental Army was racially integrated, a condition the United States Army would not see again until the Korean War. African American slaves were promised freedom in exchange for military service in New England, and made up one fifth of the Northern Continental Army. Throughout its existence, the Army was troubled by poor logistics, inadequate training, short-term enlistments, interstate rivalries, and Congress's inability to compel the states to provide food, money or supplies. In the beginning, soldiers enlisted for a year, largely motivated by patriotism; but as the war dragged on, bounties and other incentives became more commonplace. Two major mutinies late in the war drastically diminished the reliability of two of the main units, and there were constant discipline problems. The army increased its effectiveness and success rate through a series of trials and errors, often at great human cost. General Washington and other distinguished officers were instrumental leaders in preserving unity, learning and adapting, and ensuring discipline throughout the eight years of war. 
In the winter of 1777-1778, with the addition of Baron von Steuben, of Prussian origin, the training and discipline of the Continental Army began to vastly improve. (This was the infamous winter at Valley Forge.) Washington always viewed the Army as a temporary measure and strove to maintain civilian control of the military, as did the Continental Congress, though there were minor disagreements about how this was carried out. Near the end of the war, the Continental Army was augmented by a French expeditionary force (under General Rochambeau) and a squadron of the French navy (under the Comte de Barras), and in the late summer of 1781 the main body of the army travelled south to Virginia to rendezvous with the French West Indies fleet under Admiral Comte de Grasse. This resulted in the Siege of Yorktown, the decisive Battle of the Chesapeake, and the surrender of the British southern army. This essentially marked the end of the land war in America, although the Continental Army returned to blockade the British northern army in New York until the peace treaty went into effect two years later, and battles took place elsewhere between British forces and those of France and its allies. Planning for the transition to a peacetime force had begun in April 1783 at the request of a congressional committee chaired by Alexander Hamilton. The commander-in-chief discussed the problem with key officers before submitting the army's official views on 2 May. Significantly, there was a broad consensus of the basic framework among the officers. Washington's proposal called for four components: a small regular army, a uniformly trained and organized militia, a system of arsenals, and a military academy to train the army's artillery and engineer officers. He wanted four infantry regiments, each assigned to a specific sector of the frontier, plus an artillery regiment. His proposed regimental organizations followed Continental Army patterns but had a provision for increased strength in the event of war. Washington expected the militia primarily to provide security for the country at the start of a war until the regular army could expand—the same role it had carried out in 1775 and 1776. Steuben and Duportail submitted their own proposals to Congress for consideration. Although Congress declined on 12 May to make a decision on the peace establishment, it did address the need for some troops to remain on duty until the British evacuated New York City and several frontier posts. The delegates told Washington to use men enlisted for fixed terms as temporary garrisons. A detachment of those men from West Point reoccupied New York without incident on November 25. When Steuben's effort in July to negotiate a transfer of frontier forts with Major General Frederick Haldimand collapsed, however, the British maintained control over them, as they would into the 1790s. That failure and the realization that most of the remaining infantrymen's enlistments were due to expire by June 1784 led Washington to order Knox, his choice as the commander of the peacetime army, to discharge all but 500 infantry and 100 artillerymen before winter set in. The former regrouped as Jackson's Continental Regiment under Colonel Henry Jackson of Massachusetts. The single artillery company, New Yorkers under John Doughty, came from remnants of the 2nd Continental Artillery Regiment. Congress issued a proclamation on October 18, 1783 which approved Washington's reductions. 
On November 2 Washington then released his Farewell Order to the Philadelphia newspapers for nationwide distribution to the furloughed men. In the message he thanked the officers and men for their assistance and reminded them that "the singular interpositions of Providence in our feeble condition were such, as could scarcely escape the attention of the most unobserving; while the unparalleled perseverance of the Armies of the United States, through almost every possible suffering and discouragement for the space of eight long years, was little short of a standing miracle." Washington believed that the blending of persons from every colony into "one patriotic band of Brothers" had been a major accomplishment, and he urged the veterans to continue this devotion in civilian life. Washington said farewell to his remaining officers on December 4 at Fraunces Tavern in New York City. On December 23 he appeared in Congress, then sitting at Annapolis, and returned his commission as commander-in-chief: "Having now finished the work assigned me, I retire from the great theatre of Action; and bidding an Affectionate farewell to this August body under whose orders I have so long acted, I here offer my Commission, and take my leave of all the employments of public life." Congress ended the War of American Independence on January 14, 1784 by ratifying the definitive peace treaty that had been signed in Paris on September 3. Congress had again rejected Washington's concept for a peacetime force in October 1783. When moderate delegates then offered an alternative in April 1784 which scaled the projected army down to 900 men in one artillery and three infantry battalions, Congress rejected it as well, in part because New York feared that men retained from Massachusetts might take sides in a land dispute between the two states. Another proposal to retain 350 men and raise 700 new recruits also failed. On June 2 Congress ordered the discharge of all remaining men except twenty-five caretakers at Fort Pitt and fifty-five at West Point. The next day it created a peace establishment acceptable to all interests. The plan required four states to raise 700 men for one year's service. Congress instructed the Secretary at War to form the troops into eight infantry and two artillery companies. Pennsylvania, with a quota of 260 men, had the power to nominate a lieutenant colonel, who would be the senior officer. New York and Connecticut each were to raise 165 men and nominate a major; the remaining 110 men came from New Jersey. Economy was the watchword of this proposal, for each major served as a company commander, and line officers performed all staff duties except those of chaplain, surgeon, and surgeon's mate. Under Josiah Harmar, the First American Regiment slowly organized and achieved permanent status as an infantry regiment of the new Regular Army. The lineage of the First American Regiment is carried on by the 3rd United States Infantry Regiment (The Old Guard). However the United States military realised it needed a well-trained standing army following St. Clair's Defeat on November 4, 1791, when a force led by General Arthur St. Clair was almost entirely wiped out by the Western Confederacy near Fort Recovery, Ohio. The plans, which were supported by U.S. President George Washington and Henry Knox, Secretary of War, led to the disbandment of the Continental Army and the creation of the Legion of the United States. 
The command would be based on the 18th-century military works of Henry Bouquet, a professional Swiss soldier who served as a colonel in the British army, and French Marshal Maurice de Saxe. In 1792 Anthony Wayne, a renowned hero of the American Revolutionary War, was encouraged to leave retirement and return to active service as Commander-in-Chief of the Legion with the rank of Major General. The legion was recruited and raised in Pittsburgh, Pennsylvania. It was formed into four sub-legions. These were created from elements of the 1st and 2nd Regiments from the Continental Army. These units then became the First and Second Sub-Legions. The Third and Fourth Sub-Legions were raised from further recruits. From June 1792 to November 1792, the Legion remained cantoned at Fort LaFayette in Pittsburgh. Throughout the winter of 1792-93, existing troops along with new recruits were drilled in military skills, tactics and discipline at Legionville on the banks of the Ohio River near present-day Baden, Pennsylvania. The following spring the newly named Legion of the United States left Legionville for the Northwest Indian War, a struggle between the United States and American Indian tribes affiliated with the Western Confederacy over the area northwest of the Ohio River. The overwhelmingly successful campaign was concluded with the decisive victory at the Battle of Fallen Timbers on August 20, 1794, in which Maj. Gen. Anthony Wayne applied the techniques of wilderness operations perfected by Sullivan's 1779 expedition against the Iroquois. The training the troops received at Legionville was also seen as instrumental to this overwhelming victory. Nevertheless, Steuben's Blue Book remained the official manual for the legion, as well as for the militia of most states, until it was replaced by Winfield Scott's system in 1835. In 1796, the United States Army was raised following the discontinuation of the Legion of the United States. This preceded the graduation of the first cadets from the United States Military Academy at West Point, New York, which was established in 1802. "As the Continental Army has unfortunately no uniforms, and consequently many inconveniences must arise from not being able to distinguish the commissioned officers from the privates, it is desired that some badge of distinction be immediately provided; for instance that the field officers may have red or pink colored cockades in their hats, the captains yellow or buff, and the subalterns green."
Later on in the war, the Continental Army established its own uniform with a black cockade (as used in much of the British Army) among all ranks and the following insignia:
[Table: "Ranks and insignia of the Continental Army". The original table listed the ranks of major general, brigadier general, colonel, lieutenant colonel, aide-de-camp, major, captain, subaltern, lieutenant, ensign, sergeant major, sergeant, corporal and private, with distinctions such as a jacket with gold trim, silver or gold epaulets, a single gold epaulet, red epaulets or a single red epaulet, a hat with a green cockade, or no epaulets; the original row layout was not preserved.]
- Siege of Boston
- Battle of Long Island
- Battle of Harlem Heights
- Battle of Trenton
- Battle of Princeton
- Battle of Brandywine
- Battle of Germantown
- Battle of Saratoga
- Battle of Monmouth
- Siege of Charleston
- Battle of Camden
- Battle of Cowpens
- Battle of Guilford Court House
- Siege of Yorktown
- Departments of the Continental Army
- Continental Navy
- Pluckemin Continental Artillery Cantonment Site
- History of the United States Army
- Peter Francisco, Revolutionary War soldier and hero
- Middlebrook encampment in Middlebrook, New Jersey, winter of 1776–77 and winter of 1778–79
- Valley Forge in Valley Forge, Pennsylvania, winter of 1777–78
https://en.wikipedia.org/wiki/Continental_Army
4.21875
Remember Tania and Alex and the garden in the Frequency Tables to Organize and Display Data Concept? Tania had her hands full trying to figure out how many workers were in the garden on which days. Tania has a frequency table, but how can she make a visual display of the data?
|# of People Working||Frequency|
Using this frequency table, how can Tania make a line plot? A line plot is another display method we can use to organize data. Like a frequency table, it shows how many times each number appears in the data set. Instead of putting the information into a table, however, we graph it on a number line. Line plots are especially useful when the data falls over a large range. Take a look at the data and the line plot below. This data represents the number of students in each class at a local community college. 30, 31, 31, 31, 33, 33, 33, 33, 37, 37, 38, 40, 40, 41, 41, 41 The first thing that we might do is to organize this data into a frequency table. That will let us know how often each number appears.
|# of students||Frequency|
|30||1|
|31||3|
|33||4|
|37||2|
|38||1|
|40||2|
|41||3|
Now if we look at this data, we can make a couple of conclusions.
- The range of students in each class is from 30 to 41.
- There aren’t any classes with 32, 34, 35, 36 or 39 students in them.
Now that we have a frequency table, we can build a line plot to show this same data. Building the line plot involves counting the number of students and then plotting the information on a number line. We use X’s above the number line to show how many times each value occurs. Notice that even though we didn’t have a class with 32 students in it, we still had to include that number on the number line. This is very important. Each value in the range of numbers needs to be represented, even if that value is 0. Now let's use this information to answer a few questions. How many classes have 31 students in them? How many classes have 38 students in them? How many classes have 33 students in them? Now Tania can take the frequency table and make a line plot for the farm.
|# of People Working||Frequency|
Now, let’s draw a line plot to show the data in another way. Now that we have the visual representations of the data, it is time to draw some conclusions. Remember that Tania and Alex know that there needs to be at least three people working on any given day. By analyzing the data, you can see that there are five days when there are only one or two people working. With the new data, Tania and Alex call a meeting of all of the workers. When they display the data, it is clear why everything isn’t getting done. Together, they are able to figure out which days need more people, and they solve the problem.
- Frequency - how often something occurs
- Data - information about something or someone, usually in number form
- Analyze - to look at data and draw conclusions based on patterns or numbers
- Frequency table - a table or chart that shows how often something occurs
- Line plot - data that shows frequency by graphing data over a number line
- Organized data - data that is listed in numerical order
Here is one for you to try on your own. Jeff counted the number of ducks he saw swimming in the pond each morning on his way to school. Here are his results: 6, 8, 12, 14, 5, 6, 7, 8, 12, 11, 12, 5, 6, 6, 8, 11, 8, 7, 6, 13 Jeff’s data is unorganized. It is not written in numerical order. When we have unorganized data, the first thing that we need to do is to organize it in numerical order. 5, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 8, 8, 11, 11, 12, 12, 12, 13, 14 Next, we can make a frequency table. There are two columns in the frequency table.
The first is the number of ducks and the second is how many times each number of ducks was seen on the pond. The second column is the frequency of each number of ducks.
|Number of Ducks||Frequency|
|5||2|
|6||5|
|7||2|
|8||4|
|9||0|
|10||0|
|11||2|
|12||3|
|13||1|
|14||1|
Now that we have a frequency table, the next step is to make a line plot. Then we will have two ways of examining the same data. Here is a line plot that shows the duck information. Here are some things that we can observe by looking at both methods of displaying data:
- In both, the range of numbers is shown. There were between 5 and 14 ducks seen, so each number from 5 to 14 is represented.
- There weren’t any days where 9 or 10 ducks were counted, yet both are represented because they fall in the range of ducks counted.
- Both methods help us to visually understand data and its meaning.
http://www.hstutorials.net/math/preAlg/php/php_12/php_12_01_x13.htm – Solving a problem using frequency tables and line plots.
Directions: Here is a line plot that shows how many seals came into the harbor in La Jolla, California during an entire month. Use it to answer the following questions.
1. How many times did thirty seals appear on the beach?
2. Which two categories have the same frequency?
3. How many times were 50 or more seals counted on the beach?
4. True or False. This line plot shows us the number of seals that came on each day of the month.
5. True or False. There weren’t any days that fewer than 30 seals appeared on the beach.
6. How many times were 60 seals on the beach?
7. How many times were 70 seals on the beach?
8. What is the smallest number of seals that was counted on the beach?
9. What is the greatest number of seals that were counted on the beach?
10. Does the frequency table show any number of seals that weren't counted at all?
Directions: Organize each list of data. Then create a frequency table to show the results. There are two answers for each question.
11. 8, 8, 2, 2, 2, 2, 2, 5, 6, 3, 3, 4
12. 20, 18, 18, 19, 19, 19, 17, 17, 17, 17, 17
13. 100, 99, 98, 92, 92, 92, 92, 92, 92, 98, 98
14. 75, 75, 75, 70, 70, 70, 70, 71, 72, 72, 72, 74, 74, 74
15. 1, 1, 1, 1, 2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 5, 5
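Since the whole lesson is about turning a raw list of counts into a frequency table and a line plot, here is a small Python sketch (not part of the original CK-12 lesson) that does the same thing for Jeff's duck data using only the standard library; the text-based "line plot" of X's and the variable names are just illustrative.

```python
# Build a frequency table and a simple text line plot from Jeff's duck counts.
from collections import Counter

data = [6, 8, 12, 14, 5, 6, 7, 8, 12, 11, 12, 5, 6, 6, 8, 11, 8, 7, 6, 13]

counts = Counter(data)            # how often each value occurs
low, high = min(data), max(data)  # the range of the data: 5 through 14

print("Number of Ducks | Frequency")
for value in range(low, high + 1):           # 9 and 10 are included even though
    print(f"{value:>15} | {counts[value]}")  # their frequency is 0

print()
print("Line plot (one X per observation):")
for value in range(low, high + 1):
    print(f"{value:>2} | " + "X" * counts[value])
```

Every value from 5 to 14 appears in both displays, which mirrors the rule above that each number in the range must be represented even when its frequency is 0.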
http://www.ck12.org/statistics/Line-Plots-from-Frequency-Tables/lesson/Line-Plots-from-Frequency-Tables/r17/
4.09375
Slowing the spin of the Earth The moon's gravity deforms the Earth, causing bulges in land and sea (tides). As the Earth turns, it rotates those bulges away from the moon, but the moon's gravity pulls back on the bulges. This is called gravitational friction, which slows down the planet's rotation ever so slightly. Days grow longer as the minuscule braking adds up over hundreds of millions of years: How much does the Earth slow down every year? AN ANALOGY TO PUT THINGS INTO PERSPECTIVE:
• Imagine that the 2,445-mile distance between Washington and San Francisco represents today's day length of 24 hours.
• 200 million years ago, an analogous distance for day length would stretch only from San Francisco to near the West Virginia/Virginia border.
• Every 100 years, the length of a day increases by 0.002 seconds — or 3.23 inches farther down the road to Washington. The annual gain is an infinitesimal 0.00002 seconds, which on our analogous cross-country trip would amount to 0.82 millimeters, a little more than the thickness of a thumbnail.
[Diagram labels: the moon's gravity pulls back on the tidal bulge displaced by the rotating Earth; axis of bulge and axis of moon's gravity shown; tidal bulge greatly exaggerated; distances not to scale.]
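For readers who want to check the arithmetic, here is a rough Python sketch (not part of the original graphic). It assumes the 2,445-mile Washington-to-San Francisco figure and the 0.002-seconds-per-century slowdown quoted above; with those exact inputs it gives about 3.6 inches per century and 0.9 mm per year, so the printed 3.23-inch and 0.82-millimeter values appear to rest on a slightly shorter scale distance of roughly 2,200 miles.

```python
# Convert the tidal-braking rate into the road-trip analogy used above.
MILES_DC_TO_SF = 2445            # analogy: this distance stands for a 24-hour day
DAY_SECONDS = 24 * 60 * 60       # 86,400 seconds in a day
SLOWDOWN_PER_CENTURY = 0.002     # the day lengthens ~0.002 s every 100 years

inches_per_second = MILES_DC_TO_SF * 63360 / DAY_SECONDS   # map scale: ~1,793 in per s

gain_per_century_in = SLOWDOWN_PER_CENTURY * inches_per_second
gain_per_year_mm = (SLOWDOWN_PER_CENTURY / 100) * inches_per_second * 25.4

print(f"Scale: {inches_per_second:.0f} inches of road per second of day length")
print(f"Gain per century: {gain_per_century_in:.2f} inches along the route")
print(f"Gain per year:    {gain_per_year_mm:.2f} mm along the route")

# If the modern rate had held constant (it has not), the day 200 million years
# ago would have been roughly 0.002 s/century * 2,000,000 centuries = 4,000 s,
# a little over an hour, shorter than today.
shorter_by_hours = SLOWDOWN_PER_CENTURY * (200e6 / 100) / 3600
print(f"Constant-rate estimate for 200 Myr ago: ~{24 - shorter_by_hours:.1f}-hour day")
```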
http://apps.washingtonpost.com/g/page/national/slowing-the-spin-of-the-earth/471/
4.28125
Insect Structure and Function For illustrations to accompany this article see Insect Structure and Function The arthropods are a large group of invertebrate animals which include insects, spiders, millipedes, centipedes and crustacea such as lobsters and crabs. All arthropods have a hard exoskeleton or cuticle, segmented bodies and jointed legs. The crustacea and insects also have antennae, compound eyes and, often, three distinct regions to their bodies: head, thorax and abdomen. General Characteristics of Insects The insects differ from the rest of the arthropods in having only three pairs of jointed legs on the thorax and, typically, two pairs of wings. There are a great many different species of insects and some, during evolution, have lost one pair of wings, as in the houseflies, crane flies and mosquitoes. Other parasitic species like the fleas have lost both pairs of wings. In beetles, grasshoppers and cockroaches, the first pair of wings has become modified to form a hard outer covering over the second pair. Cuticle and ecdysis. The value of the external cuticle is thought to lie mainly in reducing the loss from the body of water vapour through evaporation, but it also protects the animal from damage and bacterial invasion, maintains its shape and allows rapid locomotion. The cuticle imposes certain limitations in size, however, for if arthropods were to exceed the size of some of the larger crabs, the cuticle would become too heavy for the muscles to move the limbs. Between the segments of the body and at the joints of the limbs and other appendages, the cuticle is flexible and allows movement. For the most part, however, the cuticle is rigid and prevents any increase in the size of the insect except during certain periods of its development when the insect sheds its cuticle (ecdysis) and increases its volume before the new cuticle has time to harden. Only the outermost layer of the cuticle is shed, the inner layers are digested by enzymes secreted from the epidermis and the fluid so produced is absorbed back into the body. Muscular contractions force the blood into the thorax, causing it to swell and so split the old cuticle along a predetermined line of weakness. The swallowing of air often accompanies ecdysis; assisting the splitting of the cuticle and keeping the body expanded while the new cuticle hardens. In insects, this moulting, or ecdysis, takes place only in the larval and pupal form and not in adults. In other words, mature insects do not grow. Breathing. Running through the bodies of all insects is a branching system of tubes, tracheae which contain air. They open to the outside by pores called spiracles and they conduct air from the atmosphere to all living regions of the body The tracheae are lined with cuticle which is thickened in spiral bands This thickening keeps the tracheae open against the internal pressure of body fluids. The spiracles, typically, open on the flanks of each segment of the body, but in some insects there are only one or two openings. The entrance to the spiracle is usually supplied with muscles which control its opening or closure. Since the spiracles are one of the few areas of the body from which evaporation of water can occur, the closure of the spiracles when the insect is not active and therefore needs less oxygen, helps to conserve moisture. The tracheae branch repeatedly until they terminate in very fine tracheoles which invest or penetrate the tissues and organs inside the body. 
The walls of tracheae and tracheoles are permeable to gases, and oxygen is able to diffuse through them to reach the living cells. As might be expected the supply of tracheoles is most dense in the region of very active muscle, e.g. the flight muscles in the thorax. The movement of oxygen from the atmosphere, through the spiracles, up the tracheae and tracheoles to the tissues, and the passage of carbon dioxide in the opposite direction, can be accounted for by simple diffusion but in active adult insects there is often a ventilation process which exchanges up to 60 per cent of the air in the tracheal system. In many beetles, locusts, grasshoppers and cockroaches, the abdomen is slightly compressed vertically (dorso-ventrally) by contraction of internal muscles. In bees and wasps the abdomen is compressed rhythmically along its length, slightly telescoping the segments. In both cases, the consequent rise of blood pressure in the body cavity compresses the tracheae along their length (like a concertina) and expels air from them. When the muscles relax, the abdomen springs back into shape, the tracheae expand and draw in air. Thus, unlike mammals, the positive muscular action in breathing is that which results in expiration. This tracheal respiratory system is very different from the respiratory systems of the vertebrates, in which oxygen is absorbed by gills or lungs and conveyed in the blood stream to the tissues. In the insects, the oxygen diffuses through the trachea and tracheoles directly to the organ concerned. The carbon dioxide escapes through the same path although a proportion may diffuse from the body surface. Blood system. The tracheal supply carrying oxygen to the organs gives the circulatory system a rather different role in insects from that in vertebrates. Except where the tracheoles terminate at some distance from a cell, the blood has little need to carry dissolved oxygen and, with a few exceptions, it contains no haemoglobin or cells corresponding to red blood cells. There is a single dorsal vessel which propels blood forward and releases it into the body cavity, thus maintaining a sluggish circulation. Apart from this vessel, the blood is not confined in blood vessels but occupies the free space between the cuticle and the organs in the body cavity. The blood therefore serves mainly to distribute digested food, collect excretory products and, in addition, has important hydraulic functions in expanding certain regions of the body to split the old cuticle and in pumping up the crumpled wings of the newly emerged adult insect. Touch. From the body surface of the insect there arises a profusion of fine bristles most of which have a sensory function, responding principally to touch, vibration, or chemicals. The tactile (touch-sensitive) bristles are jointed at their bases and when a bristle is displaced to one side, it stimulates a sensory cell which fires impulses to the central nervous system. The tactile bristles are numerous on the tarsal segments, the head, wing margins, or antennae according to the species and as well as informing the insect about contact stimuli, they probably respond to air currents and vibrations in the ground or in the air. Proprioceptors. Small oval or circular areas of cuticle are differentially thickened and supplied with sensory fibres. They probably respond to distortions in the cuticle resulting from pressure, and so feed back information to the central nervous system about the position of the limbs. 
Organs of this kind respond to deflections of the antennae during flight and are thought to "measure" the air speed and help to adjust the wing movements accordingly. In some insects there are stretch receptors associated with muscle fibres, apparently similar to those in vertebrates. Sound. The tactile bristles on the cuticle and on the antennae respond to low-frequency vibrations but many insects have more specialized sound detectors in the form of a thin area of cuticle overlying a distended trachea or air sac and invested with sensory fibres. Such tympanal organs appear on the thorax or abdomen or tibia according to species and are sensitive to sounds of high frequency. They can be used to locate the source of sounds as in the case of the male cricket "homing" on the sound of the female's "chirp", and in some cases can distinguish between sounds of different frequency. Smell and taste. Experiments show that different insects can distinguish between chemicals which we describe as sweet, sour, salt and bitter, and in some cases more specific substances. The organs of taste are most abundant on the mouthparts, in the mouth, and on the tarsal segments but the nature of the sense organs concerned is not always clear. Smell is principally the function of the antennae. Here there are bristles, pegs or plates with a very thin cuticle and fine perforations through which project nerve endings sensitive to chemicals. Sometimes these sense organs are grouped together and sunk into olfactory pits. In certain moths the sense of smell is very highly developed. The male Emperor moth will fly to an unmated female from a distance of a mile, attracted by the "scent" which she exudes. A male moth's antennae may carry many thousand chemo-receptors. Sight. The compound eyes of insects consist of thousands of identical units called ommatidia packed closely together on each side of the head. Each ommatidium consists of a lens system formed partly from a thickening of the transparent cuticle and partly from a special crystalline cone. This lens system concentrates light from within a cone of 20°, on to a transparent rod, the rhabdom. The light, passing down this rhabdom, stimulates the eight or so retinal cells grouped round it to fire nervous impulses to the brain. Each ommatidium can therefore record the presence or absence of light, its intensity, in some cases its colour and, according to the position of the ommatidium in the compound eye, its direction. Although there may be from 2000 to 10,000 or more ommatidia in the compound eye of an actively flying insect, this number cannot reconstruct a very accurate picture of the outside world. Nevertheless, the "mosaic image" so formed, probably produces a crude impression of the form of well-defined objects enabling bees, for example, to seek out flowers and to use landmarks for finding their way to and from the hive. It is likely that the construction of compound eyes makes them particularly sensitive to moving objects, e.g. bees are more readily attracted to flowers which are being blown by the wind. Flower-visiting insects, at least, can distinguish certain colours from shades of grey of equal brightness. Bees are particularly sensitive to blue, violet and ultra-violet but cannot distinguish red and green from black and grey unless the flower petals are reflecting ultra-violet light as well. Some butterflies can distinguish yellow, green and red. 
The simple eyes of, for example, caterpillars, consist of a cuticular lens with a group of light-sensitive cells beneath, rather like a single ommatidium. They show some colour sensitivity and, when grouped together, some ability to discriminate form. The ocelli which occur in the heads of many flying insects probably respond only to changes in light intensity. Movement in insects depends, as it does in vertebrates, on muscles contracting and pulling on jointed limbs or other appendages. The muscles are within the body and limbs, however, and are attached to the inside of the cuticle. A pair of antagonistic muscles is attached across a joint in a way which could bend and straighten the limb. Many of the joints in the -insect are of the "peg and socket" type. They permit movement in one plane only, like a hinge joint, but since there are several such joints in a limb, each operating in a different direction, the limb as a whole can describe fairly free directional movement. Walking. The characteristic walking pattern of an insect involves moving three legs at a time. The body is supported by a "tripod" of three legs while the other three are swinging forward to a new position. On the last tarsal joint are claws and, depending on the species, adhesive pads which enable the insect to climb very smooth surfaces. The precise mechanism of adhesion is uncertain. Modification of the limbs and their musculature enables insects to leap, e.g. grasshopper, or swim, e.g. water beetles. Flying. In insects with relatively light bodies and large wings such as butterflies and dragonflies, the wing muscles in the thorax pull directly on the wing where it is articulated to the thorax, levering it up and down. Insects such as bees, wasps and flies, with compact bodies and a smaller wing area have indirect flight muscles which elevate and depress the wings very rapidly by pulling on the walls of the thorax and changing its shape. In both cases there are direct flight muscles which, by acting on the wing insertion, can alter its angle in the air. During the downstroke the wing is held horizontally, so thrusting downwards on the air and producing a lifting force. During the upstroke the wing is rotated vertically and offers little resistance during its upward movement through the air. It is not possible to make very useful generalizations about the feeding methods of insects because they are so varied. However, insects do have in common three pairs of appendages called mouth parts, hinged to the head below the mouth and these extract or manipulate food in one way or another. The basic pattern of these mouth parts is the same in most insects but in the course of evolution they have become modified and adapted to exploit different kinds of food source. The least modified are probably those of insects such as caterpillars, grasshoppers, locusts and cockroaches in which the first pair of appendages, mandibles, form sturdy jaws, working sideways across the mouth and cutting off pieces of vegetation which are manipulated into the mouth by the other mouth parts, the maxillae and labium. Aphids are small insects (e.g. greenfly) which feed on plant juices that they suck from leaves and stems. Their mouthparts are greatly elongated to form a piercing and sucking proboscis. The maxillae fit together to form a tube which can be pushed into plant tissues to reach the food-conducting vessels of the phloem and so extract nutrients. 
The mosquito has mandibles and maxillae in the form of slender, sharp stylets which can cut through the skin of a mammal as well as penetrating plant tissues. To obtain a blood meal the mosquito inserts its mouth parts through the skin to reach a capillary and then sucks blood through a tube formed from the labrum or "front lip" which precedes the mouth parts. Another tubular structure, the hypopharynx, serves to inject into the wound a substance which prevents the blood from clotting and so blocking the tubular labrum. In both aphid and mosquito the labium is rolled round the other mouth parts, enclosing them in a sheath when they are not being used. In the butterfly, only the maxillae contribute to the feeding apparatus. The maxillae are greatly elongated and in the form of half tubes, i.e. like a drinking straw split down its length. They can be fitted together to form a tube through which nectar is sucked from the flowers. The housefly also sucks liquid but its mouthparts cannot penetrate tissue. Instead the labium is enlarged to form a proboscis which terminates in two pads whose surface is channelled by grooves called pseudotracheae. The fly applies its proboscis to the food and pumps saliva along the channels and over the food. The saliva dissolves soluble parts of the food and may contain enzymes which digest some of the insoluble matter. The nutrient liquid is then drawn back along the pseudotracheae and pumped into the alimentary canal. For illustrations to accompany this article see Insect Structure and Function |Search this site| |Search the web| © Copyright 2004 - 2016 D G Mackean & Ian Mackean. All rights reserved.
http://www.biology-resources.com/insect-structure.html
4.5
If you have decided that an experiment is the best approach to testing your hypothesis, then you need to design the experiment. Experimental design refers to how participants are allocated to the different conditions (or IV groups) in an experiment. Probably the commonest way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group and not the control group. The researcher must decide how he/she will allocate their sample to these conditions. For example, if there are 10 participants, will all 10 participants take part in both conditions (i.e. repeated measures) or will the participants be split in half and take part in only one condition each? Three Types of Experimental Designs are Commonly Used: 1. Independent Measures: Different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants. This should be done by random allocation, which ensures that each participant has an equal chance of being assigned to one group or the other. Independent measures involves using two separate groups of participants; one in each condition. Pro: Avoids order effects (such as practice or fatigue) as people participate in one condition only. If a person is involved in several conditions they may become bored, tired and fed up by the time they come to the second condition, or become wise to the requirements of the experiment! Con: More people are needed than with the repeated measures design (i.e. more time consuming). Con: Differences between participants in the groups may affect results, for example, variations in age, sex or social background. These differences are known as participant variables (i.e. a type of extraneous variable). 2. Repeated Measures: The same participants take part in each condition of the independent variable. This means that each condition of the experiment includes the same group of participants. Pro: Fewer people are needed as they take part in all conditions (i.e. saves time). Con: There may be order effects. Order effects refer to the order of the conditions having an effect on the participants’ behavior. Performance in the second condition may be better because the participants know what to do (i.e. practice effect). Or their performance might be worse in the second condition because they are tired (i.e. fatigue effect). Suppose we used a repeated measures design in which all of the participants first learned words in loud noise and then learned them in no noise. We would expect the participants to show better learning in no noise simply because of order effects. To combat order effects the researcher counterbalances the order of the conditions for the participants. Counterbalancing means alternating the order in which participants perform in different conditions of an experiment. The sample is split into two groups: experimental (A) and control (B). For example, group 1 does ‘A’ then ‘B’, group 2 does ‘B’ then ‘A’. This is to eliminate order effects. Although order effects occur for each participant, because they occur equally in both groups, they balance each other out in the results. 3. Matched Pairs: Participants are matched in pairs on key characteristics, and one member of each pair is then randomly assigned to the experimental group and the other to the control group. Pro: Reduces participant (i.e.
extraneous) variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics. Pro: Avoids order effects, and so counterbalancing is not necessary. Con: Very time-consuming trying to find closely matched pairs. Con: Impossible to match people exactly, unless identical twins! Experimental Design Summary Experimental design refers to how participants are allocated to the different conditions (or IV groups) in an experiment. There are three types: 1. Independent measures / groups: Different participants are used in each condition of the independent variable. 2. Repeated measures: The same participants take part in each condition of the independent variable. 3. Matched pairs: Each condition uses different participants, but they are matched in terms of certain characteristics, e.g. sex, age, intelligence etc. How to cite this article: McLeod, S. A. (2007). Experimental Design. Retrieved from www.simplypsychology.org/experimental-designs.html
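To make the allocation schemes above concrete, here is a minimal Python sketch (not from the original article) of random allocation for an independent measures design and simple AB/BA counterbalancing for a repeated measures design; the participant labels and the "loud noise"/"no noise" conditions are illustrative only.

```python
# Random allocation and counterbalancing, sketched with the standard library.
import random

participants = [f"P{i}" for i in range(1, 11)]   # ten hypothetical participants

# Independent measures: shuffle, then split the sample in half at random.
shuffled = participants[:]
random.shuffle(shuffled)
experimental, control = shuffled[:5], shuffled[5:]
print("Experimental group:", experimental)
print("Control group:     ", control)

# Repeated measures with counterbalancing: everyone completes both conditions,
# but half the sample does A first and half does B first, so practice and
# fatigue effects fall equally on both conditions.
conditions = ("A: loud noise", "B: no noise")
for index, person in enumerate(participants):
    order = conditions if index % 2 == 0 else conditions[::-1]
    print(person, "->", " then ".join(order))
```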
http://www.simplypsychology.org/experimental-designs.html
4.125
In addition to the seemingly infinite number of kanji, or Chinese characters, Japanese uses two sets of phonic characters called hiragana and katakana. During the Heian Period (794-1185), poetry written by aristocratic ladies used kanji (then referred to as Manyogana) to express the Japanese language. Over time, these ladies developed a simpler and more fluid style of writing which became known as onnade (woman's hand) and later as hiragana. This form of writing gained full acceptance in the early 10th century when it was used to write the Imperial anthology of waka (Japanese verse) known as the Kokin Wakashu. Katakana were developed as a way of phonetically writing Chinese Buddhist texts and were standardized in the 10th century. Anthologies of waka were written in katakana from this time. These days, romaji (roman letters) and English words can be seen quite often. Hiragana are cursive characters usually used with kanji to add inflectional endings or other suffixes (such as to conjugate verbs and create adjectives); as a replacement or supplement for kanji which are difficult to read (particularly for children); for grammatical particles and function words; or simply for visual or graphic effect. (See examples below) The non-cursive katakana are used to write loan words from other languages, especially English; to write onomatopoeic words (similar to the use of italics in English); or for visual or graphic effect. (See examples below). The tables below show the hiragana and katakana alphabets and their romanized syllables. In each case, the upper left character is the hiragana and the upper right character is the katakana. There are five basic vowel sounds: a, i, u, e and o, which are pronounced pretty much the same as in Italian or Spanish. The other sounds are formed by combining the vowels with various consonants. Table 1 shows the 46 basic kana forms in use today. Table 2 shows simple compounds formed by adding the kana for 'ya', 'yu' and 'yo' to other kana. Table 3 shows basic kana altered by the addition of two short strokes (to make a voiced consonant, such as 'ga') or a circular stroke (to make an unvoiced p-like bilabial stop, such as 'pa') to the upper right. Table 4 is a combination of Tables 2 and 3. Double consonants, such as in the word 'rokku' (rock music), are written with a small 'tsu' character between the 'ro' and 'ku' characters. In recent years, as more and more loanwords are introduced to Japanese, new kana symbols are being used to represent the pronunciation of such English letters as v (confused with b) and f (confused with h), although they are not official. Some examples of words using hiragana and katakana
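As a small illustration of the double-consonant rule mentioned above, here is a toy Python sketch (not part of the original article) that converts the example word "rokku" into katakana by inserting a small tsu (ッ) before the doubled consonant. The syllable table is deliberately tiny and covers only this example; a real romaji-to-kana converter would need the full kana tables and extra rules.

```python
# Toy romaji-to-katakana conversion showing the small-tsu rule for "rokku".
SYLLABLES = {"ro": "ロ", "ku": "ク", "ka": "カ"}   # partial table, illustration only
SMALL_TSU = "ッ"
VOWELS = "aiueo"

def romaji_to_katakana(word: str) -> str:
    out, i = "", 0
    while i < len(word):
        # A doubled consonant (e.g. the "kk" in "rokku") becomes a small tsu,
        # and the consonant is then read again as part of the next syllable.
        if i + 1 < len(word) and word[i] == word[i + 1] and word[i] not in VOWELS:
            out += SMALL_TSU
            i += 1
        else:
            out += SYLLABLES[word[i:i + 2]]   # consume one consonant+vowel syllable
            i += 2
    return out

print(romaji_to_katakana("rokku"))   # ロック, the article's "rock music" example
```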
http://www.japan-zone.com/new/alphabet.shtml
4.09375
W and Z bosons W and Z bosons are a group of elementary particles. They are bosons, which means that they have integer spin; the W and Z each have a spin of 1. Both had been found in experiments by the year 1983. Together, they are responsible for a force known as "weak force." Weak force is called weak because it is not as strong as other forces like electromagnetism. There are two W bosons with different charges, the W+ and its antiparticle, the W−. Z bosons are their own antiparticle. Naming W bosons are named after the weak force that they are responsible for. Weak force is what physicists believe is responsible for the breaking down of some radioactive elements, in the form of Beta decay. In the late '70s, scientists managed to combine early versions of the weak force with electromagnetism, and called it the electroweak force. Creation of W and Z bosons In nature, W bosons are most familiar from Beta decay, which is a form of radioactive decay; W and Z bosons can also be created directly in high-energy particle collisions, which is how they were first observed in 1983. Beta Decay Beta decay occurs when there are a lot of neutrons in an atom. An easy way to think of a neutron is that it is made of one proton and one electron. When there are too many neutrons in an atom's nucleus, one neutron will split and form a proton and an electron. The proton will stay where it is, and the electron will be launched out of the atom at incredible speed. This is why Beta radiation is harmful to humans. The above model is not entirely accurate, as both protons and neutrons are each made of three quarks, which are elementary particles. A proton is made of two up quarks (+2/3 charge), and one down quark (-1/3 charge). A neutron is made of one up quark and two down quarks. Because of this, the proton has +1 charge and the neutron 0 charge. The different types of quarks are known in the scientific world as flavours. Weak force is believed to be able to change the flavour of a quark. For example, when it changes a down quark in a neutron into an up quark, the charge of the neutron becomes +1, since it would have the same arrangement of quarks as a proton. The three-quark neutron with a charge of +1 is no longer a neutron after this, as it fulfills all of the requirements to be a proton. Therefore, Beta decay will cause a neutron to become a proton (along with some other end-products). W boson decay When a quark changes flavour, as it does in Beta decay, it releases a W boson. W bosons only last for about 3×10⁻²⁵ seconds, which is why we had not discovered them until less than half a century ago. Surprisingly, W bosons have a mass of about 80 times that of a proton (one proton weighs one atomic mass unit). Keep in mind that the neutron that it came from has almost the same weight as the proton. In the quantum world, it is not an extremely uncommon occurrence for a more massive particle to come from a less massive particle, because the heavier particle exists for only a fleeting moment; the energy-time uncertainty principle (which involves Planck's constant) allows the extra mass-energy to exist for such a short time. After the 3×10⁻²⁵ seconds has passed, a W boson decays into one electron and one antineutrino. Since neutrinos rarely interact with matter, we can ignore them from now on. The electron is propelled out of the atom at a high speed. The proton that was produced by the Beta decay stays in the atom nucleus, and raises the atomic number by one. Z boson decay Z bosons are also predicted in the Standard Model of physics, which successfully predicted the existence of W bosons.
Z bosons decay into a fermion and its antiparticle. Fermions are particles, such as electrons and quarks, that have half-integer spin (measured in units of the reduced Planck constant).
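The quark arithmetic in the beta-decay discussion above can be checked in a few lines of Python; this sketch (not part of the original article) simply adds up quark charges and shows that changing one down quark into an up quark turns a neutral neutron into a particle of charge +1, the same as a proton.

```python
# Add up quark charges to verify the beta-decay bookkeeping described above.
from fractions import Fraction

CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}   # quark charges in units of e

def total_charge(quarks):
    return sum(CHARGE[q] for q in quarks)

neutron = ["u", "d", "d"]                      # udd -> charge 0
proton = ["u", "u", "d"]                       # uud -> charge +1
print("neutron charge:", total_charge(neutron))
print("proton charge: ", total_charge(proton))

# Beta-minus decay: exactly one down quark changes flavour to up (via a W boson).
after_decay = neutron.copy()
after_decay[after_decay.index("d")] = "u"
print("neutron after d -> u:", total_charge(after_decay))   # +1, same as a proton
```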
https://simple.wikipedia.org/wiki/W_and_Z_bosons
4.03125
Section 1- Introduction The cold war is the name given to a period of history between 1945 and 1989. During this time the USA and the USSR challenged each other. This was a time of extreme tension and threat of war. The USA and the USSR were known as superpowers as they were far stronger and more powerful than any other countries. Although fear and threats of war were very real and there were times when war seemed imminent, war never actually broke out. Both sides knew the destruction a war would cause would be enormous due to nuclear weapons. This meant conflict was always resolved using diplomatic means. The ideological rivalry between the Superpowers is central to understanding the cold war. The ideological preference of the USA was a commitment to democracy and free enterprise leading to a capitalist society. This was characterised by:
• free elections, with a choice of parties
• democratic freedom - freedom of speech, expression and assembly
• free mass media - independent media sources
• free enterprise - business, manufacturing, banking etc.
• Individual rights - the right to vote, to a fair trial etc.
The Soviet Union The Russian communist ideology was based on Marxism/Leninism with a commitment to equality. This was exemplified by:
• A one-party state, with only the communist party
• A totalitarian system - all aspects of life influenced by communism
• An emphasis on equality for all
• Strict control of the media - censorship
• State control of the means of production
• Suppression of dissenting opinions and opposition - secret police
How was the cold war ‘fought’ or conducted? The cold war was conducted in a number of ways, with conflict always stopping short of direct war between the superpowers. The Arms Race The two sides engaged in an ongoing nuclear and conventional arms race from 1945 onwards. Each side tried to develop increasingly powerful and sophisticated weapons. Nuclear weapons were never meant to be used. Both sides had a network of spies and secret agents engaged in gathering intelligence and information. The USA made extensive use of the CIA and the USSR used the KGB. Alliances were key to the cold war. These included NATO for the USA and the West, and the Warsaw Pact for the USSR. Both had well organised command networks and carried out regular drills and exercises. The space race was shaped by the cold war. The USSR derived great prestige from the launch of the first space satellite, Sputnik 1, in 1957. The USA succeeded with the moon landings in 1969. The USA and the USSR competed against each other in the Olympics. During the cold war both sides offered aid to countries in urgent need of help. This began with the American Marshall Plan in the late 1940s. The USSR began offering aid to independent African and Asian countries. Both sides were trying to influence the recipients. Crisis Management: Tests of leadership for the superpowers During the cold war serious episodes of tension and crisis developed. This increased the risk of armed confrontation and war. In some cases a crisis involved only one superpower, e.g. the USSR in Hungary in 1956 and Czechoslovakia in 1968. In these instances the USA responded only with diplomatic protest. This was tactical, as the USA accepted that the USSR was acting within its own area of responsibility, so the matter had little to do with the USA. The war in Vietnam was the boldest episode of the cold war. The USA intervened in Vietnam in strength and made a major commitment to stop advancing communism. The USSR limited its actions to supplying military resources to North Vietnam.
The Berlin Crisis of 1961 was potentially very serious, with direct confrontation between the USA and the USSR. The Cuban Missile Crisis of 1962 was the ultimate crisis. This was a direct confrontation between the USA and the USSR with war...
http://www.studymode.com/essays/Cold-War-1397750.html
4
The United States Geological Survey (USGS) reports that ice shelves are retreating in the southern section of the Antarctic Peninsula due to climate change. The disappearing ice could lead to sea-level rise if warming continues, threatening coastal communities and low-lying islands worldwide. Every ice front in the southern part of the Antarctic Peninsula has been retreating overall from 1947 to 2009, according to the USGS, with the most dramatic changes occurring since 1990. Previously documented evidence indicates that the majority of ice fronts on the entire Peninsula have also retreated during the late 20th century and into the early 21st century. The ice shelves are attached to the continent and already floating, holding in place the Antarctic ice sheet that covers about 98 percent of the Antarctic continent. As the ice shelves break off, it is easier for outlet glaciers and ice streams from the ice sheet to flow into the sea. The transition of that ice from land to the ocean is what raises sea level. The Peninsula is one of Antarctica's most rapidly changing areas because it is farthest away from the South Pole, and its ice shelf loss may be a forecast of changes in other parts of Antarctica and the world if warming continues. Retreat along the southern part of the Peninsula is of particular interest because that area has the Peninsula's coolest temperatures, demonstrating that global warming is affecting the entire length of the Peninsula. The Antarctic Peninsula's southern section as described in this study contains five major ice shelves: Wilkins, George VI, Bach, Stange and the southern portion of Larsen Ice Shelf. The ice lost since 1998 from the Wilkins Ice Shelf alone totals more than 4,000 square kilometers, an area larger than the state of Rhode Island. "This research is part of a larger ongoing USGS project that is for the first time studying the entire Antarctic coastline in detail, and this is important because the Antarctic ice sheet contains 91 percent of Earth's glacier ice," said USGS scientist Jane Ferrigno. "The loss of ice shelves is evidence of the effects of global warming. We need to be alert and continually understand and observe how our climate system is changing." Citation: Ferrigno et al., 'Coastal-Change and Glaciological Map of the Palmer Land Area, Antarctica: 1947—2009', USGS, February 2010
http://www.science20.com/news_articles/ice_shelves_retreating_antarctic_peninsula
4.03125
Primary succession is one of two types of biological and ecological succession of plant life, occurring in an environment in which new substrate, devoid of vegetation and other organisms and usually lacking soil, such as a lava flow or an area left by a retreated glacier, is deposited. In other words, it is the gradual growth of an ecosystem over a long period. In contrast, secondary succession occurs on substrate that previously supported vegetation before a smaller-scale ecological disturbance, such as a flood, hurricane, tornado, or fire, destroyed the plant life. In primary succession, pioneer species like lichens, algae and fungi, as well as abiotic factors like wind and water, start to "normalize" the habitat. Primary succession begins on rock formations, such as volcanoes or mountains, or in a place with no organisms or soil. This creates conditions nearer optimum for vascular plant growth; pedogenesis, or the formation of soil, is the most important process. These pioneer plants are then dominated and often replaced by plants better adapted to less harsh conditions; these plants include vascular plants like grasses and some shrubs that are able to live in thin soils that are often mineral based. For example, spores of lichen or fungus, being the pioneer species, are spread onto a land of rocks. Then, the rocks are broken down into smaller pieces and organic matter gradually accumulates, favouring the growth of larger plants like grasses, ferns and herbs. These plants further improve the habitat and help the establishment of larger vascular plants like shrubs, or even medium- or large-sized trees. More animals are then attracted to the place and finally a climax community is reached. A good example of primary succession takes place after a volcano has erupted. The lava flows into the ocean and hardens into new land. The resulting barren land is first colonized by pioneer plants which pave the way for later, less hardy plants, such as hardwood trees, by facilitating pedogenesis, especially through the biotic acceleration of weathering and the addition of organic debris to the surface regolith. An example of primary succession is the island of Surtsey, an island formed in 1963 after a volcanic eruption from beneath the sea. Surtsey is off the south coast of Iceland and is being monitored to observe primary succession in progress. About thirty species of plant had become established by 2008 and more species continue to arrive, at a typical rate of roughly 2–5 new species per year.
https://en.wikipedia.org/wiki/Primary_succession
4
NBC Learn, Windows to the Universe. Note: you may need to scroll down the Changing Planet video page to get to this video. Video length: 6:21 min. See how this video supports the Next Generation Science Standards: Middle School: 6 Disciplinary Core Ideas; High School: 3 Disciplinary Core Ideas. The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Notes from our reviewers:
Teaching Tips
- The video can be enlarged to eliminate the visual impact of the text and images surrounding the video.
About the Science
- The video documents impacts of thermal expansion and the melting of ice sheets and glaciers on coastal communities.
- The video also shows the ways that sea level rise is documented, including the use of sediment core data and satellite data to measure the rate of sea level rise in the past.
- The video also describes a laboratory model that examines how warming ocean currents reach Antarctica and contribute to the melting of ice sheets.
- Projections of sea level rise in the coming decades are a highly dynamic field of research. Estimates change as research progresses, and teachers need to be aware of this.
- Comments from expert scientist: includes interviews with two leading scientists in the field and provides visuals of the sources of the data (sediment cores, satellite measurements, physical models, computer models).
About the Pedagogy
- This video can be embedded in a lesson or activity that explores issues of ocean circulation, the melting of continental ice sheets, and how they impact sea level rise.
Next Generation Science Standards. See how this Video supports: Disciplinary Core Ideas: 6 MS-ESS2.C1:Water continually cycles among land, ocean, and atmosphere via transpiration, evaporation, condensation and crystallization, and precipitation, as well as downhill flows on land. MS-ESS2.C2:The complex patterns of the changes and the movement of water in the atmosphere, determined by winds, landforms, and ocean temperatures and currents, are major determinants of local weather patterns. MS-ESS2.C3:Global movements of water and its changes in form are propelled by sunlight and gravity. MS-ESS2.C4:Variations in density due to variations in temperature and salinity drive a global pattern of interconnected ocean currents. MS-ESS2.D1:Weather and climate are influenced by interactions involving sunlight, the ocean, the atmosphere, ice, landforms, and living things. These interactions vary with latitude, altitude, and local and regional geography, all of which can affect oceanic and atmospheric flow patterns. MS-ESS3.D1:Human activities, such as the release of greenhouse gases from burning fossil fuels, are major factors in the current rise in Earth’s mean surface temperature (global warming). Reducing the level of climate change and reducing human vulnerability to whatever climate changes do occur depend on the understanding of climate science, engineering capabilities, and other kinds of knowledge, such as understanding of human behavior and on applying that knowledge wisely in decisions and activities.
Disciplinary Core Ideas: 3 HS-ESS2.C1:The abundance of liquid water on Earth’s surface and its unique combination of physical and chemical properties are central to the planet’s dynamics. These properties include water’s exceptional capacity to absorb, store, and release large amounts of energy, transmit sunlight, expand upon freezing, dissolve and transport materials, and lower the viscosities and melting points of rocks. HS-ESS2.D1:The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space. HS-ESS3.D1:Though the magnitudes of human impacts are greater than they have ever been, so too are human abilities to model, predict, and manage current and future impacts.
http://cleanet.org/resources/42953.html
4.125
May 21, 2008 Phoenix Mission Science & Technology
Mars is a cold desert planet with no liquid water on its surface. But in the Martian arctic, water ice lurks just below ground level. Discoveries made by the Mars Odyssey Orbiter in 2002 show large amounts of subsurface water ice in the northern arctic plain. The Phoenix lander targets this circumpolar region, using a robotic arm to dig through the protective topsoil layer to the water ice below and, ultimately, to bring both soil and water ice to the lander platform for sophisticated scientific analysis. The Phoenix spacecraft and its complement of scientific instruments are ideally suited to uncover clues to the geologic history and biological potential of the Martian arctic. Phoenix will be the first mission to return data from either polar region, providing an important contribution to the overall Mars science strategy "Follow the Water," and will be instrumental in achieving the four science goals of NASA's long-term Mars Exploration Program:
- Determine whether Life ever arose on Mars
- Characterize the Climate of Mars
- Characterize the Geology of Mars
- Prepare for Human Exploration
The Phoenix Mission has two bold objectives to support these goals: (1) study the history of water in the Martian arctic, and (2) search for evidence of a habitable zone and assess the biological potential of the ice-soil boundary.
Objective 1: Study the History of Water in All its Phases
Currently, water on Mars' surface and in its atmosphere exists in two states: gas and solid. At the poles, the interaction between the solid water ice at and just below the surface and the gaseous water vapor in the atmosphere is believed to be critical to the weather and climate of Mars. Phoenix will be the first mission to collect meteorological data in the Martian arctic needed by scientists to accurately model Mars' past climate and predict future weather processes. Liquid water does not currently exist on the surface of Mars, but evidence from the Mars Global Surveyor, Odyssey and Exploration Rover missions suggests that water once flowed in canyons and persisted in shallow lakes billions of years ago. However, Phoenix will probe the history of liquid water that may have existed in the arctic as recently as 100,000 years ago. Scientists will better understand the history of the Martian arctic after analyzing the chemistry and mineralogy of the soil and ice using robust instruments.
Objective 2: Search for Evidence of a Habitable Zone and Assess the Biological Potential of the Ice-Soil Boundary
Recent discoveries have shown that life can exist in the most extreme conditions. Indeed, it is possible that bacterial spores can lie dormant in bitterly cold, dry, and airless conditions for millions of years and become activated once conditions become favorable. Such dormant microbial colonies may exist in the Martian arctic, where, due to the periodic wobbling of the planet, liquid water may exist for brief periods about every 100,000 years, making the soil environment habitable. Phoenix will assess the habitability of the Martian northern environment by using sophisticated chemical experiments to assess the soil's composition of life-giving elements such as carbon, nitrogen, phosphorus, and hydrogen. Phoenix will also use chemical analysis to look at reduction-oxidation (redox) molecular pairs that may determine whether the potential chemical energy of the soil can sustain life, as well as other soil properties critical to determining habitability, such as pH and saltiness.
Despite having the proper ingredients to sustain life, the Martian soil may also contain hazards that prevent biological growth, such as powerful oxidants that break apart organic molecules. Powerful oxidants that can break apart organic molecules are expected in dry environments bathed in UV light, such as the surface of Mars. But a few inches below the surface, the soil could protect organisms from the harmful solar radiation. Phoenix will dig deep enough into the soil to analyze the soil environment potentially protected from UV, looking for organic signatures and assessing potential habitability.
NASA Science Goals
Phoenix seeks to verify the presence of the Martian Holy Grail: water and habitable conditions. In doing so, the mission strongly complements the four goals of NASA's Mars Exploration Program.
Goal 1: Determine whether life ever arose on Mars
Continuing the Viking missions' quest, but in an environment known to be water-rich, Phoenix searches for signatures of life at the soil-ice interface just below the Martian surface. Phoenix will land in the arctic plains, where its robotic arm will dig through the dry soil to reach the ice layer, bring the soil and ice samples to the lander platform, and analyze these samples using advanced scientific instruments. These samples may hold the key to understanding whether the Martian arctic is a habitable zone where microbes could grow and reproduce during moist conditions.
Goal 2: Characterize the climate of Mars
Phoenix will land during the retreat of the Martian polar cap, when cold soil is first exposed to sunlight after a long winter. The interaction between the ground surface and the Martian atmosphere that occurs at this time is critical to understanding the present and past climate of Mars. To gather data about this interaction and other surface meteorological conditions, Phoenix will provide the first weather station in the Martian polar region, with no others currently planned. Data from this station will have a significant impact on improving global climate models of Mars.
Goal 3: Characterize the geology of Mars
As on Earth, the past history of water is written below the surface because liquid water changes the soil chemistry and mineralogy in definite ways. Phoenix will use a suite of chemistry experiments to thoroughly analyze the soil's chemistry and mineralogy. Some scientists speculate the landing site for Phoenix may have been a deep ocean in the planet's distant past, leaving evidence of sedimentation. If fine sediments of mud and silt are found at the site, it may support the hypothesis of an ancient ocean. Alternatively, coarse sediments of sand might indicate past flowing water, especially if these grains are rounded and well sorted. Using the first true microscope on Mars, Phoenix will examine the structure of these grains to better answer these questions about water's influence on the geology of Mars.
Goal 4: Prepare for human exploration
The Phoenix Mission will provide evidence of water ice and assess the soil chemistry in the Martian arctic. Water will be a critical resource for future human explorers, and Phoenix may provide appreciable information on how water may be acquired on the planet. Understanding the soil chemistry will provide an understanding of the potential resources available to human explorers in the northern plains.
Phoenix's Robotic Arm (RA) is the single most crucial element in making scientific measurements.
The robotic arm combines strength and finesse to dig trenches, scrape water ice, and precisely deliver samples to other instruments on the science deck. The robotic arm also carries a camera and a thermal-electric probe to make measurements directly in the trench. Phoenix's science objectives are tied to specific scientific measurements, each made by one of the following instruments:
- SSI = Surface Stereo Imager
- RAC = Robotic Arm Camera
- MARDI = Mars Descent Imager
- TEGA = Thermal and Evolved Gas Analyzer
- MECA = Microscopy, Electrochemistry, and Conductivity Analyzer
- WC = Wet Chemistry Experiment
- M = Microscopy, including the Optical Microscope and the Atomic Force Microscope
- TECP = Thermal and Electrical Conductivity Probe
- MET = Meteorological Station
http://www.redorbit.com/news/space/1395925/phoenix_mission_science__technology/
4.03125
From anyplace on Earth, the clearest thing in the night sky is usually the moon, Earth's only natural satellite and the nearest celestial object (240,250 miles or 384,400 km away). Ancient cultures revered the moon. It represented gods and goddesses in various mythologies -- the ancient Greeks called it "Artemis" and "Selene," while the Romans referred to it as "Luna." When early astronomers looked at the moon, they saw dark spots that they believed were seas (maria) and lighter regions that they believed were land (terrae). Aristotle's view, which was the accepted theory at the time, was that the moon was a perfect sphere and that the Earth was the center of the universe. When Galileo looked at the moon with a telescope, he saw a different image -- a rugged terrain of mountains and craters. He saw how its appearance changed during the month and how the mountains cast shadows that allowed him to calculate their height. Galileo concluded that the moon was much like Earth in that it had mountains, valleys and plains. His observations ultimately contributed to the rejection of Aristotle's ideas and the Earth-centered universe model. Because the moon is so close to the Earth relative to other celestial objects, it's the only one humans have traveled to and set foot upon. In the 1960s, the United States and the Soviet Union were involved in a massive "space race" to land men on the moon. Both countries sent unmanned probes to orbit the moon, photograph it and land on the surface. In July 1969, American astronauts Neil Armstrong and Edwin "Buzz" Aldrin became the first humans to walk on the moon. During six lunar landing missions from 1969 to 1972, a total of 12 American astronauts explored the lunar surface. They made observations, took photographs, set up scientific instruments and brought back 842 pounds (382 kilograms) of moon rocks and dust samples. What did we learn about the moon from these historic journeys? Let's take a closer look at the moon. We'll examine its surface features and learn about its geology, internal structure, phases, formation and influence on the Earth.
What's on the surface of the moon? As we mentioned, the first thing that you'll notice when you look at the moon's surface is the dark and light areas. The dark areas are called maria. There are several prominent maria:
- Mare Tranquilitatis (Sea of Tranquility): where the first astronauts landed
- Mare Imbrium (Sea of Showers): the largest mare (700 miles or 1100 kilometers in diameter)
- Mare Serenitatis (Sea of Serenity)
- Mare Nubium (Sea of Clouds)
- Mare Nectaris (Sea of Nectar)
- Oceanus Procellarum (Ocean of Storms)
The maria cover only 15 percent of the lunar surface. The remainder of the lunar surface consists of the bright highlands, or terrae. Highlands are rough, mountainous, heavily cratered regions. The Apollo astronauts observed that the highlands are generally about 4 to 5 km (2.5 to 3 miles) above the average lunar surface elevation, while the maria are low-lying plains about 2 to 3 km (1.2 to 1.8 miles) below average elevation. These results were confirmed in the 1990s, when the orbiting Clementine probe extensively mapped the lunar surface. The moon is littered with craters, which are formed when meteors hit its surface. They may have central peaks and terraced walls, and material from the impact (ejecta) can be thrown from the crater, forming rays that emanate from it. Craters come in many sizes, and you'll see that the highlands are more densely cratered than the maria.
Another type of impact structure is a multi-ringed basin. These structures were caused by huge impacts that sent shockwaves outward and pushed up mountain ranges. The Orientale Basin is an example of a multi-ringed basin. Besides craters, geologists have noticed cinder cone volcanoes, rilles (channel-like depressions, probably from lava), lava tubes and old lava flows, which indicate that the moon was volcanically active at some point. The moon has no true soil because it has no living matter in it. Instead, the "soil" is called regolith. Astronauts noted that the regolith was a fine powder of rock fragments and volcanic glass particles interspersed with larger rocks. Upon examining the rocks brought back from the lunar surface, geologists found the following characteristics:
- The maria consist primarily of basalt, an igneous rock derived from cooled lava.
- The highland regions include mostly igneous rocks called anorthosite and breccia.
- If you compare the relative ages of the rocks, the highland areas are much older than the maria (4 to 4.3 billion years old versus 3.1 to 3.8 billion years old).
- The lunar rocks have very little water and volatile compounds in them (as if they've been baked) and resemble those found in the Earth's mantle.
- The oxygen isotopes in moon rocks and the Earth are similar, which indicates that the moon and the Earth formed at about the same distance from the sun.
- The density of the moon (3.3 g/cm3) is less than that of the Earth (5.5 g/cm3), which indicates that it doesn't have a substantial iron core.
Astronauts placed other scientific packages on the moon to collect data:
- Seismometers detected only weak moonquakes and no indications of plate tectonic activity (movements in the moon's crust).
- Magnetometers in orbiting spacecraft and probes didn't detect a significant magnetic field around the moon, which indicates that the moon doesn't have a substantial iron core or molten iron core like the Earth does.
Let's look at what all of this information tells us about the formation of the moon.
Giant Impactor Hypothesis
At the time of Project Apollo in the 1960s, there were basically three hypotheses about how the moon formed.
- Double planet (also called the condensation hypothesis): The moon and the Earth formed together at about the same time.
- Capture: The Earth's gravity captured the fully formed moon as it wandered by.
- Fission: The young Earth spun so rapidly on its axis that a blob of molten Earth spun off and formed the moon.
But based on the findings of Apollo and some scientific reasoning, none of these hypotheses worked very well.
- If the moon did form alongside the Earth, the composition of the two bodies should be about the same (they aren't).
- The Earth's gravity isn't sufficient to capture something the size of the moon and keep it in orbit.
- The Earth can't spin fast enough for a blob of material the size of the moon to just spin off.
Because none of these hypotheses was satisfactory, scientists looked for another explanation. In the mid-1970s, scientists proposed a new idea called the Giant Impactor (or Ejected Ring) hypothesis. According to this hypothesis, about 4.45 billion years ago, while the Earth was still forming, a large object (about the size of Mars) hit the Earth at an angle. The impact threw debris into space from the Earth's mantle region and overlying crust. The impactor itself melted and merged with the Earth's interior, and the hot debris coalesced to form the moon.
The Giant Impactor hypothesis explains why the moon rocks have a composition similar to the Earth's mantle, why the moon has no iron core (because the iron in the Earth's core and the impactor's core remained on Earth), and why moon rocks seem to have been baked and have no volatile compounds. Computer simulations have shown that this hypothesis is feasible.
Geologic History of the Moon
Based on analyses of the rocks, crater densities and surface features, geologists came up with the following geologic history of the moon:
- After the impact (about 4.45 billion years ago), the newly formed moon had a huge magma ocean over a solid interior.
- As the magma cooled, iron and magnesium silicates crystallized and sank to the bottom. Plagioclase feldspar crystallized and floated up to form the anorthosite lunar crust.
- Later (about 4 billion years ago), magma rose and infiltrated the lunar crust, where it reacted chemically to form basalt. The magma ocean continued to cool, forming the lithosphere (which is like the material in the Earth's mantle). As the moon lost heat, the asthenosphere (the next layer in) shrank toward the core and the lithosphere became very large. These events led to a model of the moon's interior that is very different from that of the Earth.
- From about 4.6 to 3.9 billion years ago, the moon was intensely bombarded by meteors and other large objects. These impacts modified the lunar crust and gave rise to the large, densely cratered surface in the lunar highlands. Some of these bombardments produced large, multi-ringed basins and mountains.
- When the bombardment ceased, lava flowed from the inside of the moon through volcanoes and cracks in the crust. This lava filled the maria and cooled to become the mare basalts. This period of lunar volcanism lasted from about 3.7 billion years to 2.5 billion years ago. Much of the moon's heat was lost during this period. (Because the moon's crust is slightly thinner on the side that faces the Earth, lava could erupt more easily to fill the maria basins. This explains why there are more maria on the near side of the moon compared to the far side.)
- Once the volcanic period ended, most of the moon's internal heat was gone, so there was no major geologic activity -- meteor impacts have been the only major geologic factor at work on the moon. These impacts have not been as intense as in earlier periods of the moon's history; bombardments have generally been declining throughout the solar system. However, the meteoric bombardment that continues today has produced some large, relatively young craters (like Tycho and Copernicus) and the fine regolith (soil) that covers the lunar surface.
Lunar Behavior
The moon is thought to influence our daily life and moods, possibly even causing odd behavior. In fact, it's the inspiration for the word "lunatic." Werewolf aficionados, of course, know that a full moon triggers terrifying transformations. And hospital and emergency personnel tell of more crimes, accidents and births during a full moon -- but the evidence for this is mostly anecdotal rather than statistical.
Let's look at some of the phenomena involving the moon's orbit. Every night, the moon shows a different face in the night sky. On some nights we can see its entire face, sometimes it's partial, and on others it isn't visible at all. These phases of the moon aren't random -- they change throughout the month in a regular and predictable way. As the moon travels in its 29-day orbit, its position changes daily.
Sometimes it's between the Earth and the sun and sometimes it's behind us. So a different section of the moon's face is lit up by the sun, causing it to show different phases. Over the billions of years of the moon's existence, it has moved farther away from the Earth, and its rate of rotation has also slowed. The moon is tidally locked with the Earth, which means that the Earth's gravity "drags" the moon to rotate on its axis. This is why the moon rotates only once per month and why the same side of the moon always faces the Earth. Every day, the Earth experiences tides, or changes in the level of its oceans. They're caused by the pull of the moon's gravity. There are two high tides and two low tides every day, each lasting about six hours. The moon's gravitational force pulls on water in the oceans and stretches the water out to form tidal bulges in the ocean on the sides of the planet that are in line with the moon. The moon pulls water on the side nearest it, which causes a bulge toward the moon. The moon pulls on the Earth slightly, which drags the Earth away from the water on the opposite side, making another tidal bulge there. So, the areas of the Earth under the bulges experience high tide, while the areas on the thin sides have low tide. As the Earth rotates underneath the elongated bulges, this creates high and low tides about 12 hours apart. The moon also stabilizes the Earth's rotation. As the Earth spins on its axis, it wobbles. The moon's gravitational effect limits the wobble to a small degree. If we had no moon, the Earth might move almost 90 degrees off its axis, with the same motion that a spinning top has as it slows down.
Return to the Moon
Since 1972, no one has set foot on the moon. However, there is a renewed effort for a lunar return. Why? In 1994, the orbiting Clementine probe detected radio reflections from shadowed craters at the moon's South Pole. The reflections were consistent with the presence of ice. Later, the orbiting Lunar Prospector probe detected hydrogen-rich signals from the same area, possibly hydrogen from ice. Where could water on the moon have come from? It was probably carried to the moon by the comets, asteroids and meteors that have impacted the moon over its long history. Water was never detected by the Apollo astronauts because they didn't explore that region of the moon. If there is indeed water on the moon, it could be used to support a lunar base. The water could be split by electrolysis into hydrogen and oxygen -- the oxygen could be used to support life and both gases could be used for rocket fuel. So, a lunar base could be a staging point for future exploration of the solar system (Mars and beyond). Plus, because of the moon's lower gravity, it is cheaper and easier to lift a rocket off of its surface than from Earth. It might be tricky to get back there though, at least for U.S. astronauts. In 2010, President Barack Obama decided to cancel the Constellation program, the intent of which was to get Americans back on the moon by 2020. That means U.S. astronauts may have to hitch a ride with private space companies, which will receive some funding from NASA. Other countries, including Japan and China, are planning to travel to the moon and researching how to build a lunar base using materials from the lunar surface. Various plans call for heading to the moon and establishing possible bases between 2015 and 2035.
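The regular, predictable phase cycle described above lends itself to a quick back-of-the-envelope estimate. The sketch below is not part of the original article and is a rough illustration only: it assumes an average synodic month of 29.53 days, a reference new moon of January 6, 2000, and a simple cosine model for the illuminated fraction rather than a real ephemeris.

```python
from datetime import datetime
from math import cos, pi

SYNODIC_MONTH = 29.53  # average length of the lunar phase cycle, in days
# A commonly used reference new moon (assumed epoch; any known new moon works).
KNOWN_NEW_MOON = datetime(2000, 1, 6, 18, 14)

def moon_phase(when):
    """Return (age_in_days, illuminated_fraction) as a rough estimate."""
    days_since = (when - KNOWN_NEW_MOON).total_seconds() / 86400.0
    age = days_since % SYNODIC_MONTH          # days since the most recent new moon
    # Simple cosine model: 0 at new moon, 1 at full moon.
    illuminated = (1 - cos(2 * pi * age / SYNODIC_MONTH)) / 2
    return age, illuminated

age, frac = moon_phase(datetime(2024, 3, 25))
print(f"Phase age: {age:.1f} days, illuminated: {frac:.0%}")
```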
http://science.howstuffworks.com/moon1.htm/printable
4.375
Learn to use the order of operations to evaluate numerical expressions (a short example follows below).
Numerical Expression Evaluation with Basic Operations Interactive
- Learn new vocabulary words and help remember them by coming up with your own sentences with the new words using a Stop and Jot table.
- Develop understanding of concepts by studying them in a relational manner.
- Analyze and refine the concept by summarizing the main idea, creating visual aids, and generating questions and comments using a Four Square Concept Matrix.
- Discover how order of operations matters not only in math but also in everyday tasks such as laundry.
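As a quick illustration of why the order of operations matters (this example is not part of the CK-12 materials), the same numbers give different results depending on how they are grouped:

```python
# Python follows the standard order of operations:
# parentheses, then exponents, then multiplication/division, then addition/subtraction.
without_parentheses = 2 + 3 * 4 ** 2    # exponent first (16), then 3 * 16 = 48, then 2 + 48 = 50
with_parentheses = (2 + 3) * 4 ** 2     # grouping changes the result: 5 * 16 = 80

print(without_parentheses)  # 50
print(with_parentheses)     # 80
```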
http://www.ck12.org/algebra/Numerical-Expression-Evaluation-with-Basic-Operations/
4.28125
- How to connect linear equations, tables of values, and their graphs.
- How to graph a line.
- How to write a function to describe a table of values (a minimal sketch follows this list).
- How to write an equation to describe a set of pictures.
- How to determine what form an equation is written in.
- Short, helpful video on ACT Geometry by top ACT prep instructor, Devorah. Videos are produced by leading online education provider, Brightstorm.
- How we identify the behavior of a polynomial graph near an x-intercept.
- How to label the roots of a quadratic polynomial, solutions to a quadratic equation, and x-intercepts or roots of a quadratic function.
- Equations and slopes of horizontal and vertical lines.
- How to determine the derivative of a linear function.
- How we identify the equation of a polynomial function when we are given the intercepts of its graph.
- How to graph a quadratic equation by hand.
- How to calculate and interpret the discriminant of a quadratic equation.
- How to graph the reciprocal of a linear function.
- How to find the angle of inclination of a line.
- How to tell if two variables vary directly.
- How to prove that an angle inscribed in a semicircle is a right angle; how to solve for arcs and angles formed by a chord drawn to a point of tangency.
- How to identify the graph of a stretched cosine curve.
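One of the topics above is writing a function to describe a table of values. A minimal sketch of that idea, assuming the table describes a non-vertical line, is to compute the slope and intercept from two of its points (the helper below is hypothetical, not Brightstorm's material):

```python
def line_from_points(p1, p2):
    """Return (slope, intercept) of the line y = m*x + b through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # rise over run (assumes x1 != x2, i.e. not a vertical line)
    b = y1 - m * x1             # solve y1 = m*x1 + b for b
    return m, b

# Table of values containing (0, 3) and (2, 7) lies on y = 2x + 3
m, b = line_from_points((0, 3), (2, 7))
print(f"y = {m}x + {b}")        # y = 2.0x + 3.0
```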
https://www.brightstorm.com/tag/slope-intercept/page/2
4.0625
It was recently reported that scientists were able to overcome some of the problems with data degradation caused by computing in a quantum environment, and now Nature reports that physicists have built the first-ever working quantum network. The fibre optic network is still in its infancy; researchers reported a mere 0.2 percent accuracy in the data that had been transferred. Still, the experiment has proven that quantum networks are possible. A quantum computer makes direct use of quantum mechanical phenomena to perform operations on data, and could solve specific problems much faster than any traditional, transistor-based computer. The problem with quantum computing has been errors in computation. A classical computer understands data as bits, which can have a value of either 1 or 0. Qubits, on the other hand, can have a value of 1, 0 or both simultaneously, which is known as superposition, and allows quantum computers to conduct millions of calculations at once. But there are errors, known as quantum decoherence, caused by things like heat, electromagnetic radiation and defective materials. German physicists from the Max Planck Institute of Quantum Optics built the network, which links single, data-carrying rubidium atoms through optical fiber: each atom emits a single photon, and the photon carries the atom's quantum state in its polarization as it travels down the fiber, or at least it is supposed to, hence the resulting 0.2% data transmission accuracy. Quantum networking relies on the coordinated behavior of atoms and photons, and it has been difficult to keep them aligned in a single environment. Once researchers have this step sorted out, it is at least proven that a quantum network can exist.
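The article's description of a qubit as holding 0, 1, or both at once can be made concrete with a tiny state-vector calculation. The sketch below is purely illustrative and unrelated to the Max Planck experiment itself; it assumes NumPy is available and uses the standard Hadamard gate to put a qubit into an equal superposition:

```python
import numpy as np

# A qubit state is a length-2 complex vector; |0> = [1, 0], |1> = [0, 1].
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate turns a definite |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0                      # state after the gate: (|0> + |1>) / sqrt(2)
probabilities = np.abs(psi) ** 2    # Born rule: measurement probabilities

print(psi)            # [0.70710678+0.j 0.70710678+0.j]
print(probabilities)  # [0.5 0.5] -- equal chance of measuring 0 or 1
```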
http://www.webpronews.com/german-physicists-build-first-quantum-network-2012-04/
4.1875
When The Earth Moved Nicholas Copernicus Changed The World
The Father of Modern Astronomy
February 2003 marked the 460th anniversary of the publication of De Revolutionibus Orbium Coelestium (On the Revolution of Heavenly Spheres), a manuscript that changed the world. Written by the Polish astronomer Nicholas Copernicus and printed in 1543, De Revolutionibus established, for the first time in history, the correct position of the sun among the planets. The book’s findings not only formed the base for astronomers of the future, it inaugurated the great era of theoretical formulation. It is rightfully considered by some to have caused the greatest revolution in science and thought in the last two thousand years. Copernicus put an end to the belief that the earth was the center of the universe, and degraded the earth to a relatively unimportant tributary of the sun. The sun, said Copernicus, was the center of the planetary system, and instead of being stationary, the earth revolved around the sun in the course of a year while rotating once every twenty-four hours about its axis. The book, therefore, also challenged the long-standing belief that the earth was the heavenly center of the universe. The repercussions of this interpretation were magnificent. Copernicus himself originally gave credit to Aristarchus of Samos when he wrote, "Philolaus believed in the mobility of the earth, and some even say that Aristarchus of Samos was of that opinion." Interestingly, this passage was crossed out shortly before publication, maybe because Copernicus decided his treatise would stand on its own merit.
16th Century Renaissance Man
The Renaissance resulted in great achievements in esthetic and literary interests, but advances in science moved slowly during the period. The era, however, opened a door for individuals—like Copernicus—to express beliefs they found contrary to what was accepted. Their views most often placed these great thinkers at odds with the Church. Fearful of being labeled heretics, many kept their ideas to themselves or within a close circle of friends. Embracing the views of alleged heretics also placed one out of favor with the church, making support for radical ideas hard to come by. Decades after Copernicus’ death, the great astronomer Galileo was forced to disavow his Copernican beliefs to avoid excommunication. The philosopher Giordano Bruno, a Dominican friar greatly influenced by Copernicus, was hunted by the Inquisition and perished in Rome at the stake.
He Moved the Earth
Until Copernicus, the teachings of the Greek astronomer Ptolemy were considered the gospel truth. Ptolemy, who lived in Alexandria in the second century after Christ, taught that the earth was round and calculated its circumference at an astonishingly close approximation to the true figure. The Ptolemaic system, however, taught that the earth was the stationary center of the universe, and the sun, moon, planets and stars revolved around it. Because the Ptolemaic system enjoyed the endorsement not only of scholars, but also of the church, Copernicus, in fear of trial for heresy, long hesitated to announce his heliocentric view. The fear instilled by the Church made it understandable why Copernicus’ teachings were not greatly noticed at first, and filtered very slowly into the European consciousness. This chilly reception of the correction of an established erroneous theory proved that scientific investigation was a threat to authority.
It was hard to imagine a church canon as a threat to authority, especially since he was dead by the time his teachings served as inspiration to other astronomers and scientists.
Initiated Great Thinking
Because his presentation of De Revolutionibus lacked both observational data and mathematical underpinning (at that time, the gathering of meager data was not part of the scientific system, nor was the practice of justifying laws with countless mathematical proofs)—yet with models and proofs so convincing—it sparked future astronomers and mathematicians to justify his findings, and in effect served as a catalyst for the great inventions and theories of centuries to come. In putting out his theory of the ordered movement of the planets around the sun, Copernicus stimulated investigation into the whole body of phenomena connected with matter in motion. These researches, conducted by many scholars, among them Kepler, Galileo, and Newton, the most shining names, culminated in the theory of gravitation and the recognition of an eternally established, majestic universe of law. Hand-in-hand with these brilliant physico-astronomical discoveries went the development of mathematics. Mathematics reached its eighteenth-century culmination with the invention of the calculus by Newton and Leibniz. It was calculus that made possible the complicated measurements demanded by the study of moving objects, and it was in mathematical terms that the laws of motion—not only of solid bodies, but also of such physical phenomena as sound, heat, and light—were stated.
Nicholas Copernicus (Mikolaj Kopernik) was born in Torun on February 19, 1473 of a well-to-do merchant family. He attended St. John’s School in Torun. He studied at the University of Krakow from 1491 to 1495, and from 1496 to 1503, he studied at the Universities of Bologna and Padua. At the University of Krakow, then famous for its mathematics and astronomy, he discovered several contradictions in the system then used for calculating the movements of celestial bodies. At the University of Bologna, where he studied canon law, he advanced his theory that the moon was a satellite of the earth. At Padua, he studied medicine. He became fascinated by celestial motion and observed this phenomenon with his naked eye. He then began drawing the positions of the constellations and planets to support his theory. His uncle Lucas, the Bishop of Varmia, appointed Copernicus a canon of the Church, which provided Copernicus a stipend to study medicine and science. He held the position as a canon of the Chapter of Varmia in Frombork, a little town in the north of Poland, from 1510 until his death in 1543. There he led a busy administrative life which included the organization of armed resistance against provocations by neighboring Teutonic Knights. His position allowed him to spend most of his time working out his theory. He made astronomical observations using very simple wooden instruments with no lenses (the telescope had not yet been invented). About 1515, he earnestly began to compile data, and he wrote a short report on his theory which he circulated among astronomers. The first words of the text supplied the title, Commentariolus (Commentary). It took him many years to give the final form to his principal work on the detailed theory of motions in a heliocentric system. In 1543, he published De Revolutionibus. The book was dedicated to Pope Paul III. The published theory reached him on his death bed, although some accounts say he never saw the printed work.
It is believed he died several hours after seeing the printed copy. The citizens of Torun, his birthplace, erected a monument in front of the city hall with the following dedication: "Nicholas Copernicus, A Torunian Moved the Earth; Stopped the Sun." In 1945, the Nicholas Copernicus University was organized in Torun. In 1973, the 500th anniversary of his birth was aptly observed by all higher institutions of learning, astronomical observatories, historians, mathematicians, scientists, and biographers. Musical compositions were inspired by his life, and seventy nations throughout the world issued commemorative postage stamps honoring the Polish genius. Copernicus is buried in Frombork Castle.
Forever in Stone
Throughout the world, there are many monuments, observatories, and buildings named after the famous astronomer. Here are a few of the more notable ones:
• In Torun, Poland, his birthplace, a University named in his honor was organized in 1945. A monument to Copernicus stands in front of the town hall.
• A statue in front of the College of Physics in the Planty gardens in Krakow, Poland, shows Copernicus as a young student.
• A statue of Copernicus by the Danish sculptor Bertel Thorvaldsen stands in downtown Warsaw.
• The Copernicus Foundation of Chicago was established in 1971. In 1980, the foundation renovated an old theater (complete with a replica of the Royal Castle Clock Tower in Warsaw) and opened The Copernicus Cultural and Civic Center.
• Also in Chicago is one of the most readily recognizable statues of Copernicus. It is located on Solidarity Drive.
• An orbiting space laboratory named after Copernicus is on display at the Air and Space Museum at Independence and 6th St. in Washington, D.C.
• At the main entrance to the Dag Hammarskjold Library at the United Nations Building is a large bronze head of Copernicus. It was sculpted by Alfons Karny and was a gift to the U.N. from Poland in 1970. Karny is one of the world’s greatest sculptors of the 20th century. The bust is on permanent display.
• The Kopernik Polish Cultural Center, located in the Polish Community Center in Utica, N.Y., is operated by the Kopernik Polish Cultural Center Committee of the Kopernik Memorial Association of Central New York. It contains a fine permanent collection of Polish works of art, books, artifacts and videotapes.
• In 1973, Central New York’s Polonia formed the Kopernik Society to commemorate the 500th anniversary of the birth of Copernicus. The society raised money to build the Kopernik Observatory in Vestal, New York, the only one built in the 20th century without support from major donors and government funding. The Kopernik Observatory is next to the planetarium complex at the Roberson Center, making it one of the best public astronomy facilities in the Northeast. Currently, the Kopernik Space Education Center project is underway to expand the original facility.
• A memorial to Kopernik stands in Fairmont Park in Philadelphia. It was erected under the auspices of the Philadelphia Polish Heritage Society.
• The Copernicus Society of America (CSA), established by Edward Piszek, has played a major role in the promotion of Polish heritage in the United States. In 1977, Piszek and the Copernicus Society were instrumental in enabling Fort Ticonderoga to purchase Mount Defiance, near the historic Revolutionary War fortress at which Tadeusz Kosciuszko played a critical role in 1777 to halt the British advance up Lake Champlain.
Fort Ticonderoga was the "location" for filming a PBS special on Kosciuszko, a bicentennial project sponsored by the Copernicus Society and the Reader’s Digest Foundation. Most recently, the Copernicus Society was the motivating force behind W.S. Kuniczak’s translation of the Henryk Sienkiewicz "Trilogy."
• A copy of the Warsaw, Poland statue of the Polish astronomer is in the square by the Dow Planetarium in Montreal, Canada.
Copernicus In Lore
Like many true heroes, Copernicus is the subject of many folk tales. One, recently retold in Polish Folk Legends by Florence Waszkelewicz-Clowes, tells of a meeting between Copernicus and the legendary magicians Dr. George Faust and Pan Twardowski. Whether or not these real men ever met in the context of the story is speculation, but the tale of their meeting at Pukier Tavern is a popular legend in Poland.
The first edition copy of Copernicus’ "De Revolutionibus Orbium Coelestium" is on permanent display on the main floor of the Library of Congress in Washington, D.C.
http://www.polamjournal.com/Library/Biographies/copernicus/copernicus.html
4.03125
The Fermi level is the total chemical potential for electrons (or electrochemical potential for electrons) and is usually denoted by µ or EF. The Fermi level of a body is a thermodynamic quantity, and its significance is the thermodynamic work required to add one electron to the body (not counting the work required to remove the electron from wherever it came from). A precise understanding of the Fermi level—how it relates to electronic band structure in determining electronic properties, how it relates to the voltage and flow of charge in an electronic circuit—is essential to an understanding of solid-state physics. In a band structure picture, the Fermi level can be considered to be a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a 50% probability of being occupied at any given time. The Fermi level does not necessarily correspond to an actual energy level (in an insulator the Fermi level lies in the band gap), nor does it require the existence of a band structure. Nonetheless, the Fermi level is a precisely defined thermodynamic quantity, and differences in Fermi level can be measured simply with a voltmeter.
- 1 The Fermi level and voltage
- 2 The Fermi level and band structure
- 3 The Fermi level and temperature out of equilibrium
- 4 Technicalities
- 5 Footnotes and references
The Fermi level and voltage
Sometimes it is said that electric currents are driven by differences in electrostatic potential (Galvani potential), but this is not exactly true. As a counterexample, multi-material devices such as p–n junctions contain internal electrostatic potential differences at equilibrium, yet without any accompanying net current; if a voltmeter is attached to the junction, one simply measures zero volts. Clearly, the electrostatic potential is not the only factor influencing the flow of charge in a material—Pauli repulsion, carrier concentration gradients, and thermal effects also play an important role. In fact, the quantity called "voltage" as measured in an electronic circuit has a simple relationship to the chemical potential for electrons (Fermi level). When the leads of a voltmeter are attached to two points in a circuit, the displayed voltage is a measure of the total work that can be obtained, per unit charge, by allowing a tiny amount of charge to flow from one point to the other. If a simple wire is connected between two points of differing voltage (forming a short circuit), current will flow from positive to negative voltage, converting the available work into heat. The Fermi level of a body expresses the work required to add an electron to it, or equally the work obtained by removing an electron. Therefore, the observed difference (V_A − V_B) in voltage between two points "A" and "B" in an electronic circuit is exactly related to the corresponding chemical potential difference (µ_A − µ_B) in Fermi level by the formula
V_A − V_B = −(µ_A − µ_B)/e,
where −e is the electron charge. From the above discussion it can be seen that electrons will move from a body of high µ (low voltage) to low µ (high voltage) if a simple path is provided. This flow of electrons will cause the lower µ to increase (due to charging or other repulsion effects) and likewise cause the higher µ to decrease. Eventually, µ will settle down to the same value in both bodies. This leads to an important fact regarding the equilibrium (off) state of an electronic circuit:
- An electronic circuit in thermodynamic equilibrium will have a constant Fermi level throughout its connected parts.
This also means that the voltage (measured with a voltmeter) between any two points will be zero, at equilibrium. Note that thermodynamic equilibrium here requires that the circuit be internally connected and not contain any batteries or other power sources, nor any variations in temperature.
The Fermi level and band structure
In the band theory of solids, electrons are considered to occupy a series of bands composed of single-particle energy eigenstates each labelled by ϵ. Although this single particle picture is an approximation, it greatly simplifies the understanding of electronic behaviour and it generally provides correct results when applied correctly. The Fermi–Dirac distribution gives the probability that (at thermodynamic equilibrium) an electron will occupy a state having energy ϵ. Alternatively, it gives the average number of electrons that will occupy that state given the restriction imposed by the Pauli exclusion principle:
f(ϵ) = 1/(exp((ϵ − µ)/kT) + 1),
where k is the Boltzmann constant and T is the absolute temperature. The location of µ within a material's band structure is important in determining the electrical behaviour of the material.
- In an insulator, µ lies within a large band gap, far away from any states that are able to carry current.
- In a metal, semimetal or degenerate semiconductor, µ lies within a delocalized band. A large number of states nearby µ are thermally active and readily carry current.
- In an intrinsic or lightly doped semiconductor, µ is close enough to a band edge that there are a dilute number of thermally excited carriers residing near that band edge.
In semiconductors and semimetals the position of µ relative to the band structure can usually be controlled to a significant degree by doping or gating. These controls do not change µ, which is fixed by the electrodes, but rather they cause the entire band structure to shift up and down (sometimes also changing the band structure's shape). For further information about the Fermi levels of semiconductors, see (for example) Sze.
Local conduction band referencing, internal chemical potential, and the parameter ζ
If the symbol ℰ is used to denote an electron energy level measured relative to the energy of the edge of its enclosing band, ϵC, then in general we have ℰ = ϵ – ϵC, and in particular we can define the parameter ζ by referencing the Fermi level to the band edge:
ζ = µ − ϵC.
It follows that the Fermi–Dirac distribution function can also be written
f(ℰ) = 1/(exp((ℰ − ζ)/kT) + 1).
The band theory of metals was initially developed by Sommerfeld, from 1927 onwards, who paid great attention to the underlying thermodynamics and statistical mechanics. Confusingly, in some contexts the band-referenced quantity ζ may be called the "Fermi level", "chemical potential" or "electrochemical potential", leading to ambiguity with the globally-referenced Fermi level. In this article the terms "conduction-band referenced Fermi level" or "internal chemical potential" are used to refer to ζ. ζ is directly related to the number of active charge carriers as well as their typical kinetic energy, and hence it is directly involved in determining the local properties of the material (such as electrical conductivity). For this reason it is common to focus on the value of ζ when concentrating on the properties of electrons in a single, homogeneous conductive material. By analogy to the energy states of a free electron, the ℰ of a state is the kinetic energy of that state and ϵC is its potential energy. With this in mind, the parameter ζ could also be labelled the "Fermi kinetic energy".
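A small numerical check of the Fermi–Dirac occupation quoted above makes the 50% property at ϵ = µ concrete. The sketch below is illustrative only; the Fermi level of 5 eV and the room-temperature value are assumed example numbers, not taken from the article.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_eV, mu_eV, T_K):
    """Occupation probability f(eps) = 1 / (exp((eps - mu)/kT) + 1)."""
    return 1.0 / (np.exp((energy_eV - mu_eV) / (k_B * T_K)) + 1.0)

mu = 5.0   # illustrative Fermi level, eV
T = 300.0  # room temperature, K

for eps in (mu - 0.2, mu, mu + 0.2):
    print(f"eps = {eps:.1f} eV  ->  f = {fermi_dirac(eps, mu, T):.4f}")
# At eps = mu the occupation is exactly 0.5; a few tenths of an eV away it
# saturates toward 1 (below mu) or 0 (above mu) at room temperature.
```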
Unlike µ, the parameter ζ is not a constant at equilibrium, but rather varies from location to location in a material due to variations in ϵC, which is determined by factors such as material quality and impurities/dopants. Near the surface of a semiconductor or semimetal, ζ can be strongly controlled by externally applied electric fields, as is done in a field effect transistor. In a multi-band material, ζ may even take on multiple values in a single location. For example, in a piece of aluminum metal there are two conduction bands crossing the Fermi level (even more bands in other materials); each band has a different edge energy ϵC and a different value of ζ.
The Fermi level and temperature out of equilibrium
The Fermi level μ and temperature T are well defined constants for a solid-state device in a thermodynamic equilibrium situation, such as when it is sitting on the shelf doing nothing. When the device is brought out of equilibrium and put into use, then strictly speaking the Fermi level and temperature are no longer well defined. Fortunately, it is often possible to define a quasi-Fermi level and quasi-temperature for a given location that accurately describe the occupation of states in terms of a thermal distribution. The device is said to be in 'quasi-equilibrium' when and where such a description is possible. The quasi-equilibrium approach allows one to build a simple picture of some non-equilibrium effects, such as the electrical conductivity of a piece of metal (as resulting from a gradient in μ) or its thermal conductivity (as resulting from a gradient in T). The quasi-μ and quasi-T can vary (or not exist at all) in any non-equilibrium situation, such as:
- if the system contains a chemical imbalance (as in a battery);
- if the system is exposed to changing electromagnetic fields (as in capacitors, inductors, and transformers);
- under illumination from a light source with a different temperature, such as the sun (as in solar cells);
- when the temperature is not constant within the device (as in thermocouples);
- when the device has been altered, but has not had enough time to re-equilibrate (as in piezoelectric or pyroelectric substances).
In some situations, such as immediately after a material experiences a high-energy laser pulse, the electron distribution cannot be described by any thermal distribution. One cannot define the quasi-Fermi level or quasi-temperature in this case; the electrons are simply said to be "non-thermalized". In less dramatic situations, such as in a solar cell under constant illumination, a quasi-equilibrium description may be possible but requires assigning distinct values of μ and T to different bands (conduction band vs. valence band). Even then, the values of μ and T may jump discontinuously across a material interface (e.g., p–n junction) when a current is being driven, and be ill-defined at the interface itself.
Technicalities
The term "Fermi level" is mainly used in discussing the solid state physics of electrons in semiconductors, and a precise usage of this term is necessary to describe band diagrams in devices comprising different materials with different levels of doping. In these contexts, however, one may also see Fermi level used imprecisely to refer to the band-referenced Fermi level µ − ϵC, called ζ above. It is common to see scientists and engineers refer to "controlling", "pinning", or "tuning" the Fermi level inside a conductor, when they are in fact describing changes in ϵC due to doping or the field effect.
In fact, thermodynamic equilibrium guarantees that the Fermi level in a conductor is always fixed to be exactly equal to the Fermi level of the electrodes; only the band structure (not the Fermi level) can be changed by doping or the field effect (see also band diagram). A similar ambiguity exists between the terms "chemical potential" and "electrochemical potential". It is also important to note that Fermi level is not necessarily the same thing as Fermi energy. In the wider context of quantum mechanics, the term Fermi energy usually refers to the maximum kinetic energy of a fermion in an idealized non-interacting, disorder free, zero temperature Fermi gas. This concept is very theoretical (there is no such thing as a non-interacting Fermi gas, and zero temperature is impossible to achieve). However, it finds some use in approximately describing white dwarfs, neutron stars, atomic nuclei, and electrons in a metal. On the other hand, in the fields of semiconductor physics and engineering, "Fermi energy" often is used to refer to the Fermi level described in this article. Fermi level referencing and the location of zero Fermi level Much like the choice of origin in a coordinate system, the zero point of energy can be defined arbitrarily. Observable phenomena only depend on energy differences. When comparing distinct bodies, however, it is important that they all be consistent in their choice of the location of zero energy, or else nonsensical results will be obtained. It can therefore be helpful to explicitly name a common point to ensure that different components are in agreement. On the other hand, if a reference point is inherently ambiguous (such as "the vacuum", see below) it will instead cause more problems. A practical and well-justified choice of common point is a bulky, physical conductor, such as the electrical ground or earth. Such a conductor can be considered to be in a good thermodynamic equilibrium and so its µ is well defined. It provides a reservoir of charge, so that large numbers of electrons may be added or removed without incurring charging effects. It also has the advantage of being accessible, so that the Fermi level of any other object can be measured simply with a voltmeter. Why it is not advisable to use "the energy in vacuum" as a reference zero In principle, one might consider using the state of a stationary electron in the vacuum as a reference point for energies. This approach is not advisable unless one is careful to define exactly where "the vacuum" is. The problem is that not all points in the vacuum are equivalent. At thermodynamic equilibrium, it is typical for electrical potential differences of order 1 V to exist in the vacuum (Volta potentials). The source of this vacuum potential variation is the variation in work function between the different conducting materials exposed to vacuum. Just outside a conductor, the electrostatic potential depends sensitively on the material, as well as which surface is selected (its crystal orientation, contamination, and other details). The parameter that gives the best approximation to universality is the Earth-referenced Fermi level suggested above. This also has the advantage that it can be measured with a voltmeter. Discrete charging effects in small systems In cases where the "charging effects" due to a single electron are non-negligible, the above definitions should be clarified. For example, consider a capacitor made of two identical parallel-plates. 
If the capacitor is uncharged, the Fermi level is the same on both sides, so one might think that it should take no energy to move an electron from one plate to the other. But when the electron has been moved, the capacitor has become (slightly) charged, so this does take a slight amount of energy. In a normal capacitor, this is negligible, but in a nano-scale capacitor it can be more important. In this case one must be precise about the thermodynamic definition of the chemical potential as well as the state of the device: is it electrically isolated, or is it connected to an electrode? - When the body is able to exchange electrons and energy with an electrode (reservoir), it is described by the grand canonical ensemble. The value of the chemical potential µ can be said to be fixed by the electrode, and the number of electrons N on the body may fluctuate. In this case, the chemical potential of a body is the infinitesimal amount of work needed to increase the average number of electrons by an infinitesimal amount (even though the number of electrons at any time is an integer, the average number varies continuously): µ = ∂F/∂⟨N⟩ at fixed temperature, where F is the body's free energy regarded as a function of the average electron number ⟨N⟩. - If the number of electrons in the body is fixed (but the body is still thermally connected to a heat bath), then it is in the canonical ensemble. We can define a "chemical potential" in this case literally as the work required to add one electron to a body that already has exactly N electrons, µ'(N, T) = F(N + 1, T) − F(N, T), where F(N, T) is the free energy function of the canonical ensemble, or alternatively as the work obtained by removing an electron from that body, µ''(N, T) = F(N, T) − F(N − 1, T). These chemical potentials are not equivalent, µ ≠ µ' ≠ µ'', except in the thermodynamic limit. The distinction is important in small systems such as those showing Coulomb blockade. The parameter µ (i.e., in the case where the number of electrons is allowed to fluctuate) remains exactly related to the voltmeter voltage, even in small systems. To be precise, then, the Fermi level is defined not by a deterministic charging event by one electron charge, but rather a statistical charging event by an infinitesimal fraction of an electron. Footnotes and references - Kittel, Charles. Introduction to Solid State Physics, 7th Edition. Wiley. - Riess, I. (1997). "What does a voltmeter measure?" Solid State Ionics 95, 327. - Sah, Chih-Tang (1991). Fundamentals of Solid-State Electronics. World Scientific. p. 404. ISBN 9810206372. - Datta, Supriyo (2005). Quantum Transport: Atom to Transistor. Cambridge University Press. p. 7. ISBN 9780521631457. - Kittel, Charles; Herbert Kroemer (1980-01-15). Thermal Physics (2nd Edition). W. H. Freeman. p. 357. ISBN 978-0-7167-1088-2. - Sze, S. M. (1964). Physics of Semiconductor Devices. Wiley. ISBN 0-471-05661-8. - Sommerfeld, Arnold (1964). Thermodynamics and Statistical Mechanics. Academic Press. - "3D Fermi Surface Site". Phys.ufl.edu. 1998-05-27. Retrieved 2013-04-22. - For example: D. Chattopadhyay (2006). Electronics (Fundamentals and Applications). ISBN 978-81-224-1780-7. and Balkanski and Wallis (2000-09-01). Semiconductor Physics and Applications. ISBN 978-0-19-851740-5. - Technically, it is possible to consider the vacuum to be an insulator and in fact its Fermi level is defined if its surroundings are in equilibrium. Typically however the Fermi level is two to five electron volts below the vacuum electrostatic potential energy, depending on the work function of the nearby vacuum wall material. 
Only at high temperatures will the equilibrium vacuum be populated with a significant number of electrons (this is the basis of thermionic emission). - Shegelski, Mark R. A. (May 2004). "The chemical potential of an ideal intrinsic semiconductor". American Journal of Physics 72 (5): 676–678. Bibcode:2004AmJPh..72..676S. doi:10.1119/1.1629090. - Beenakker, C. W. J. (1991). "Theory of Coulomb-blockade oscillations in the conductance of a quantum dot". Physical Review B 44 (4): 1646. Bibcode:1991PhRvB..44.1646B. doi:10.1103/PhysRevB.44.1646.
https://en.wikipedia.org/wiki/Fermi_level
4
When two sets of data are strongly linked together we say they have a High Correlation. The word Correlation is made of Co- (meaning "together"), and Relation - Correlation is Positive when the values increase together, and - Correlation is Negative when one value decreases as the other increases Here we look at linear correlations (correlations that follow a line). Correlation can have a value: - 1 is a perfect positive correlation - 0 is no correlation (the values don't seem linked at all) - -1 is a perfect negative correlation The value shows how good the correlation is (not how steep the line is), and if it is positive or negative. Example: Ice Cream Sales The local ice cream shop keeps track of how much ice cream they sell versus the temperature on that day, here are their figures for the last 12 days: (Table: Ice Cream Sales vs Temperature, with columns "Temperature °C" and "Ice Cream Sales".) And here is the same data as a Scatter Plot: We can easily see that warmer weather leads to more sales, the relationship is good but not perfect. In fact the correlation is 0.9575 ... see at the end how I calculated it. Correlation Is Not Good at Curves The correlation calculation only works well for relationships that follow a straight line. Our Ice Cream Example: there has been a heat wave! It gets so hot that people aren't going near the shop, and sales start dropping. Here is the latest graph: The correlation value is now 0: "No Correlation" ... ! The calculated correlation value is 0 (I worked it out), which means "no correlation". But we can see the data does have a correlation: it follows a nice curve that reaches a peak around 25° C. But the linear correlation calculation is not "smart" enough to see this. Moral of the story: make a Scatter Plot, and look at it! You may see a correlation that the calculation does not. Correlation Is Not Causation "Correlation Is Not Causation" ... which says that a correlation does not mean that one thing causes the other (there could be other reasons the data has a good correlation). Example: Sunglasses vs Ice Cream Our Ice Cream shop finds how many sunglasses were sold by a big store for each day and compares them to their ice cream sales: The correlation between Sunglasses and Ice Cream sales is high Does this mean that sunglasses make people want ice cream? Example: A Real Case! A few years ago a survey of employees found a strong positive correlation between "Studying an external course" and Sick Days. Does this mean: - Studying makes them sick? - Sick people study a lot? - Or did they lie about being sick to study more? Without further research we can't be sure why. How To Calculate How did I calculate the value 0.9575 at the top? I used "Pearson's Correlation". There is software that can calculate it, such as the CORREL() function in Excel or LibreOffice Calc ... ... 
but here is how to calculate it yourself: Let us call the two sets of data "x" and "y" (in our case Temperature is x and Ice Cream Sales is y): - Step 1: Find the mean of x, and the mean of y - Step 2: Subtract the mean of x from every x value (call them "a"), do the same for y (call them "b") - Step 3: Calculate: a × b, a² and b² for every value - Step 4: Sum up a × b, sum up a² and sum up b² - Step 5: Divide the sum of a × b by the square root of [(sum of a²) × (sum of b²)] Here is how I calculated the first Ice Cream example (values rounded to 1 or 0 decimal places): As a formula it is: r = Σ[(x − x̄)(y − ȳ)] / √[ Σ(x − x̄)² × Σ(y − ȳ)² ] - Σ is Sigma, the symbol for "sum up" - (x − x̄) is each x-value minus the mean of x (called "a" above) - (y − ȳ) is each y-value minus the mean of y (called "b" above) You probably won't have to calculate it like that, but at least you know it is not "magic", but simply a routine set of calculations. Note for Programmers You can calculate it in one pass through the data. Just sum up x, y, x², y² and xy (no need for a or b calculations above) then use the formula: r = [ n Σxy − Σx Σy ] / √[ (n Σx² − (Σx)²) × (n Σy² − (Σy)²) ] There are other ways to calculate a correlation coefficient, such as "Spearman's rank correlation coefficient", but I prefer using a spreadsheet like above.
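For readers who do want to code it, here is a minimal Python sketch of that one-pass approach. The function name and the sample data are made up for illustration (they are not the article's ice-cream table); any list of (x, y) pairs will do.

```python
import math

def pearson_one_pass(pairs):
    """Pearson's r in one pass: accumulate n, sum x, sum y, sum x^2, sum y^2, sum xy."""
    n = sx = sy = sxx = syy = sxy = 0.0
    for x, y in pairs:
        n += 1
        sx += x
        sy += y
        sxx += x * x
        syy += y * y
        sxy += x * y
    numerator = n * sxy - sx * sy
    denominator = math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    return numerator / denominator

# Illustrative (temperature, sales) pairs -- not the article's actual figures
data = [(14, 215), (16, 325), (12, 185), (15, 332), (18, 406), (22, 522),
        (19, 412), (25, 614), (23, 544), (18, 421), (23, 445), (17, 408)]
print(round(pearson_one_pass(data), 4))  # a value close to +1: a strong positive correlation
```

Note that the denominator is zero if either variable is constant, in which case the correlation is undefined; a real program would guard against that.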
http://www.mathsisfun.com/data/correlation.html
4.15625
Scientists can probe the composition of clouds and atmospheric pollution with lasers using a process known as light detection and ranging (LiDAR). By measuring the amount of light reflected back by the atmosphere, it is possible to calculate the concentrations of fine drops of noxious chemicals such as nitrous oxide, sulfur dioxide and ozone. More detailed information regarding the size of the liquid droplets could lead to better understanding of the pollutants' movements, but such data is harder to come by. To that end, findings published in the July 15 issue of Physical Review Letters could prove helpful. Researchers report that extremely short laser pulses can generate an intense plasma within a minuscule water droplet, causing light to be reflected preferentially back toward the laser source. Liquid droplets of different sizes focus the laser pulse to varying degrees, producing distinctive wavelengths of emitted light that could provide important clues to aerosol size distribution. Jean-Pierre Wolf of the University of Lyon 1 in France and his colleagues flashed femtosecond laser pulses on individual water droplets that were less than 70 microns in diameter. The team found that the light reflected by a microscopic sphere back toward the laser source was 35 times more intense than the light sent in any other direction. The researchers posit that this phenomenon arises because of a nanosized plasma that forms within the water drop and is hot enough to emit in the visible spectrum. Though further tests are required to determine how the technique applies to situations that involve more than one liquid drop, the researchers suggest that ultrashort high-intensity laser pulses may enhance LiDAR signals. "The backward-enhanced plasma emission spectrum from water droplets or biological agents," they write, "could be attractive for remotely determining the composition of atmospheric aerosol."
http://www.scientificamerican.com/article/plasma-in-water-droplets/
4.15625
The word symbiosis literally means 'living together,' but when we use the word symbiosis in biology, what we're really talking about is a close, long-term interaction between two different species. There are many different types of symbiotic relationships that occur in nature. In many cases, both species benefit from the interaction. This type of symbiosis is called mutualism. An example of mutualism is the relationship between bullhorn acacia trees and certain species of ants. Each bullhorn acacia tree is home to a colony of stinging ants. True to its name, the tree has very large thorns that look like bull's horns. The ants hollow out the thorns and use them as shelter. In addition to providing shelter, the acacia tree also provides the ants with two food sources. One food source is a very sweet nectar that oozes from the tree at specialized structures called nectaries. The second food source is in the form of food nodules called Beltian bodies that grow on the tips of the leaves. Between the nectar and the Beltian bodies, the ants have all of the food they need. So, the ants get food and shelter, but what does the tree get? Quite a lot, actually: the ants are very territorial and aggressive. They will attack anything and everything that touches the tree - from grasshoppers and caterpillars to deer and humans. They will even climb onto neighboring trees that touch their tree and kill the whole branch and clear all vegetation in a perimeter around their tree's trunk, as well. The ants protect the tree from herbivores and remove competing vegetation, so the acacia gains a big advantage from the relationship. In this case, the acacia is considered a host because it is the larger organism in a symbiotic relationship upon or inside of which the smaller organism lives, and the ant is considered to be a symbiont, which is the term for the smaller organism in a symbiotic relationship that lives in or on the host. An astounding number of mutualistic relationships occur between multicellular organisms and microorganisms. Termites are only able to eat wood because they have mutualistic protozoans and bacteria in their gut that help them digest cellulose. Inside our own bodies, there are hundreds of different types of bacteria that live just in our large intestine. Most of these are uncharacterized, but we do know a lot about E. coli, which is one of the normal bacteria found in all human large intestines. Humans provide E. coli with food and a place to live. In return, the E. coli produce vitamin K and make it harder for pathogenic bacteria to establish themselves in our large intestine. Whether or not most of the other species of bacteria found in our digestive tract aid in digestion, absorption, or vitamin production isn't completely known, but they all make it harder for invasive pathogens to establish a foothold inside us and cause disease. Now, let's say by some chance, a pathogenic bacteria does manage to establish itself in a person's large intestine. The host provides a habitat and food for the bacteria, but in return, the bacteria cause disease in the host. This is an example of parasitism or an association between two different species where the symbiont benefits and the host is harmed. Not all parasites have to cause disease. 
Lice, ticks, fleas, and leeches are all examples of parasites that don't usually cause disease directly, but they do suck blood from their host, and that does cause some harm, not to mention discomfort, to their host. Parasites can also act as vectors or organisms that transmit disease-causing pathogens to other species of animals. The bacteria that cause the bubonic plague are carried by rodents, such as rats. The plague bacteria then infect fleas that bite the rats. Infected fleas transmit the bacteria to other animals they bite, including humans. In this case, both the flea and the bacteria are parasites, and the flea is also a vector that transmits the disease-causing bacteria from the rat to the person. Commensalism is an association between two different species where one species enjoys a benefit, and the other is not significantly affected. Commensalism is sometimes hard to prove because in any symbiotic relationship it is pretty unlikely that a very closely associated organism has no effect whatsoever on the other organism. But, there are a few examples where commensalism does appear to exist. For example, the cattle egret follows cattle, water buffalo, and other large herbivores as they graze. The herbivores flush insects from the vegetation as they move, and the egrets catch and eat the insects when they leave the safety of the vegetation. In this relationship the egret benefits greatly, but there is no apparent effect on the herbivore. Some biologists maintain that algae and barnacles growing on turtles and whales have a commensalistic relationship with their hosts. Others maintain that the presence of hitchhikers causes drag on the host as it moves through the water and therefore the host is being harmed, albeit slightly. In either case, it is unlikely that the fitness of the host is really affected by the hitchhikers, so commensalism is probably the best way to describe these relationships as well. Amensalism is an association between two organisms of different species where one species is inhibited or killed and the other is unaffected. Amensalism can occur in a couple different ways. Most commonly, amensalism occurs through direct competition for resources. For example, if there is a small sapling that is trying to grow right next to a mature tree, the mature tree is likely to outcompete the sapling for resources. It will intercept most of the light and its mature root system will do a much better job of absorbing water and nutrients - leaving the sapling in an environment without enough light, water, or nutrients and causing it harm. However, the large tree is relatively unaffected by the presence of the sapling because it isn't blocking light to the taller tree, and the amount of water and nutrients it can absorb is so small that the mature tree will not notice the difference. Amensalism can also occur if one species uses a chemical to kill or inhibit the growth of another species. A very famous example of this type of amensalism led to the discovery of the antibiotic known as penicillin. Alexander Fleming observed amensalism occurring on a plate of Staphylococcus aureus surrounding a contaminating spot of Penicillium mold. He was the first to recognize that the mold was secreting a substance that was killing the bacteria surrounding it. Other scientists later developed methods for mass-producing the bacteria-killing chemical, which we now call penicillin. Let's review. In biology, symbiosis refers to a close, long-term interaction between two different species. 
But, there are many different types of symbiotic relationships. Mutualism is a type of symbiosis where both species benefit from the interaction. An example of mutualism is the relationship between bullhorn acacia trees and certain species of ants. The acacia provides food and shelter for the ants and the ants protect the tree. Parasitism is an association between two different species where the symbiont benefits and the host is harmed. Fleas, ticks, lice, leeches, and any bacteria or viruses that cause disease are considered to be parasitic. Commensalism is an association between two different species where one species enjoys a benefit and the other is not significantly affected. Probably the best example of commensalism is the relationship between cattle egrets and large herbivores. The cattle egret benefits when insects are flushed out of the vegetation while the herbivore is unaffected by the presence of the cattle egret. And finally, amensalism is an association between two organisms of different species where one species is inhibited or killed and the other is unaffected. This can occur either through direct competition for resources, or it can happen when one species uses a chemical to kill or inhibit the growth of other species around it. A classic example of amensalism is the ability of Penicillium mold to secrete penicillin, which kills certain types of bacteria.
http://study.com/academy/lesson/symbiotic-relationships-mutualism-commensalism-amensalism.html
4
The following basketball resource, designed for Middle School students (aged 11-14) in the USA, has been carefully aligned with NASPE National Standards, and addresses: Moving efficiently in general space, throwing and catching, Muscular strength and endurance, aerobic capacity, Cooperation, leadership, accepting challenges. In 'This is How We Roll' the object is for O(ffence) to score a basket off of a pick and roll. Learning the pick and roll gives your team another way to take high percentage shots. Please also see the accompanying Basketball Practice Plan. In this unit, pupils will focus on developing more advanced skills and apply them in game situations in order to outwit opponents. Pupils will prepare tournaments and compete in them. They will work in groups taking on a range of roles and responsibilities to help each other to prepare and improve as a team and to develop a deeper understanding about healthy lifestyles and fitness.
https://www.pescholar.com/pe/resource/activity/basketball/
4.21875
Explain how a lunar eclipse occurs. A lunar eclipse occurs when the Sun, Moon, and Earth align as the Moon moves into Earth's shadow. Identify two properties of Earth that cause it to have changing seasons. Earth's tilted axis and Earth's revolution around the Sun are the properties that lead to Earth's changing seasons. Explain the effect of each of the properties you named in the previous question. During winter in Texas, Earth's tilt causes the Northern Hemisphere to be pointed away from the Sun. This means the Sun's rays are spread out over a large area. During summer, the Northern Hemisphere is pointed toward the Sun. Sunlight is less spread out, so areas get more solar energy and heat up. As Earth revolves around the Sun, the tilt of its axis does not change. So, when Earth gets to the other side of the Sun, it is tilted so that the Northern Hemisphere is away from the Sun. Describe how the length of the days in the Northern Hemisphere changes with the four seasons. In the Northern Hemisphere, the longest days are in the summer months. The day length decreases through the fall. In winter, days are shortest. Day length increases during the spring. Earth's revolution around the Sun is a major cause of the changing day length.
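As a small numerical illustration of that last answer (not part of the original slides), the standard sunrise-equation approximation can estimate daylight hours from latitude and the Sun's declination. The latitude below (roughly that of Texas) and the function name are choices made here for the example.

```python
import math

def day_length_hours(latitude_deg, declination_deg):
    """Approximate daylight hours from the sunrise equation:
    cos(H) = -tan(latitude) * tan(declination); day length = 2H in hours."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))           # clamp for polar day / polar night
    hour_angle = math.degrees(math.acos(cos_h))  # half the daylight arc, in degrees
    return 2 * hour_angle / 15.0                 # 15 degrees of hour angle per hour

# About 31 degrees north: solstices and an equinox
for label, dec in [("summer solstice", 23.44), ("equinox", 0.0), ("winter solstice", -23.44)]:
    print(label, round(day_length_hours(31.0, dec), 1), "hours")
```

The printout shows roughly 14 hours of daylight at the summer solstice, 12 at the equinoxes, and about 10 in midwinter, matching the qualitative pattern described above.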
http://www.slideshare.net/DavidSP1996/spacecyclesthinking
4
The vaccine caused the mice to create antibodies against neuraminidase, a flu protein that lets newly born virus particles escape from infected cells. The researchers also found signs that some humans carry similar antibodies. "It's hard to prove but my gut feeling would be, if people had high enough levels of this antibody, there certainly would be a reduction in severity" from H5N1 infection, says virologist Richard Webby of St. Jude Children's Research Hospital in Memphis, Tenn., whose group performed the research. "But that's the million dollar question: How much of this antibody do you have to have?" Researchers name flu viruses based on the type of hemagglutinin (HA) and neuraminidase (NA) proteins they contain; hence the numbers after "H" and "N" in H5N1. Flu vaccines are designed to prevent infection by eliciting antibodies against HA, which the virus uses to break into cells lining the airways. But experts have speculated that antibodies against one type of neuraminidase could provide protection against multiple flu viruses sharing the same NA type. Some studies suggest, for example, that the 1968 H3N2 flu pandemic, which killed 1 million people worldwide, was less severe than it might have been because of neuraminidase antibodies left over from the 1957 H2N2 pandemic, which killed twice as many people. Neuraminidase antibodies would not prevent a person from getting sick with the flu, because they do not stop the virus from infecting cells. To see if they would suffice to make H5N1 infection less severe, Webby and his co-workers injected mice with DNA for the neuraminidase gene from human H1N1, one of three flu subtypes covered by this winter's flu shot. Next they infected the mice with avian H5N1. After two weeks, five out of 10 of these mice survived, but none of the control mice lived. The researchers also looked at blood serum samples from human volunteers. Of 38 samples, 31 contained antibodies against H1N1 neuraminidase, presumably from past infections or vaccinations. In test tubes, seven of the serum samples inhibited the activity of neuraminidase from H5N1. The results are "very intriguing" but "it is premature to conclude that immunity induced by the [H1N1] virus will provide significant protection from illness associated with avian influenza H5N1," caution Laura Gillim-Ross and Kanta Subbarao of the National Institute of Allergy and Infectious Diseases in an editorial accompanying the report, published online February 12 by PLoS Medicine. They note that with fewer than 300 confirmed human cases of H5N1 infection, researchers would be hard pressed to determine the amount of antibodies needed to confer protection. "There's no doubt we've got to focus on hemagglutinin" for developing pandemic flu vaccines, Webby says. The amount of neuraminidase in seasonal flu shots, he says, is unknown and likely varies from batch to batch.
http://www.scientificamerican.com/article/can-seasonal-flu-shots-help/
4.25
Scatter Plot Tool Create a scatter plot using the form below. All you have to do is type your X and Y data. Optionally, you can add a title and names for the axes. More about scatterplots: Scatterplots are bivariate graphical devices. The term "bivariate" means that it is constructed to analyze the type of association between two interval variables \(X\) and \(Y\). The data need to come in the form of ordered pairs \((X_i, Y_i)\), and those pairs are plotted in a set of Cartesian axes. Typically, a scatterplot is used to assess whether or not the variables \(X\) and \(Y\) have a linear association, but there could be other types of non-linear associations (quadratic, exponential, etc.). The existence of a linear association is assessed by establishing how tightly the data cluster around a straight line. Data pairs \((X_i, Y_i)\) that are loosely clustered around a straight line have a weak or non-existent linear association, whereas data pairs \((X_i, Y_i)\) that are tightly clustered around a straight line have a strong linear association. A numerical (quantitative) way of assessing the degree of linear association for a set of data pairs is by calculating the correlation coefficient.
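As a rough sketch of what a tool like this does behind the scenes, the Python snippet below (using numpy and matplotlib, which are not necessarily what this site uses) plots made-up (Xi, Yi) pairs and reports the correlation coefficient mentioned above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up (x, y) pairs with a roughly linear relationship
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient

plt.scatter(x, y)
plt.title(f"Scatter plot (r = {r:.3f})")
plt.xlabel("X (independent variable)")
plt.ylabel("Y (dependent variable)")
plt.show()
```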
http://www.mathcracker.com/scatter_plot.php
4.1875
The Mach number is a ratio between the aircraft's speed (v) and the speed of sound (a). That is, M = v/a The Mach number is named for the Austrian physicist, Ernst Mach (1838-1916). Technically, as you can see, the Mach Number is not a speed but a speed ratio. However, it is used to indicate how fast one is going when compared to the speed of sound. Scientifically, the speed at which sound travels through a gas depends on 1) the ratio of the specific heat at constant pressure to the specific heat at constant volume, 2) the temperature of the gas, and 3) the gas constant (pressure/density X temperature). This is represented by the formula: a = Square Root(g R T) a = speed of sound g = ratio of the specific heat at constant pressure to the specific heat at constant volume R = universal gas constant T = Temperature (Kelvin or Rankine) Fortunately, in the earth's atmosphere (a gas) several of these variables are constant. In our atmosphere, g is a constant 1.4. R is a constant 1718 ft-lb/slug-degrees Rankine (in the English system of units) or 287 N-m/kg-degree Kelvin (in SI units). With g and R as constant values, this results in the speed of sound depending solely on the square root of the temperature of the atmosphere. Since aircraft and engines are affected by atmospheric conditions and these conditions are rarely (if ever) the same, we use a "standard day atmosphere" to give a basis for determining aircraft performance characteristics. The temperature for this standard day is 59 degrees Fahrenheit (15 degrees Celsius) or 519 degrees Rankine (288 degrees Kelvin) at sea level. Thus, the speed of sound at sea level on a standard day is: a = SQRT[ (1.4) X (1718) X (519) ] = 1116 feet/second To convert this to miles per hour use the formula 1 foot/second = 0.682 miles per hour (statute miles). 1116 X 0.682 = 761 miles per hour. Explanation: How can you confirm that 0.682 times the number of feet per second will provide the miles per hour equivalent? Let's do the math! Convert 1 foot per second to feet per minute (1 ft/sec x 60 seconds/min) = 60 ft/min Convert feet per minute to feet per hour (60 ft/min x 60 minutes/hr) = 3600 ft/hr Thus, 1 foot/second = 3600 feet/hour. All that is required now is to convert feet into miles. One statute mile = 5280 feet. Thus we divide 3600 by 5280 and our answer is 0.682. One method commonly used to prevent reinventing the wheel is to develop charts with ratios to mathematical equations. One chart available to F-15E aircrews is the "Standard Atmosphere Table." This table provides a "Speed of Sound ratio" column. This column provides the speed of sound (standard day data) ratio for any altitude based on the speed of sound at sea level of 761 MPH. (Table: Altitude in feet vs. Speed of Sound ratio.) Using the chart, on a standard day, the speed of sound at 10,000 feet is 761 x 0.9650 or 734 miles per hour. The ratio continues to get smaller until 37,000 feet, where it remains at 0.8671. 
Any idea why? HINT: Remember what we stated earlier was the ONLY factor that affected the speed of sound in the earth's atmosphere? If you stated that the temperature of the atmosphere stopped decreasing, you're correct! At 37,000 feet, the temperature is a balmy -69.7 degrees Fahrenheit (or -56.5 degrees Celsius). While several supersonic aircraft like the F-15E are capable of flying faster than twice the speed of sound, they can only reach these speeds at very high altitudes where the air is thin and extremely cold. At sea level, supersonic aircraft are limited to speeds just above Mach 1 due to the atmosphere's temperature and density ("thicker" air that causes more drag on the aircraft).
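Here is a short Python sketch of the calculation described above: it evaluates a = sqrt(g R T) in English units and forms the Mach number M = v/a. The function names are mine; the standard-day temperature is the 519 degrees Rankine figure used on this page.

```python
import math

GAMMA = 1.4          # ratio of specific heats for air (the "g" above)
R_ENGLISH = 1718.0   # ft-lb per slug per degree Rankine

def speed_of_sound_fps(temp_rankine):
    """a = sqrt(g * R * T), in feet per second."""
    return math.sqrt(GAMMA * R_ENGLISH * temp_rankine)

def mach_number(speed_mph, temp_rankine):
    """M = v / a, converting mph to ft/s with the page's 0.682 factor."""
    speed_fps = speed_mph / 0.682
    return speed_fps / speed_of_sound_fps(temp_rankine)

a = speed_of_sound_fps(519.0)                      # standard day at sea level
print(round(a), "ft/s, or about", round(a * 0.682), "mph")   # roughly 1116-1117 ft/s, ~761 mph
print("Mach number at 761 mph at sea level:", round(mach_number(761.0, 519.0), 2))
```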
http://www.allstar.fiu.edu/aero/mach.htm
4
Even baby sharks need a safe haven—and now scientists have found the oldest known nursery for the predatory fish. Several 230-million-year-old teeth and egg capsules uncovered at a fossil site in southwestern Kyrgyzstan suggest hundreds of young sharks once congregated in a shallow lake, a new study says. Called hybodontids, the animals were likely bottom feeders, like modern-day nurse sharks. Mothers would've attached their eggs to horsetails and other marshy plants along the lakeshore. Once born, the Triassic-era babies would've had their pick from a rich food supply of tiny invertebrates, while dense vegetation offered protection from predators. Yet there's no evidence of any fin that rocked the cradle, so to speak—the babies were likely on their own, said study leader Jan Fischer, a paleontologist at the Geologisches Institut at TU Bergakademie Freiberg in Germany. (See "Shark Nursery Yields Secrets of Breeding.") In general, "shark-nursery areas are very important, because they are essential habitats for sharks' survival," Catalina Pimiento, a biologist at the Florida Museum of Natural History, said in an email. "This study expands the time range in which sharks [are known to] have used nursery areas in order to protect their young," noted Pimiento, who wasn't involved in the study. "This expansion of the range of time reinforces the importance of such zones." Ancient Sharks Lived in Fresh Water? When Fischer and colleagues first found shark egg capsules at the field site, they knew teeth shed by the newborns must be also embedded in the earth. So the team collected several sediment samples, took them to Germany, and dissolved the sediments in the lab. The work yielded about 60 teeth, all of which belonged to babies, except for a single adult tooth. The high number of baby teeth plus freshwater chemical signatures in those teeth suggest the ancient sharks spawned in fresh water, far from the ocean, according to the study, published in the September issue of the Journal of Vertebrate Paleontology. Fischer also suspects that the ancient sharks spent their whole lives in lakes and rivers, in contrast with modern egg-laying sharks, whose life cycles are exclusively marine, he noted. It's possible that, like modern-day salmon, shark adults could have migrated hundreds of kilometers upstream between the ocean and the nursery to spawn. But Fischer finds this scenario "improbable," mainly because of the sheer distance that the fish would have to cover. (Related: "Sharks Travel 'Superhighways,' Visit 'Cafes.'") Shark Fossils a Rare Find Whatever the answer, learning more about ancient sharks via fossils is rare, the study authors noted. Sharks' cartilaginous skeletons decay quickly, leaving just tiny clues as to their lifestyles. (Also see "Oldest Shark Braincase Shakes Up Vertebrate Evolution.") The "fact they got these shark teeth fossils with the egg capsules is what makes it really neat," noted Andrew Heckert, a vertebrate paleontologist at Appalachian State University in North Carolina. "Usually you find either a trace fossil [such as a skin impression] or a body fossil, and you're always trying to make the argument that these represent one or the other" theory, said Heckert, who was not involved in the study. In other words, having only one type of fossil is often not enough to draw definitive conclusions about an ancient species' behavior. "Those egg capsules," he added, "are spectacular."
http://news.nationalgeographic.com/news/2011/09/110909-baby-sharks-teeth-nursery-lakes-animals-science/
4.03125
While everyone knows about the “five senses” – sight, hearing, smell, taste, touch – little attention is paid to another important sense, the sense of balance, unless problems develop. Many of the neuromuscular diseases affect balance. The sense of balance informs the brain about where one’s body is in space, including what direction the body moves and points and whether the body remains still or moves. The sense of balance relies on sensory input from a number of systems. Disruption in any of the following systems can affect balance and equilibrium: --Proprioception involves the sense of where one’s body is in space. Sensory nerves in the neck, torso, feet and joints provide feedback to the brain that allows the brain to keep track of the position of the legs, arms, and torso. The body then can automatically make tiny changes in posture to help maintain balance. --Sensors in the muscles and joints also provide information regarding which parts of the body are in motion or are still. --Visual information provides the brain with observations regarding the body’s placement in space. In addition, the eyes observe the direction of motion. --The inner ears (labyrinth and vestibulocochlear nerve) provide feedback regarding direction of movement, particularly of the head. --Pressure receptors in the skin send information to the brain regarding which parts of the body touch the ground (when standing), a chair (when sitting), or the bed (when reclining). --The central nervous system (the brain and spinal cord) integrates and processes the information from each of these sources to provide one with a “sense of balance.” Problems with balance may occur in many individuals with neuromuscular disease. For example, problems with proprioception often occur in individuals with diseases such as Friedreich’s ataxia, Charcot Marie Tooth, myopathy, and spinal muscular atrophy due to loss of sensation in the joints. Sensory loss at the skin may affect the pressure receptors of a person with Charcot Marie Tooth. Visual losses and severe muscle weakness can lead to balance difficulties for those with mitochondrial myopathy. Loss of muscle strength occurring in many of the neuromuscular diseases can also contribute to balance problems. Problems with balance can also contribute to problems such as poor gait, clumsiness, and falling in children and adults. Falling can cause injury, including minor injuries such as cuts and bruises, as well as major injuries such as bone fractures and head injury. Poor balance can also lead to sensations such as disequilibrium, light-headedness, dizziness, and vertigo. Balance problems may also lead to social embarrassment. Individuals with problems in one of the systems related to balance may rely more heavily on other systems for maintaining balance. For example, an individual with a deficit in proprioception may rely more heavily on visual input to maintain balance. Balance may then be more obviously impaired when input from that sense is not available, for example when walking in darkness. Individuals with neuromuscular disease may benefit from consulting their physicians regarding methods for improving balance. Methods may include learning new movement habits, improving concentration and attention to movement, engaging in physical therapy and appropriate moderate exercise, making home modifications, and using assistive devices. 
Even though sense of balance has often been overlooked unless problems develop, one’s sense of balance provides important sensory information that impacts quality of life. A better understanding of this important sense can help individuals to cope better with the challenges of living with neuromuscular disease.
http://www.bellaonline.com/ArticlesP/art173822.asp
4.28125
Average Rate of Change A rate is a value that expresses how one quantity changes with respect to another quantity. For example, a rate in "miles per hour" expresses the increase in distance with respect to the number of hours we've been driving. If we drive at a constant rate, the distance we travel is equal to the rate at which we travel multiplied by time: distance = rate × time. Dividing both sides by time, we have rate = distance / time. If we drive at 50 mph for two hours, the distance we'll travel is 50 mph × 2 hrs = 100 miles. If we drive at a constant speed for 3 hours and travel 180 miles, we must have been driving 180 miles / 3 hrs = 60 mph. In real life, though, we don't drive at a constant rate. When we start our trip through Shmoopville, we first climb into the car, traveling at a whopping 0 miles per hour. We speed up gradually (hopefully), maybe need to slow down and speed up again for traffic lights, and finally slow down back to a speed of 0 when reaching our destination, The Candy Stand. We can still divide the distance we travel by the time it takes for the trip, but now we'll find our average rate: average rate = (total distance) / (total time). To calculate the average rate of change of a dependent variable y with respect to the independent variable x on a particular interval, we need to know - the size of the interval for the independent variable, and - the change in the dependent variable from the beginning to the end of the interval. Depending on the problem, we may also need to know - the units of the independent and dependent variables. Then we can find average rate of change = (change in y) / (change in x). The average rate of change of y with respect to x is the slope of the secant line between the starting and ending points of the interval. Relating this to the more math-y approach, think of the dependent variable as a function f of the independent variable x. Let h be the size of the interval for x, so h = Δx, and let a be one endpoint of the interval, so the endpoints are a and a + h, with corresponding y-values f(a) and f(a + h). Then the slope of the secant line is (f(a + h) − f(a)) / ((a + h) − a) = (f(a + h) − f(a)) / h. We also write this as Δy / Δx = (f(a + h) − f(a)) / h. This is the definition of the slope of the secant line from (a, f(a)) to (a + h, f(a + h)).
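A minimal code sketch of this definition (the function names and the sample trip are made up for illustration):

```python
def average_rate_of_change(f, a, h):
    """Slope of the secant line from (a, f(a)) to (a + h, f(a + h)):
    (f(a + h) - f(a)) / h."""
    return (f(a + h) - f(a)) / h

# Hypothetical distance driven (miles) as a function of time (hours)
def distance(t):
    return 60 * t - 5 * t ** 2   # a trip that gradually slows down

# Average speed (average rate of change of distance) over the first 3 hours
print(average_rate_of_change(distance, 0, 3))   # 45.0 mph for this made-up trip
```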
http://www.shmoop.com/derivatives/average-rate-change.html
4.03125
Some books and web-sites describe the tides as being a result of "centrifugal force" due to the Earth and Moon moving. But that's not actually correct, because the tides would exist even if the Earth and Moon weren't moving! "Velocities" are not really involved. So those books are actually wrong, even textbooks! If the Earth and Moon were somehow standing still, there would still be tidal bulges, at least for a while until they later crashed together due to gravity! Even the tide that exists on the backside of the Earth would exist! Newton's equation for Gravitation is all you need. It is in all books that describe gravitation. F = G * M1 * M2 / R². Gravitation therefore depends on the "mass" (most people incorrectly think of it as weight) of each of the objects involved. It is an "inverse square" relationship, where the attractive force depends on the distance between the two objects. There is also a constant 'G' that just makes the numbers come out right for the system of measurements (feet, meters, seconds, etc) that we use. Because the ocean surface directly beneath the Moon is closer to the Moon than the center of the Earth is, the water there feels a slightly greater acceleration toward the Moon than the Earth as a whole does. One way of looking at it is that the water from other Oceans would try to flow to that spot, because that "excess acceleration" is essentially trying to "pile up water" there. So, even if the Earth and Moon did not move, or rotate, it would form a "hill of water" or "tidal bulge" at that location, directly under the location of the Moon, due to the upward "excess acceleration" due to the Moon's gravitation and that lesser distance. It turns out that that "tidal bulge" would be a little over two feet high, not noticeable in the enormity of the Earth! But the Earth and Moon both move! Specifically, the Earth rotates once every day. This makes the Earth actually rotate UNDERNEATH that (unavoidable) tidal bulge that the Moon's gravitation constantly causes. It seems to us that there are tides that move across the oceans, but the actual tidal bulges do not really move very much, and it is actually that the Earth (and us) are rotating past that bulge that makes it seem to be moving to us. By the way, each time the Moon is overhead, YOU are also being attracted UPWARD by the Moon, as well as downward by the Earth. It actually changes your weight! But only by a really, really tiny amount! (around one part in nine million, one one-hundredth of a gram!) The difference between the Moon's pull on the near-side water and its pull on the Earth's center is G * Mmoon * [1/(D − R)² − 1/D²], where D is the distance from the Moon to the Earth's center and R is the Earth's radius. Since R is small compared to D (about 1/60 of it) this is nearly equal to G * Mmoon * (2 * D * R) / D⁴, or (a constant) / D³. Tidal acceleration is therefore approximately proportional to the inverse CUBE of the distance of the attracting body. If the Moon were only half as high above the Earth, the tidal accelerations would be EIGHT TIMES as great as they are now! As it happens, both the Moon and the Sun cause such tides in our oceans. The Moon is about 400 times closer than the Sun, so it causes a tidal acceleration equal to 400³ or 64 million times that of an identical mass. But the Moon's mass is only about 1/27,000,000 that of the Sun. The result is that the Moon causes a tidal acceleration that is about 64,000,000/27,000,000 or 64/27, or a little more than double that of the Sun. That's why the tides due to the Moon are larger than those due to the Sun. At different times of a month, the Sun and Moon can cause tides that add to each other (called Spring Tides) (at a Full Moon or a New Moon, when their effects are lined up.) However, at First Quarter or Third Quarter Moon, the tides caused by the two sort of cancel each other out (with the Moon always winning!) and so there are lower tides then, called Neap Tides. 
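To make that comparison concrete, here is a small Python sketch that evaluates the 2 * G * M * R / D³ approximation for the Moon and for the Sun. The lunar figures are the ones used later in this article; the solar mass and distance are standard textbook values that the article itself does not quote.

```python
G = 6.672e-11        # m^3 kg^-1 s^-2, the value used in this article
R_EARTH = 6.378e6    # m, Earth's radius

def tidal_acceleration(mass_kg, distance_m):
    """Approximate differential (tidal) acceleration: 2 * G * M * R / D^3."""
    return 2.0 * G * mass_kg * R_EARTH / distance_m ** 3

moon = tidal_acceleration(7.3483e22, 3.844e8)   # values used in this article
sun  = tidal_acceleration(1.989e30, 1.496e11)   # standard solar mass and distance (assumed here)

print(f"Moon: {moon:.3e} m/s^2")                # around 1.1e-6 m/s^2
print(f"Sun : {sun:.3e} m/s^2")
print(f"Moon/Sun ratio: {moon / sun:.2f}")      # a little more than 2, as stated above
```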
It's possible to even estimate the size of the tides, using that same result above, G * Mmoon * (2 * D * R) / D⁴, or G * Mmoon * (2 * R) / D³. We know the values of all those quantities! In the metric system, G = 6.672 * 10⁻¹¹, Mmoon = 7.3483 * 10²² kg, R = 6.37814 * 10⁶ m, and D = 3.844 * 10⁸ m. This gives a result for the relative acceleration as being 1.129 * 10⁻⁶ m/s², about one nine-millionth of the acceleration of the Earth's gravity on us. In case you are curious, we know that the acceleration due to gravity of the Earth is around 9.8 m/s². So the actual downward acceleration on us and water is reduced by about 1/8,700,000, when the Moon is at its average distance, and the Moon is overhead. If you happen to weigh 200 lbs (90 kg), your scale would show you lighter by about 1/2500 ounce (or 0.01 gram), when the Moon was overhead as compared to six hours earlier or later. That is not enough that you could ever be aware of it or measure it. That 1.129 * 10⁻⁶ m/s² is the distorting ACCELERATION, and that number is precise. Of course, it depends on the current distance of the Moon which is in an elliptic orbit, and also the celestial latitude of the Moon for the specific location. A truly precise value is difficult since it changes every minute! We can now compare two locations on Earth regarding this matter, one directly above the other. We know that we have F = G * M1 * M2 / R² for both locations. By eliminating the mass of our object, we have a = G * M1 / R². So we could make a proportion for the two locations: a_lower / a_higher = (G * Mearth / R_lower²) / (G * Mearth / R_higher²) The mass of the Earth and G cancel out and we have the proportion of the accelerations equal to R_higher² / R_lower²: a_lower / a_higher = R_higher² / R_lower² In our case, we know the proportion of the accelerations: they differ by 1.129 * 10⁻⁶ m/s² out of 9.8 m/s², so the left fraction is 1.0000001152. The radius R therefore has to be in the proportion of 1.0000000576 so that its square would be the acceleration proportion. If we multiply this proportional difference by the radius of the Earth, 6.378 * 10⁶ meters, we get 0.367 meter, which is the required difference in Earth radius to account for the difference in the acceleration. If the Moon were NOT there, the equilibrium height of the ocean would have been the radius of the Earth. We have just shown that due to the differential acceleration of the Moon on the water, we have a new equilibrium radius of the Earth's oceans there which is around a third of a meter greater. This suggests that the oceans' tidal bulge directly under where the Moon is should be around 0.367 meter or 14.5 inches high out in the middle of the ocean. The Sun's tidal bulge is similarly calculated to be around 0.155 meter or 6.1 inches. When both tides match up, at Spring Tide at Full Moon or New Moon, that open ocean tide should be around 0.522 meter or 21 inches high. When they compete, at Neap Tide at First or Third Quarter Moon, they should be around 0.212 meter or 8.5 inches high in the open ocean. These numbers are difficult to measure experimentally, but all experiments have given results that are close to these theoretical values. However, the Earth itself also "bends" due to the tidal effects and so the Earth itself (the ocean bottom) has an "Earth tide" that occurs. It is not well known as to size, and it is certainly small, but estimates are that it is probably just a few inches in rise/fall. 
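The same estimate can be scripted. The sketch below repeats the proportion argument above in Python: it computes the differential acceleration for water on the near side (distance D − R) and on the far side (distance D + R, discussed just below) and converts each into an equilibrium change of radius. The function name is mine.

```python
G = 6.672e-11
M_MOON = 7.3483e22    # kg
R_EARTH = 6.37814e6   # m
D = 3.844e8           # m, mean Earth-Moon distance
g = 9.8               # m/s^2, Earth's surface gravity

def bulge_height(distance_to_water):
    """Equilibrium rise of the ocean surface from the differential acceleration
    between the water and the Earth's center (the proportion argument above)."""
    da = abs(G * M_MOON / distance_to_water ** 2 - G * M_MOON / D ** 2)
    # a_lower/a_higher = (R_higher/R_lower)^2, so the fractional change in radius
    # is about half the fractional change in acceleration:
    return 0.5 * (da / g) * R_EARTH

print(round(bulge_height(D - R_EARTH), 3), "m  (near-side bulge; the 0.367 m above)")
print(round(bulge_height(D + R_EARTH), 3), "m  (far-side bulge; slightly smaller)")
```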
The actual distortion that occurs in the Earth and in the waters of the oceans is extremely complex, so we are just using an estimate here. (We are preparing another web-page that discusses the actual flow patterns of the tides in the oceans of the Earth.) In any case, this then results in open ocean tides being around a foot or foot and a half high, which seems to agree with general data, such as at remote islands in the middle of oceans. For the water, we have been assuming that water can flow fast enough to balance out, so that the level of water would be at an iso-gravitational constant value. This is not really the case, and water takes a while to build up that tidal bulge. If the Earth rotated REALLY slowly, it would be more nearly true. But at the Equator, the rotation of the Earth is over 1000 miles per hour, and the depth of the oceans causes a limit on the speed of deep ocean waves to be around 750 mph. This results in the tidal bulge always being dragged or carried along by the rapidly rotating Earth, to a location which is actually not underneath where the Moon is! The tidal bulge is NEVER directly under the Moon as it is always shown in school books! Instead, the reality is that the tidal bulges are generally several thousand miles away from that location. We see reason to cut some slack in school books regarding this, as it would just add an additional complication into the basic concept of the tides. Notice that no centrifugal force was ever mentioned in this calculation, and it is a direct application of Newton's equation for gravitation. The books that, in attempting to describe why tides exist, DO choose to use arguments that involve centrifugal and centripetal forces, can therefore be rather misleading. There IS a way to do that and get an approximately correct result, but I think that it really overlooks the fact that it all is purely a simple gravitational result, and it gives the impression of tides only occurring due to a "whirling" effect of the Earth and Moon, which is NOT true. The only real value I see in those approaches is that the tide on the backside of the Earth might SEEM to be a little more logical. The mathematical description given above accurately describes that opposite tide, too, and even shows that it is always slightly less high, which might not seem as obvious when trying to use the "whirling" idea. The same is still true when considering the "solid earth" and a particle of water in an ocean opposite where the Moon happens to be. It might seem amazing, but the Moon actually gives the entire Earth a greater acceleration, because of being closer, than it does for that puny bit of water in that backside ocean! And so all the equations above can be used again, with the one difference that our new distance from the Moon to the water is D + R instead of D − R. When you plug in the numbers, you get a slightly smaller differential acceleration, 1.074 * 10⁻⁶ m/s² (instead of the slightly larger 1.129 * 10⁻⁶ m/s² that we got for the front side water). Using the same calculation as above, we get a Moon-caused tidal change in radius of 0.349 meter (instead of 0.367 meter as before) or 13.8 inches. That is only 0.018 meter or around 3/4 of an inch difference in the heights of the front and back (Moon-caused) tides. The rear side tide IS therefore slightly smaller, but not by very much! The same reasoning and calculation applies to the Sun-caused tidal bulges, so the total Spring Tide difference is actually around one inch. 
(I have never seen any other presentation explain why the rear-side tide is slightly smaller than the Moon-side tide is, or the calculation of exactly how much that difference is!) Again, please notice that we have not only proven that the rear side tide exists but even calculated its size, without having to use any centrifugal force! It is frightening that even a lot of school textbooks present a REALLY wrong explanation! Well, water has a lot of friction, both with itself (viscosity) and with the seafloor (drag), so the tidal bulges travel (relatively) at slower speeds across the oceans! Since the Earth rotates so fast, this results in the tidal bulge(s) lagging and being "dragged forward" of that location. Also, instead of the actual water shooting all over the oceans at extremely high speed, the actual water tends to move relatively little, and the wavefront of the tide can pass at such high speed (often around 700 mph in deep open ocean) without much noticeable effect of actual movement of water. (No sailor ever senses any water shooting past his ship at 700 mph! Instead, he basically senses nothing, as his ship is gradually raised up about a foot over a period of several hours.) The 700 mph speed cited here is dependent on the DEPTH of the water, by a well-known and rather simple formula giving the maximum speed of wave velocity due to the local water depth. For the common depths of the central oceans, that speed is around 700 mph. The fact that friction between the water and the seafloor causes the tidal bulges to be shifted away from the line between the Earth and Moon has some long-term effects. That friction is actually converting a tiny amount of the Earth's rotational energy into frictional heat energy, which is gradually slowing down the Earth's rotation! Our days are actually getting longer, just because of the Moon! But really slowly. A thousand years from now, the day will still be within one second of the length it is now. The long-term effects of this are discussed below. By the way, a "tiny amount" of the Earth's rotational energy is still quite impressive! The amount of frictional energy lost from the Earth's rotation is actually around 1.3 * 10²³ Joules/century (Handbook of Chemistry and Physics). That is 1.3 * 10²¹ watt-seconds per year, or 3.6 * 10¹⁴ kilowatt-hours each year. For comparison, figures provided in the World Almanac indicate that the entire electric consumption of all of the USA, including all residential, industrial, commercial, municipal and governmental usages, and waste, totaled around one one-hundredth of that amount (3.857 * 10¹² kWh) in 2003! (This fact, that natural ocean-seafloor frictional losses due to tidal motions are taking a hundred times the energy from the Earth's rotation as all the electricity we use, and still barely having any effect on the length of our day, inspired me to seriously research the concept of trying to capture some of the Earth's rotational energy to convert it into electricity. It would be an ideal source of electricity, with no global warming, no pollution, and no rapid consumption of coal, oil or nuclear just to produce electricity! A web-page on that concept is at Earth Spinning Energy - Perfect Energy Source) There are actually two situations regarding tides approaching land, which are actually closely related. Say that our one+ foot high tidal bulge of extra water is traveling at VERY high speed across the open ocean, and then it arrives at a Vee-shaped Bay, like the Bay of Fundy in Canada. 
Say that the widest part of such a bay is 100 miles wide, so we start out with a wave that is one foot high and 100 miles wide. Remember that water is incompressible! Now, follow this wave as it enters such a bay. As it gets around halfway up the bay, the bay is only half as wide, 50 miles. But there is still just as much water in the oncoming wave. In order for all that water to squeeze into a 50 mile width, do you see that it must now be TWO feet high? Continuing this logic, if the bay narrows to two miles wide at the end, all that water would have to still be there, and it would now have to be a fifty-foot-tall tide. There are additional compounding effects in that waves travel at speeds that depend on the depth of the water. As the waves are moving up such a bay, the water keeps getting shallower, and the wave velocity (called celerity) greatly slows. The very inner end of the Bay of Fundy actually has such tides! This "funneling effect" of the shape of that bay causes it. All during that process, though, there is a lot of friction, with the bottom of the bay, and among the waters and surf. These energy losses actually reduce the growth effect described above, and for bays that do not have such funnel shapes, tides in those bays are relatively moderate in size. But in those uniquely shaped bays, the tides are very impressive. At the very inner end, the tide comes in so quickly and so strongly that it even has a special name, a "tidal bore". A fairly large river at the end of the Bay of Fundy winds up flowing BACKWARDS briefly about twice each day due to the intensity of the tidal bore there! The other situation is when a tide from the open ocean runs into a continent straight on. Due to natural erosion and many other effects, most shorelines gradually slope outward, getting deeper and deeper as you go farther from the shore. When an approaching tidal bulge gets to this vertically tapering area, a situation a lot like the funnel-shaped-bay width effect occurs. The wave essentially gets lifted upward as it moves up the "ramp". The actual shape of the contours of the slopes near continents greatly affects this process. If the slope is too shallow or too steep, or if it has irregular slope, much more energy is lost in friction and minimal tides are seen at the shoreline. But for some locations, it explains why the measured tides are much higher than the open ocean tides are. Most of the East Coast of the US has tides that are several feet high. This discussion should help to explain the incredible complexity of the actual tides seen. Even worse, erosion and deposition are continuously modifying the contours of the ocean bottom, so these effects change over time. All these effects contribute to making the precise prediction of the size and arrival time of tides a very complex and imperfect science. Since tidal friction keeps slowing the Earth's spin while pushing the Moon farther away, the Earth's day will eventually lengthen until it matches the Moon's orbital period, so the day and the month will then be the same length. There is a way to calculate that eventual length of day/month. It relies on the fact that angular momentum must remain constant in a system that has no external torques applied to it. That means we must first calculate the total angular momentum of the Earth-Moon system. There are four separate components of it, the rotation and the revolution of each of the Earth and Moon. Angular momentum is the product of the rotational inertia (I) and the angular velocity (ω). For two of these situations, the revolution components, the rotational inertia is I = M * r². The r dimension is the distance between the barycenter of the system and the center of the individual body. 
(The Moon does NOT actually revolve around the Earth, but they BOTH revolve around each other, or actually around a location that is called the barycenter of the system. In the case of the Earth-Moon system, the barycenter happens to always be inside the Earth, roughly 1/4 of the way down toward the center of the Earth, on a line between the centers of the Earth and Moon.) For the Earth, this gives I = 2.435 * 10^38 kg-m^2. For the Moon, this gives I = 1.049 * 10^40 kg-m^2. For these two components, the angular velocity is one revolution per month, or 6.28 radians in 27.78 days, or 2.616 * 10^-6 radians/second. That makes these two (revolution) angular momentum components:

Earth = 6.371 * 10^32 kg-m^2/sec
Moon = 2.745 * 10^34 kg-m^2/sec

The rotational inertias of the Earth and Moon are somewhat more complicated to calculate, primarily because the density gets greater toward the center of each. You can find the derivation in some advanced geophysics texts. For the Earth, the currently accepted value is I = 8.07 * 10^37 kg-m^2. Since the Earth rotates once a day, its angular velocity is 7.27 * 10^-5 radians/second. That makes the Earth-rotational angular momentum:

Earth = 5.861 * 10^33 kg-m^2/sec

It turns out that the component due to the Moon's rotation is extremely small. The rotational inertia of the Moon is not known precisely, but it is around 6 * 10^34 kg-m^2. That makes the Moon-rotational angular momentum:

Moon = 1.6 * 10^29 kg-m^2/sec

Totaling up these four components, we have 3.391 * 10^34 kg-m^2/sec as the TOTAL angular momentum of the system. The laws of Physics say that this angular momentum must be conserved, must always exist with the same total. The four terms, written for the eventual locked-up state where the day and the month have the same length, are now:

Earthrev = m * R^2 * ω, or 6 * 10^24 kg * (D/60.37)^2 * 6.28 / (Day-length), or 1.03 * 10^22 * D^2 / (Day-length)
Moonrev = m * R^2 * ω, or 7.34 * 10^22 kg * ((D * 59.37)/60.37)^2 * 6.28 / (Day-length), or 4.458 * 10^23 * D^2 / (Day-length)
Earthrot = I * ω, or 8.07 * 10^37 * 6.28 / (Day-length), or 5.03 * 10^38 / (Day-length)
Moonrot = I * ω, or 6 * 10^34 * 6.28 / (Day-length), or 3.77 * 10^35 / (Day-length)

Totaling all four components, the two pairs can be combined: 4.561 * 10^23 * D^2 / (Day-length) + 5.03 * 10^38 / (Day-length). This total must equal the previously calculated total angular momentum of 3.391 * 10^34 kg-m^2/sec.

It turns out that Kepler discovered a relationship between the time interval of a revolution and the distance between the two bodies: the square of the time interval is proportional to the cube of the distance. In our problem, this lets the D^2 term be replaced by (Day-length)^1.333 * 4.589 * 10^8. Our equation then becomes 2.093 * 10^32 * (Day-length)^0.333 = 3.391 * 10^34, which solves to a Day-length of 4.208 * 10^6 seconds, which is equal to about 48.7 of our current days. This arrangement would have a spacing of 5.58 * 10^8 meters, or around 347,000 miles, as compared to the current 238,000 miles.

This then indicates that the Moon will continue to very slowly move outward, apparently for hundreds of millions of years, until it eventually gets to that distance. At the same time, the length of our day will continue to very slowly get longer until it becomes about 49 times as long as now! All these results are directly calculated from the basic laws of Gravitation! I have seen previous estimates where the final distance would be about 400,000 miles and the period would be 55 days.
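For readers who would rather let a computer do the bookkeeping, here is a minimal sketch of the same conservation-of-angular-momentum argument. It uses standard textbook values for G, the masses and the moments of inertia, which differ slightly from the rounded figures above, so it lands near, rather than exactly on, the 48.7-day result.

```python
# Conserve Earth-Moon angular momentum and apply Kepler's third law
# to estimate the final "locked" day/month length.  Sketch only.
import math

G   = 6.674e-11      # m^3 kg^-1 s^-2
M_E = 5.972e24       # Earth mass, kg
M_M = 7.342e22       # Moon mass, kg
I_E = 8.07e37        # Earth rotational inertia, kg m^2
I_M = 6.0e34         # Moon rotational inertia (approximate), kg m^2
D0  = 3.844e8        # current Earth-Moon distance, m

w_spin = 2 * math.pi / 86164.0              # Earth's current spin rate, rad/s
w_orb  = 2 * math.pi / (27.32 * 86400.0)    # current orbital rate, rad/s
mu     = M_E * M_M / (M_E + M_M)            # reduced mass: orbital term = mu*D^2*w

L_total = I_E * w_spin + I_M * w_orb + mu * D0**2 * w_orb

def L_locked(w):
    """Total angular momentum if both bodies spin and revolve at rate w."""
    D = (G * (M_E + M_M) / w**2) ** (1.0 / 3.0)   # Kepler's third law
    return (I_E + I_M) * w + mu * D**2 * w

# L_locked decreases as w increases in this range, so simple bisection works.
lo, hi = 1e-7, 1e-5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if L_locked(mid) > L_total:
        lo = mid
    else:
        hi = mid

w_final = 0.5 * (lo + hi)
D_final = (G * (M_E + M_M) / w_final**2) ** (1.0 / 3.0)
print(f"locked period ~ {2*math.pi/w_final/86400:.1f} current days")
print(f"separation    ~ {D_final/1609.34:.0f} miles")
# With these constants the result is roughly 47 days and ~345,000 miles,
# in the same ballpark as the 48.7 days / 347,000 miles derived in the text.
```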
Those earlier estimates were apparently calculated over a hundred years ago, by the mathematician Sir George Darwin. It is not obvious that anyone else has done these calculations since, which is a central reason for my composing this presentation. The possibility exists that the figure for the rotational inertia of the Earth was not known as well by Darwin as it is now. I believe that the above calculations reflect the current knowledge of the values.

There is also a factor regarding the Earth losing kinetic energy of rotation due to frictional heating of the ocean tides against the ocean bottoms and the continents. This is considered to be about 1.3 * 10^21 Joules (watt-seconds) per year (Handbook of Chemistry and Physics). (For comparison, this energy consumption is equal to approximately 100 times all the electricity used in the USA!) Researchers have rather accurately determined that the day is currently getting around 16 microseconds longer each year. Interestingly, the friction of the tides against the seafloor and the continents should by itself cause a slowing of around 22 microseconds each year, but there are some effects that actually cause a secular increase in the rate of spin of the Earth (which we shall not discuss here!). We therefore have a day that (should) increase in length, due to the tidal effect alone, by 22 microseconds each year.

The amount of friction between the water and the moving Earth under and in front of it depends on the speed of that differential motion. It is a reasonable assumption that it depends directly (proportionally) on the speed of that motion, that is, on the rotation rate of the Earth. This being the case, we can apply some simple Calculus, doing a little differentiating and integrating, to establish that there is a simple exponential relationship. Specifically, we find that ln(86400) - ln(86400 + 0.000022) = -k * t, where t is the number of seconds in a year, 3.1557 * 10^7. This lets us calculate the value of k to be 8.0806 * 10^-18 per second. This value uses base e, and we can convert to a formula which uses base 2 by simply dividing this value by ln(2), which then gives 1.16578 * 10^-17. We then have that the ratio of the Earth's spin rate (at a time t seconds from now, compared to now) is equal to 2^(-k * t).

To find how long it will be until the Earth would be rotating at half the current rate (twice as long a day) (due ONLY to the tidal friction effect!), just set this equal to 1/2. For this, clearly t must equal the inverse of k, so that we would have 2^(-1), which is 1/2. Therefore, that would be 1 / (1.16578 * 10^-17) seconds from now, which is 8.59138 * 10^16 seconds, or 2.722 billion years! Quite a while! If you're still around 2.7 billion years from now, each day figures to be twice as long as it is now!

Going back the other way, 2.7 billion years ago, the Earth was spinning much faster. Again, if we consider just the effects of tidal friction, and if the oceans and continents were similar to how they are now, the Earth would have been spinning twice as fast, with 730 days each year! Probably even faster, because the Moon was then closer, and therefore the tidal effect would have been still greater. If the oceans formed around 4 billion years ago, then this equation gives a t * k product of around 1.5, and our spin ratio would have been 2.83, so the Earth must have then spun in around 8.5 of our modern hours, nearly three times as fast as now.
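That exponential bookkeeping is easy to reproduce. A minimal sketch, assuming (as the text does) a 22-microsecond-per-year lengthening of the day and a braking effect proportional to the spin rate:

```python
# Exponential spin-down from tidal friction, assuming the braking effect
# is proportional to the spin rate (the text's simplifying assumption).
import math

SEC_PER_YEAR = 3.1557e7
day_now      = 86400.0        # seconds
lengthening  = 22e-6          # the day gains ~22 microseconds per year

# Decay constant for the spin rate (spin rate ~ 1 / day length); k < 0.
k = math.log(day_now / (day_now + lengthening)) / SEC_PER_YEAR   # per second

doubling_years = math.log(0.5) / k / SEC_PER_YEAR
print(f"day doubles in ~ {doubling_years/1e9:.2f} billion years")   # ~2.72

# Running the same relation backwards ~4 billion years (oceans forming):
day_then = day_now * math.exp(k * 4e9 * SEC_PER_YEAR)
print(f"day back then ~ {day_then/3600:.1f} hours")   # ~8.7 (the text's ~8.5)
```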
And probably even faster than that, because the Moon was certainly much closer then, and also those giant tidal bulges must have had tremendous friction when running into a continent at 3,000 miles per hour! I betcha that beach erosion must have been something really impressive then! Every four hours or so, a giant tidal wave (ACTUALLY a tidal wave, and not the mis-named tsunami!) would crash into each continent at the 3,000 mph rotation speed of the Earth. Wow!

In any case, as a bonus, we have shown that the early Earth must have rotated probably more than three times as rapidly as it does today, at least until the oceans formed and the Moon started having the tidal effects that have slowed our rotation down to what it is today. And also that the slowing effect is pretty slow: the exponential equation given above indicates that around 9 billion years from now, the Earth will have slowed to around 1/10 the current rate (36 REALLY long days every year!), and that would still be far from the eventual day length when the Earth and Moon will have finally gone into being locked up facing each other forever. Since we believe the Sun will have used up its Hydrogen fuel far before that, it would occur in total darkness!

If you are an inquisitive sort, you could use the angular momentum conservation analysis given way above to figure out just how close the Moon must have been at that time when the oceans were forming! I WILL give you a clue that the month was then around 5 modern Earth days long, and the Moon was HUGE in the sky, ballpark around 1/3 as far away as now! Do it! Show me that you can!

Keep in mind that the precise values were probably not actually these, as we have only considered the Moon's tidal effects and have ignored various other effects that affect the rotation rate of the Earth. In fact, we made a basic assumption regarding the effect BEING PROPORTIONAL TO THE RELATIVE SPEED. That is not necessarily true; for example, it might instead be proportional to the SQUARE of the relative speed, which would materially change these calculations. The general time scales would still be extremely long, though.

With really accurate modern equipment, it has been found that even the seasonal difference of the weight of snow accumulating near the pole in winter has a measurable effect on changing our total rotational inertia (as water, much of that mass would have been nearer the equator, slightly changing I), so we speed up and slow down for a lot of such reasons, but all of those effects are really pretty small! In addition to all this, we know that continents wander around due to Plate Tectonics over these same long time scales. It seems clear that there must have been times when the arrangement of continents was better or worse than it is now at interfering with the tidal flows around the Earth, which is another factor which cannot be decently estimated. So numerical results such as these are necessarily very approximate, which is part of the reason we chose to use the simplest of all assumptions regarding the relative speed factor. The reasoning presented above, based on that assumption, would indicate that the length of the day on Earth would have been 16 (modern) hours around 1.6 billion years ago. I have recently been told that someone has apparently done a similar analysis whose results imply that the day was 16 hours long around 0.9 billion years ago.
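If you want to check the clue given in that exercise (a month of roughly 5 modern days when the oceans were forming) without redoing the whole angular-momentum analysis, Kepler's third law alone relates the month length to the Moon's distance. A minimal sketch, assuming only the period ratio:

```python
# Kepler's third law: (T1/T2)^2 = (D1/D2)^3, so distance scales as T^(2/3).
month_then_days    = 5.0       # the clue given in the text
month_now_days     = 27.32
distance_now_miles = 238000

ratio = (month_then_days / month_now_days) ** (2.0 / 3.0)
print(f"Moon distance then ~ {ratio:.2f} of today's,"
      f" i.e. ~ {ratio * distance_now_miles:,.0f} miles")
# -> roughly a third of today's distance, as the text's clue says.
```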
I am not familiar with the specific analysis used to get that 0.9-billion-year figure, so it is not possible to comment on its likely accuracy. (The analysis given here implies that the day was around 19 [modern] hours long at that time.) I am not really concerned regarding the precise accuracy of either result, but rather have interest in the PROCESS of the analysis, so that any reader of this presentation should be capable of doing the complete analysis for him/herself.

A link to a thorough mathematical discussion of assorted Moon issues is at Origin of the Moon - A New Theory. A link to a slightly unrelated subject, that of trying to capture some rotational energy of the Earth for generating electric power, is at Earth Spinning Energy - Perfect Energy Source.

C Johnson, Theoretical Physicist, Physics Degree from Univ of Chicago
http://mb-soft.com/public/tides.html
4
July 12, 2014 How To Measure A Sun-Like Star's Age John P. Millis, Ph.D. for redOrbit.com - Your Universe Online

The holy grail of planetary astronomy is to find a solar system that mirrors our own. While a lot of effort has been placed on finding a planet with Earth-like properties – the right size, an atmosphere, the right temperature – of equal importance is the search for a Sun-like host star. Such a glowing orb would need to have a similar mass, temperature, and spectral type. These parameters are somewhat easy to measure, but of greater difficulty is measuring stellar age. Over time stars change in brightness; consequently, their interaction with the planets that orbit around them evolves as well.

A new technique has emerged that is helping researchers estimate the age of a star. Known as gyrochronology, the method has astronomers measure the changing brightness of a star caused by dark spots – known as starspots – crossing the stellar surface. From this the rotation speed of the star can be determined. This is important because stars initially spin more rapidly, and then slow as they age. The challenge is that the variation in the stellar brightness is small, typically less than a few percent, but luckily NASA's Kepler spacecraft has a sensitivity great enough to discern such minute changes.

By measuring the spins of stars in a 1-billion-year-old star cluster known as NGC 6811, a previous study led by astronomer Søren Meibom was able to create a calibration table correlating the spin rate with age for various star types. Prior to this new study, accepted for publication in The Astrophysical Journal Letters, researchers had only cataloged two Sun-like stars with measured spins and ages. But the forthcoming paper details 22 new objects meeting the criteria.

"We have found stars with properties that are close enough to those of the Sun that we can call them 'solar twins,'" says lead author Jose Dias do Nascimento of the Harvard-Smithsonian Center for Astrophysics (CfA). "With solar twins we can study the past, present, and future of stars like our Sun. Consequently, we can predict how planetary systems like our solar system will be affected by the evolution of their central stars."

Nascimento and his team also found that the Sun-like stars identified in their study had an average rotational period of about 21 days, similar to the 25-day rotation period of our Sun at its equator. (The Sun displays a differential rotation, meaning that it rotates faster at the equator than it does at its poles.) Unfortunately, none of the 22 stars in this study are known to have planets around them. But, as this work is expanded upon to include other stars, astronomers can begin understanding how the evolution of a star affects the evolution of the planets that orbit about it.

Image 2 (below): Finder chart for one of the most Sun-like stars examined in this study. KIC 12157617 is located in the constellation Cygnus, about halfway between the bright stars Vega and Deneb (two members of the Summer Triangle). An 8-inch or larger telescope is advised for trying to spot this 12th-magnitude star. Credit: CfA, created using StarWalkHD (VT 7.0.3) and MAST and the Virtual Observatory (VO)
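The basic idea behind gyrochronology can be illustrated with a toy version of the spin-down relation. The following is a minimal sketch assuming the classic Skumanich-style scaling (rotation period roughly proportional to the square root of age), anchored to the Sun; the calibration used in the actual study is more sophisticated and depends on the star's type.

```python
# Toy gyrochronology: for Sun-like stars, age scales roughly as the
# square of the rotation period (Skumanich-type relation).  Illustrative only.
SUN_PERIOD_DAYS = 25.0      # solar equatorial rotation period
SUN_AGE_GYR     = 4.6       # solar age in billions of years

def rough_age_gyr(rotation_period_days):
    """Crude age estimate for a Sun-like star from its spin period."""
    return SUN_AGE_GYR * (rotation_period_days / SUN_PERIOD_DAYS) ** 2

# The study's solar twins rotate in about 21 days on average:
print(f"~{rough_age_gyr(21.0):.1f} Gyr")   # a bit younger than the Sun
```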
http://www.redorbit.com/news/space/1113189918/astronomical-hunt-for-sun-like-stars-071214/
4.0625
New Carbon Nanomaterial
A simple chemical trick changes graphene into a compound with different electronic properties.

Graphene, a single layer of carbon atoms arranged in a honeycomb-like structure, has captured worldwide interest because of its attractive electronic properties. Now, by adding hydrogen to graphene, researchers at the University of Manchester, U.K., have made a new material that could prove useful for hydrogen storage and future carbon-based integrated circuits. While graphene is highly conductive, the new material, called graphane, is an insulator. The researchers can easily convert it back into conductive graphene by heating it to a high temperature.

Andre Geim, who led the research and first discovered the nanomaterial in 2004 with Kostya Novoselov, says that the findings suggest that graphene could be used as a base for making entirely new compounds. The hydrogenated compound graphane had been theoretically predicted before, but no one had attempted to create it. "What's important is that you can make another compound of [graphene] and can chemically tune its electronic properties to what you want so easily," Geim says. Adding hydrogen to graphene is just one possibility. Using other chemicals could yield materials with even more appealing properties, such as a semiconductor. "Hydrogenation may not be the end of the exploration; it may be just the beginning," says Yu-Ming Lin, a nanotechnology researcher at the IBM Thomas J. Watson Research Center, in Yorktown Heights, NY.

The latest findings are a step toward practical carbon-based integrated circuits, which could be used for low-power, ultrafast logic processors of the future. The findings also open up the possibility of using graphene for hydrogen storage in fuel cells. "Graphene is the ultimate surface because it doesn't have any bulk–only two faces," Geim says. This large surface area would make an excellent high-density storage material.

As described in Science, the researchers make graphane by exposing graphene pieces to hydrogen plasma–a mixture of hydrogen ions and electrons. Hydrogen atoms attach to each carbon atom in graphene, creating the new compound. Heating the piece to 450 °C for 24 hours reverts it back to the original state. Geim says that the researchers did not expect to be able to make the new substance so easily.

One of graphene's promises for electronics is that it can transport electrons very quickly. Transistors made from graphene could run hundreds of times faster than today's silicon transistors while consuming less power. Researchers are making progress toward such ultrahigh-radio-frequency transistors. But combining the transistors into circuits is a challenge because graphene is not an ideal semiconductor like silicon. Silicon transistors can be switched on and off between two different states of conductivity. Graphene, however, continues to conduct electrons in its off state. Circuits made from such transistors would be dysfunctional and waste a lot of energy.

One way to improve the on-off ratio in graphene transistors and bring them on par with those made of silicon is to cut the carbon sheet into narrow ribbons less than 100 nanometers wide. But making consistently good-quality ribbons is difficult. Altering the material chemically may be an easier way to tailor its electronic properties and get the properties sought, Geim says. And that means that researchers could fabricate graphene circuits with nanoscale transistors that are smaller and faster than those made from silicon.
“Imagine a wafer made entirely of graphene, which is highly conductive,” he says. “[You can] modify specific places on the wafer to make it semiconducting and make transistors at those places.” Areas between the transistors could be converted into insulating graphane, in order to isolate the transistors from each other. The new work is just a preliminary first step. The researchers still need to thoroughly test the electronic and mechanical properties of graphane. Converting the material into a decent semiconductor might take a lot more chemical tinkering. Besides, graphene researchers face one big challenge before they can do anything practical: coming up with an easy way to make large pieces of good-quality material in sufficient quantities. “For many applications, one needs a significant amount of material,” says Hannes Schniepp, who studies graphene at the College of William and Mary. “And that’s yet to be demonstrated for graphene or graphane.”
https://www.technologyreview.com/s/411829/new-carbon-nanomaterial/
4.21875
The vast majority of robots do have several qualities in common. First of all, almost all robots have a movable body. Some only have motorized wheels, and others have dozens of movable segments, typically made of metal or plastic. Like the bones in your body, the individual segments are connected together with joints. Robots spin wheels and pivot jointed segments with some sort of actuator. Some robots use electric motors and solenoids as actuators; some use a hydraulic system; and some use a pneumatic system (a system driven by compressed gases). Robots may use all these actuator types. A robot needs a power source to drive these actuators. Most robots either have a battery or they plug into the wall. Hydraulic robots also need a pump to pressurize the hydraulic fluid, and pneumatic robots need an air compressor or compressed air tanks. The actuators are all wired to an electrical circuit. The circuit powers electrical motors and solenoids directly, and it activates the hydraulic system by manipulating electrical valves. The valves determine the pressurized fluid's path through the machine. To move a hydraulic leg, for example, the robot's controller would open the valve leading from the fluid pump to a piston cylinder attached to that leg. The pressurized fluid would extend the piston, swiveling the leg forward. Typically, in order to move their segments in two directions, robots use pistons that can push both ways. The robot's computer controls everything attached to the circuit. To move the robot, the computer switches on all the necessary motors and valves. Most robots are reprogrammable -- to change the robot's behavior, you simply write a new program to its computer. Not all robots have sensory systems, and few have the ability to see, hear, smell or taste. The most common robotic sense is the sense of movement -- the robot's ability to monitor its own motion. A standard design uses slotted wheels attached to the robot's joints. An LED on one side of the wheel shines a beam of light through the slots to a light sensor on the other side of the wheel. When the robot moves a particular joint, the slotted wheel turns. The slots break the light beam as the wheel spins. The light sensor reads the pattern of the flashing light and transmits the data to the computer. The computer can tell exactly how far the joint has swiveled based on this pattern. This is the same basic system used in computer mice. These are the basic nuts and bolts of robotics. Roboticists can combine these elements in an infinite number of ways to create robots of unlimited complexity. In the next section, we'll look at one of the most popular designs, the robotic arm.
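The slotted-wheel sensor described above is an incremental optical encoder, and turning its pulse count into a joint angle is simple arithmetic. Here is a minimal sketch; the slot count and gear ratio are hypothetical illustration values, not figures from any particular robot.

```python
# Convert optical-encoder pulse counts into a joint angle.
# SLOTS_PER_REV and GEAR_RATIO are hypothetical example values.
SLOTS_PER_REV = 64      # light-beam interruptions per encoder-wheel revolution
GEAR_RATIO    = 50      # encoder-wheel turns per full joint revolution

def joint_angle_degrees(pulse_count):
    """Angle the joint has swiveled since the count was zeroed."""
    wheel_revs = pulse_count / SLOTS_PER_REV
    return (wheel_revs / GEAR_RATIO) * 360.0

print(joint_angle_degrees(1600))   # -> 180.0 degrees of joint travel
```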
http://science.howstuffworks.com/robot1.htm
4.0625
An Oil Tanker Runs Aground Off the California Coast; Plan and Execute an Appropriate ... High School Physics, Mathematics, Oceanography. Grades 9-12. California .... single vector (Students do exactly this in Part 2 of this lesson). In order to ...

This sample Mathematics lesson plan, captioned "CeNCOOS Classroom Series – Module 1", provides more information about math and related topics. To make sure that this data is what you need, please review it first by clicking the following link before you download the lesson plan. On the other hand, if you want to save this data directly to your computer, you can download the PDF lesson plan through the following download link.

Learners from pre-kindergarten through 8th grade can be described as young learners. Designing teaching notes for young learners calls for imagination in the choice of teaching practices because of their characteristics. It is better for a trainer to know the characteristics of young learners before designing a lesson plan for them. Wendy A. Scott and Lisbeth H. Ytreberg describe the characteristics of young learners; here we will discuss them in relation to classroom inspiration.

First, young learners like to co-operate and are physically active. That is why teachers have to design practices which involve the students as participants, for example work designed for groups or individuals, like interesting quizzes that involve physical activity.

Second, young learners are happy to play. They learn best when they are enjoying themselves. In relation to playing, we now have many improvements in Math teaching strategies for children through games, or even full-colored and entertaining worksheet designs. It is easy to find fun Math worksheets among online resources.

Third, young learners cannot concentrate for a long time. The teacher should have a good technique for dividing teaching time from the beginning to the end of the class. Young learners are happier with varied materials, and they cannot remember things for a long time if they are not repeated, so keep repeating the lesson in various fun ways.

Students of five, seven, or twelve years old will grow as thinkers who can be trusted and can take responsibility for class practices and routines. They will also learn how to play, organize the best way to run an activity, work with others, and learn from others. Moreover, young students still depend on the teacher. They should be guided and accompanied well, so keep guiding students in the best way. Teaching Math is fun, so let us make them happy to learn Math.
http://www.padjane.com/cencoos-classroom-series-module-1/
4.03125
In electronics, a linear regulator is a system used to maintain a steady voltage. The resistance of the regulator varies in accordance with the load, resulting in a constant output voltage. The regulating device is made to act like a variable resistor, continuously adjusting a voltage divider network to maintain a constant output voltage, and continually dissipating the difference between the input and regulated voltages as waste heat. By contrast, a switching regulator uses an active device that switches on and off to maintain an average value of output. Because the regulated voltage of a linear regulator must always be lower than the input voltage, efficiency is limited and the input voltage must be high enough to always allow the active device to drop some voltage.

Linear regulators may place the regulating device in parallel with the load (shunt regulator) or may place the regulating device between the source and the regulated load (a series regulator). Simple linear regulators may only contain a Zener diode and a series resistor; more complicated regulators include separate stages of voltage reference, error amplifier and power pass element. Because a linear voltage regulator is a common element of many devices, integrated circuit regulators are very common. Linear regulators may also be made up of assemblies of discrete solid-state or vacuum tube components.

The transistor (or other device) is used as one half of a potential divider to establish the regulated output voltage. The output voltage is compared to a reference voltage to produce a control signal to the transistor which will drive its gate or base. With negative feedback and a good choice of compensation, the output voltage is kept reasonably constant. Linear regulators are often inefficient: since the transistor is acting like a resistor, it will waste electrical energy by converting it to heat. In fact, the power loss due to heating in the transistor is the current multiplied by the voltage difference between input and output voltage. The same function can often be performed much more efficiently by a switched-mode power supply, but a linear regulator may be preferred for light loads or where the desired output voltage approaches the source voltage. In these cases, the linear regulator may dissipate less power than a switcher. The linear regulator also has the advantage of not requiring magnetic devices (inductors or transformers), which can be relatively expensive or bulky, being often of simpler design, and being quieter. Some designs of linear regulators use only transistors, diodes and resistors, which are easier to fabricate into an integrated circuit, further reducing their weight, footprint on a PCB, and price.

All linear regulators require an input voltage at least some minimum amount higher than the desired output voltage. That minimum amount is called the dropout voltage. For example, a common regulator such as the 7805 has an output voltage of 5V, but can only maintain this if the input voltage remains above about 7V, before the output voltage begins sagging below the rated output. Its dropout voltage is therefore 7V − 5V = 2V. When the supply voltage is less than about 2V above the desired output voltage, as is the case in low-voltage microprocessor power supplies, so-called low dropout regulators (LDOs) must be used.
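Because the pass element drops the entire input-output difference at the full load current, the waste heat and efficiency follow from simple arithmetic. Here is a minimal sketch of that bookkeeping; the 12 V input, 1 A load and 5 mA quiescent current are example values for a generic 7805-style part, not datasheet figures.

```python
# Power dissipation and efficiency of a linear (series) regulator.
def linear_regulator_stats(v_in, v_out, i_load_a, i_quiescent_a=0.005):
    """Return (watts dissipated in the regulator, overall efficiency)."""
    p_out  = v_out * i_load_a
    p_in   = v_in * (i_load_a + i_quiescent_a)
    p_diss = p_in - p_out
    return p_diss, p_out / p_in

p, eff = linear_regulator_stats(v_in=12.0, v_out=5.0, i_load_a=1.0)
print(f"dissipation ~ {p:.1f} W, efficiency ~ {eff:.0%}")
# Dropping 12 V to 5 V at 1 A burns about 7 W in the regulator --
# which is why a heat sink (or a switching regulator) is often preferred.
```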
When the output regulated voltage must be higher than the available input voltage, no linear regulator will work (not even a low dropout regulator). In this situation, a switching regulator of the "boost" type must be used. Most linear regulators will continue to provide some output voltage, approximately the dropout voltage below the input voltage, for inputs below the nominal output voltage, until the input voltage drops significantly.

Linear regulators exist in two basic forms: shunt regulators and series regulators. Most linear regulators have a maximum rated output current. This is generally limited by either the power dissipation capability or the current carrying capability of the output transistor.

The shunt regulator works by providing a path from the supply voltage to ground through a variable resistance (the main transistor is in the "bottom half" of the voltage divider). The current through the shunt regulator is diverted away from the load and flows uselessly to ground, making this form usually less efficient than the series regulator. It is, however, simpler, sometimes consisting of just a voltage-reference diode, and is used in very low-powered circuits where the wasted current is too small to be of concern. This form is very common for voltage reference circuits. A shunt regulator can usually only sink (absorb) current.

Series regulators are the more common form. The series regulator works by providing a path from the supply voltage to the load through a variable resistance (the main transistor is in the "top half" of the voltage divider). The power dissipated by the regulating device is equal to the power supply output current times the voltage drop in the regulating device. A series regulator can usually only source (supply) current.

Simple shunt regulator

The image shows a simple shunt voltage regulator that operates by way of the Zener diode's action of maintaining a constant voltage across itself when the current through it is sufficient to take it into the Zener breakdown region. The resistor R1 supplies the Zener current as well as the load current IR2 (R2 is the load). R1 can be calculated as R1 = (VS − VZ) / (IZ + IR2), where VS is the supply voltage, VZ is the Zener voltage, IZ is the Zener current, and IR2 is the required load current. This regulator is used for very simple low-power applications where the currents involved are very small and the load is permanently connected across the Zener diode (such as voltage reference or voltage source circuits). Once R1 has been calculated, removing R2 will allow the full load current (plus the Zener current) through the diode and may exceed the diode's maximum current rating, thereby damaging it. The regulation of this circuit is also not very good, because the Zener current (and hence the Zener voltage) will vary depending on VS and inversely depending on the load current. In some designs, the Zener diode may be replaced with another similarly functioning device, especially in an ultra-low-voltage scenario, like (under forward bias) several normal diodes or LEDs in series.

Simple series regulator

Adding an emitter follower stage to the simple shunt regulator forms a simple series voltage regulator and substantially improves the regulation of the circuit. Here, the load current IR2 is supplied by the transistor whose base is now connected to the Zener diode. Thus the transistor's base current (IB) forms the load current for the Zener diode and is much smaller than the current through R2.
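Before continuing with the series regulator, here is a quick numerical check of the shunt-regulator sizing formula above. A minimal sketch with illustrative supply, Zener and current values (not taken from any particular datasheet):

```python
# Size R1 for a simple Zener shunt regulator and check the no-load case.
def shunt_r1(v_supply, v_zener, i_zener, i_load):
    """R1 = (Vs - Vz) / (Iz + Iload): it must carry both currents."""
    return (v_supply - v_zener) / (i_zener + i_load)

v_s, v_z    = 9.0, 5.1        # volts (example values)
i_z, i_load = 0.005, 0.020    # 5 mA Zener bias, 20 mA load

r1 = shunt_r1(v_s, v_z, i_z, i_load)
print(f"R1 ~ {r1:.0f} ohms")

# If the load is disconnected, ALL of that current flows through the Zener,
# which is exactly the failure mode the text warns about:
i_zener_no_load = (v_s - v_z) / r1
print(f"Zener current with no load ~ {i_zener_no_load*1000:.0f} mA")
```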
This emitter-follower regulator is classified as "series" because the regulating element, viz. the transistor, appears in series with the load. R1 sets the Zener current (IZ) and is determined as R1 = (VS − VZ) / (K * IB), where VS is the supply voltage, VZ is the Zener voltage, IB is the transistor's base current, and K = 1.2 to 2 (to ensure that R1 is low enough for adequate IB). Here IB = IR2 / hFE(min), where IR2 is the required load current (which is also the transistor's emitter current, assumed to be equal to the collector current) and hFE(min) is the minimum acceptable DC current gain for the transistor.

This circuit has much better regulation than the simple shunt regulator, since the base current of the transistor forms a very light load on the Zener, thereby minimising variation in Zener voltage due to variation in the load. Note that the output voltage will always be about 0.65V less than the Zener voltage due to the transistor's VBE drop. Although this circuit has good regulation, it is still sensitive to load and supply variation. This can be resolved by incorporating negative feedback circuitry into it. This regulator is often used as a "pre-regulator" in more advanced series voltage regulator circuits. The circuit is readily made adjustable by adding a potentiometer across the Zener and moving the transistor base connection from the top of the Zener to the pot wiper. It may be made step-adjustable by switching in different Zeners. Finally it is occasionally made micro-adjustable by adding a low-value pot in series with the Zener; this allows a little voltage adjustment, but degrades regulation (see also capacitance multiplier).

"Fixed" three-terminal linear regulators are commonly available to generate fixed voltages of plus 3 V, and plus or minus 5 V, 6 V, 9 V, 12 V, or 15 V, when the load is less than 1.5 A. The "78xx" series (7805, 7812, etc.) regulate positive voltages while the "79xx" series (7905, 7912, etc.) regulate negative voltages. Often, the last two digits of the device number are the output voltage (e.g., a 7805 is a +5 V regulator, while a 7915 is a −15 V regulator). There are variants on the 78xx series ICs, such as 78L and 78S, some of which can supply up to 2 Amps.

Adjusting fixed regulators

By adding another circuit element to a fixed voltage IC regulator, it is possible to adjust the output voltage. Two example methods are:
- A Zener diode or resistor may be added between the IC's ground terminal and ground. Resistors are acceptable where ground current is constant, but are ill-suited to regulators with varying ground current. By switching in different Zener diodes, diodes or resistors, the output voltage can be adjusted in a step-wise fashion.
- A potentiometer can be placed in series with the ground terminal to increase the output voltage variably. However, this method degrades regulation, and is not suitable for regulators with varying ground current.

An adjustable regulator generates a fixed low nominal voltage between its output and its adjust terminal (equivalent to the ground terminal in a fixed regulator). This family of devices includes low power devices like the LM723 and medium power devices like the LM317 and L200. Some of the variable regulators are available in packages with more than three pins, including dual in-line packages. They offer the capability to adjust the output voltage by using external resistors of specific values. For output voltages not provided by standard fixed regulators and load currents of less than 7 A, commonly available adjustable three-terminal linear regulators may be used.
The LM317 series (+1.25V) regulates positive voltages while the LM337 series (−1.25V) regulates negative voltages. The adjustment is performed by constructing a potential divider with its ends between the regulator output and ground, and its centre-tap connected to the 'adjust' terminal of the regulator. The ratio of resistances determines the output voltage using the same feedback mechanisms described earlier.

Single-IC dual tracking adjustable regulators are available for applications such as op-amp circuits needing matched positive and negative DC supplies. Some have selectable current limiting as well. Some regulators require a minimum load.

Linear IC voltage regulators may include a variety of protection methods:
- Current limiting such as constant-current limiting or foldback
- Thermal shutdown
- Safe operating area protection
Sometimes external protection is used, such as crowbar protection.

Using a linear regulator

Linear regulators can be constructed using discrete components but are usually encountered in integrated circuit forms. The most common linear regulators are three-terminal integrated circuits in the TO-220 package. Common solid-state series voltage regulators are the LM78xx (for positive voltages), LM79xx (for negative voltages), and the AMS1117 (low drop out, for lower positive voltages than the LM78xx allows) series. Common fixed voltages are 1.8V, 3.3V (both for low-voltage CMOS logic circuits), 5 V (for transistor-transistor logic circuits) and 12 V (for communications circuits and peripheral devices such as disk drives). In fixed voltage regulators the reference pin is tied to ground, whereas in variable regulators the reference pin is connected to the centre point of a fixed or variable voltage divider fed by the regulator's output. A variable voltage divider such as a potentiometer allows the user to adjust the regulated voltage.

- Voltage regulator
- Bandgap voltage reference
- List of LM-series integrated circuits
- Brokaw bandgap reference
- Switched-mode power supply
- Low-dropout regulator

- When I designed my AM pocket radio powered by a 3.7 V lithium-ion battery, the 1.5–1.8 V power supply required by the TA7642 chip was provided using a Zener-style regulator with a red LED (forward voltage of 1.7 V) in forward bias in place of the Zener diode. This LED also doubled as the power indicator.
- Datasheet of L78xx, showing a model that can output 2 A
- "Zener regulator" at Hyperphysics
- Linear voltage regulator tutorial video in HD; includes practical examples.
- ECE 327: LM317 Bandgap Voltage Reference Example — Brief explanation of the temperature-independent bandgap reference circuit within the LM317.
- ECE 327: Procedures for Voltage Regulators Lab — Gives schematics, explanations, and analyses for Zener shunt regulator, series regulator, feedback series regulator, feedback series regulator with current limiting, and feedback series regulator with current foldback. Also discusses the proper use of the LM317 integrated circuit bandgap voltage reference and bypass capacitors.
- ECE 327: Report Strategies for Voltage Regulators Lab — Gives more-detailed quantitative analysis of behavior of several shunt and series regulators in and out of normal operating ranges.
- "7A SPX1580 regulator"
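As a numerical footnote to the adjustable-regulator description above, here is a minimal sketch of the usual LM317-style output-voltage relation. The resistor values are illustrative only; consult an actual datasheet for design limits and tolerances.

```python
# LM317-style adjustable regulator: Vout ~ Vref * (1 + R2/R1) + Iadj * R2,
# with Vref about 1.25 V and Iadj (~50 uA) usually negligible.
def lm317_vout(r1_ohms, r2_ohms, vref=1.25, i_adj=50e-6):
    """Output voltage set by the R1/R2 potential divider on the adjust pin."""
    return vref * (1.0 + r2_ohms / r1_ohms) + i_adj * r2_ohms

print(f"{lm317_vout(240, 720):.2f} V")   # ~5.04 V with a 240/720 ohm pair
```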
https://en.wikipedia.org/wiki/Linear_regulator
4.3125
Glaciers form when multiple snowfalls in mountainous or polar regions turn into ice that flows across the land, powered by gravity. They are present throughout the year. Glaciers are so powerful they can change the shape of mountain valleys. As a glacier flows down a valley it wears away the rock and changes it from a typical V-shape, created by river erosion, to a U-shape. This characteristic U-shape makes it easy to spot ancient glacial valleys. Scientists are particularly interested in merged large glaciers called ice caps or ice sheets in Greenland and Antarctica. They are investigating how manmade climate change is affecting these vast reservoirs of fresh water.

Image: Wright Glacier in Alaska, extending from the Juneau Ice Field to a glacial lake (credit: Gregory G. Dimijian, M.D./SPL)

See amazing footage of a glacier moving towards the sea. Frozen Planet programme maker Jeff Wilson describes a fast-moving glacier moving towards the sea.

Manmade tunnels lead to the Svartisen glacier's beautiful underside. Dr Iain Stewart follows manmade tunnels that take him deep below the Svartisen glacier in Norway to see why moving sheets of ice are so powerful.

Sound recordings from deep within the ice reveal movement. Dr Iain Stewart introduces sound recordings made deep within glaciers. It is possible to hear the glaciers creak and groan as they move downhill.

Ice that flows across the land shapes our planet's surface. Dr Iain Stewart explains how annual snowfall accumulations are gradually compacted to form layers of hard ice that in turn form glaciers.

Many of the world's glaciers are retreating. Dr Iain Stewart describes the retreat of many of the Earth's glaciers and the break up of polar ice sheets.

A glacier is a persistent body of dense ice that is constantly moving under its own weight; it forms where the accumulation of snow exceeds its ablation (melting and sublimation) over many years, often centuries. Glaciers slowly deform and flow due to stresses induced by their weight, creating crevasses, seracs, and other distinguishing features. They also abrade rock and debris from their substrate to create landforms such as cirques and moraines. Glaciers form only on land and are distinct from the much thinner sea ice and lake ice that form on the surface of bodies of water.

On Earth, 99% of glacial ice is contained within vast ice sheets in the polar regions, but glaciers may be found in mountain ranges on every continent except Australia, and on a few high-latitude oceanic islands. Between 35°N and 35°S, glaciers occur only in the Himalayas, Andes, Rocky Mountains, a few high mountains in East Africa, Mexico, New Guinea and on Zard Kuh in Iran. Glaciers cover about 10 percent of Earth's land surface. Continental glaciers cover nearly 5 million square miles or about 98 percent of Antarctica's 5.1 million square miles, with an average thickness of 7,000 feet (2,100 m). Greenland and Patagonia also have huge expanses of continental glaciers.

Glacial ice is the largest reservoir of freshwater on Earth. Many glaciers from temperate, alpine and seasonal polar climates store water as ice during the colder seasons and release it later in the form of meltwater as warmer summer temperatures cause the glacier to melt, creating a water source that is especially important for plants, animals and human uses when other sources may be scant. Within high altitude and Antarctic environments, the seasonal temperature difference is often not sufficient to release meltwater.
Because glacial mass is affected by long-term climate changes, e.g., precipitation, mean temperature, and cloud cover, glacial mass changes are considered among the most sensitive indicators of climate change and are a major source of variations in sea level. A large piece of compressed ice, or a glacier, appears blue, as large quantities of water appear blue, because water molecules absorb other colors more efficiently than blue. The other reason for the blue color of glaciers is the lack of air bubbles: air bubbles, which give a white color to ice, are squeezed out by pressure, increasing the density of the ice.
http://www.bbc.co.uk/science/earth/water_and_ice/glacier
4.5
Like other conic sections, hyperbolas can be created by "slicing" a cone and looking at the cross-section. Unlike other conics, hyperbolas actually require 2 cones stacked on top of each other, point to point. The shape is the result of effectively creating a parabola out of both cones at the same time. So the question is, should hyperbolas really be considered a shape all their own? Or are they just two parabolas graphed at the same time? Could "different" shapes be made from any of the other conic sections if two cones were used at the same time?

- Khan Academy: Conic Sections Hyperbolas 2

In addition to their focal property, hyperbolas also have another interesting geometric property. Unlike a parabola, a hyperbola becomes infinitesimally close to a certain line as the x− or y−coordinates approach infinity. What do we mean by "infinitesimally close"? Here we mean two things: 1) The further you go along the curve, the closer you get to the asymptote, and 2) If you name a distance, no matter how small, eventually the curve will be that close to the asymptote. Or, using the language of limits: as we go further from the vertex of the hyperbola, the limit of the distance between the hyperbola and the asymptote is 0. These lines are called asymptotes. There are two asymptotes, and they cross at the point at which the hyperbola is centered.

For a hyperbola of the form x^2/a^2 − y^2/b^2 = 1, the asymptotes are the lines y = (b/a)x and y = −(b/a)x. For a hyperbola of the form y^2/a^2 − x^2/b^2 = 1, the asymptotes are the lines y = (a/b)x and y = −(a/b)x. (For a shifted hyperbola, the asymptotes shift accordingly.)

Graph the following hyperbola, drawing its foci and asymptotes and using them to create a better drawing: 9x^2 − 36x − 4y^2 − 16y − 16 = 0.

First, we put the hyperbola into the standard form: (x − 2)^2/4 − (y + 2)^2/9 = 1. So a = 2, b = 3 and c = √(4 + 9) = √13. The hyperbola is horizontally oriented, centered at the point (2, −2), with foci at (2 + √13, −2) and (2 − √13, −2). After taking shifting into consideration, the asymptotes are the lines y + 2 = (3/2)(x − 2) and y + 2 = −(3/2)(x − 2). So graphing the vertices and a few points on either side, we see the hyperbola looks something like this:

Graph the following hyperbola, drawing its foci and asymptotes and using them to create a better drawing: 16x^2 − 96x − 9y^2 − 36y − 84 = 0

Graph the following hyperbola, drawing its foci and asymptotes and using them to create a better drawing: y^2 − 14y − 25x^2 − 200x − 376 = 0

Concept question wrap-up

Hyperbolas are considered different shapes, because there are specific behaviors that are unique to hyperbolas. Also, though hyperbolas are the result of dual parabolas, none of the other conics really create unique shapes with dual cones - just double figures - and in any case require multiple "slices".

A hyperbola is a conic section where the cutting plane intersects both sides of the cone, resulting in two infinite "U"-shaped curves. An unbounded shape is so large that no circle, no matter how large, can enclose the shape. An asymptote is a line which a curve approaches as the curve and the line approach infinity, eventually becoming closer than any given positive number. A perpendicular hyperbola is a hyperbola whose asymptotes are perpendicular.

1) Find the equation for a hyperbola with asymptotes of slopes 5/12 and −5/12, and foci at points (2, 11) and (2, 1).

2) A hyperbola with perpendicular asymptotes is called perpendicular. What does the equation of a perpendicular hyperbola look like?

3) Find an equation of the hyperbola with x-intercepts at x = −5 and x = 3, and foci at (−6, 0) and (4, 0).
2) The slopes of perpendicular lines are negative reciprocals of each other. The asymptote slopes are b/a and −b/a, so (b/a)(−b/a) = −1, i.e., b/a = a/b, which, for positive a and b, means a = b.

3) To find the equation: The foci have the same y-coordinates, so this is a left/right hyperbola with the center, foci, and vertices on a line paralleling the x-axis. Since it is a left/right hyperbola, the y part of the equation will be negative and the equation will lead with the x^2 term (since the leading term is positive by convention and the squared terms must have different signs if this is a hyperbola). The center is midway between the foci, so the center is (h, k) = (−1, 0). The foci are 5 units to either side of the center, so c = 5 → c^2 = 25. The x-intercepts are 4 units to either side of the center, and the foci are on the x-axis, so the intercepts must be the vertices, giving a = 4 → a^2 = 16. Use the Pythagorean relation a^2 + b^2 = c^2 to get b^2 = 25 − 16 = 9. Substitute the calculated values into the standard form (x − h)^2/a^2 − (y − k)^2/b^2 = 1 to get (x + 1)^2/16 − y^2/9 = 1.

Find the equations of the asymptotes. Graph the hyperbolas, give the equations of the asymptotes, and use the asymptotes to enhance the accuracy of your graph.
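The completing-the-square bookkeeping in these examples is mechanical enough to script. Here is a minimal sketch; it assumes the equation is already given in the form Ax^2 + Cy^2 + Dx + Ey + F = 0 with A and C of opposite sign, as in the exercises above.

```python
# Extract center, a, b, c and asymptote slopes from
# A*x^2 + C*y^2 + D*x + E*y + F = 0 when it describes a hyperbola.
import math

def hyperbola_facts(A, C, D, E, F):
    """Center, a, b, c and |asymptote slope| via completing the square."""
    h = -D / (2 * A)
    k = -E / (2 * C)
    rhs = A * h * h + C * k * k - F       # A(x-h)^2 + C(y-k)^2 = rhs
    if rhs / A > 0:                       # opens left/right: x^2 term positive
        a2, b2 = rhs / A, -rhs / C
        slope = math.sqrt(b2 / a2)        # asymptote slopes are +/- b/a
    else:                                 # opens up/down: y^2 term positive
        a2, b2 = rhs / C, -rhs / A
        slope = math.sqrt(a2 / b2)        # asymptote slopes are +/- a/b
    c = math.sqrt(a2 + b2)
    return (h, k), math.sqrt(a2), math.sqrt(b2), c, slope

# The worked example from the text: 9x^2 - 36x - 4y^2 - 16y - 16 = 0
print(hyperbola_facts(9, -4, -36, -16, -16))
# -> center (2, -2), a = 2, b = 3, c = sqrt(13), asymptote slopes +/- 3/2
```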
http://www.ck12.org/book/CK-12-Math-Analysis-Concepts/r4/section/6.7/
4.03125
committee, one or more persons appointed or elected to consider, report on, or take action on a particular matter. Because of the advantages of a division of labor, legislative committees of various kinds have assumed much of the work of legislatures in many nations. Standing committees are appointed in both houses of the U.S. Congress at the beginning of every session to deal with bills in the different specific classes. Important congressional committees include ways and means; appropriations; commerce; armed services; foreign relations; and judiciary. The number, but not the scope, of the committees was much reduced in 1946. Since then there has been a large increase in the number of subcommittees, which have become steadily more important. Members of committees are in effect elected by caucuses of the two major parties in Congress; the majority party is given the chairmanship and majority on each committee, and chairmanships, as well as membership on important committees, are influenced by seniority, but seniority is no longer the sole deciding factor and others may override it. The presiding officer of either house may appoint special committees, including those of investigation, which have the power to summon witnesses and compel the submission of evidence. The presiding officers also appoint committees of conference to obtain agreement between the two houses on the content of bills of the same general character. The U.S. legislative committee system conducts most congressional business through its powers of scrutiny and investigation of government departments. In France the constitution of the Fifth Republic permits each legislative chamber to have no more than six standing committees. Because these committees are large, unofficial committees have formed that do much of the real work of examining bills. As in the U.S. government, these committees are quite powerful because of their ability to delay legislation. In Great Britain devices such as committees of the whole are used in the consideration of money bills and there are large standing committees of the House of Commons, but committees have not been very important in the British legislature. Recently attempts have been made to form specialized committees. See L. A. Froman, The Congressional Process (1967); G. Goodwin, Jr., The Little Legislatures (1970); Congressional Quarterly, Guide to Congress (3d ed. 1982). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
http://www.factmonster.com/encyclopedia/history/committee.html
4.21875
Gary Rockswold teaches algebra in context, answering the question, “Why am I learning this?” By experiencing math through applications, students see how it fits into their lives, and they become motivated to succeed. Rockswold’s focus on conceptual understanding helps students make connections between the concepts and as a result, students see the bigger picture of math and are prepared for future courses. This streamlined text covers linear, quadratic, nonlinear, exponential, and logarithmic functions and systems of equations and inequalities, which gets to the heart of what students need from this course. A more comprehensive college algebra text is also available.

Table of Contents
1. Introduction to Functions and Graphs
1.1 Numbers, Data, and Problem Solving
1.2 Visualizing and Graphing Data
1.3 Functions and Their Representations
1.4 Types of Functions
1.5 Functions and Their Rates of Change
2. Linear Functions and Equations
2.1 Linear Functions and Models
2.2 Equations of Lines
2.3 Linear Equations
2.4 Linear Inequalities
2.5 Absolute Value Equations and Inequalities
3. Quadratic Functions and Equations
3.1 Quadratic Functions and Models
3.2 Quadratic Equations and Problem Solving
3.3 Complex Numbers
3.4 Quadratic Inequalities
3.5 Transformations of Graphs
4. More Nonlinear Functions and Equations
4.1 More Nonlinear Functions and Their Graphs
4.2 Polynomial Functions and Models
4.3 Division of Polynomials
4.4 Real Zeros of Polynomial Functions
4.5 The Fundamental Theorem of Algebra
4.6 Rational Functions and Models
4.7 More Equations and Inequalities
4.8 Radical Equations and Power Functions
5. Exponential and Logarithmic Functions
5.1 Combining Functions
5.2 Inverse Functions and Their Representations
5.3 Exponential Functions and Models
5.4 Logarithmic Functions and Models
5.5 Properties of Logarithms
5.6 Exponential and Logarithmic Equations
5.7 Constructing Nonlinear Models
6. Systems of Equations and Inequalities
6.1 Functions and Systems of Equations in Two Variables
6.2 Systems of Inequalities in Two Variables
6.3 Systems of Linear Equations in Three Variables
6.4 Solutions to Linear Systems Using Matrices
6.5 Properties and Applications of Matrices
6.6 Inverses of Matrices
Reference: Basic Concepts from Algebra and Geometry
R.1 Formulas from Geometry
R.2 Integer Exponents
R.3 Polynomial Expressions
R.4 Factoring Polynomials
R.5 Rational Expressions
R.6 Radical Notation and Rational Exponents
R.7 Radical Expressions
Appendix A: Using the Graphing Calculator
Appendix B: A Library of Functions
Appendix C: Partial Fractions

Digital Choices

MyLab & Mastering with Pearson eText is a complete digital substitute for a print value pack at a lower price.

MyLab & Mastering

MyLab & Mastering products deliver customizable content and highly personalized study paths, responsive learning tools, and real-time evaluation and diagnostics. MyLab & Mastering products help move students toward the moment that matters most—the moment of true understanding and learning.

$99.95 | ISBN-13: 978-0-321-72670-4

$60.50 | ISBN-13: 978-0-321-73052-7

With VitalSource eTextbooks, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs. Access your course materials on iPad, Android and Kindle devices with VitalSource Bookshelf, the textbook e-reader that helps you read, study and learn brilliantly.
Features include:
- See all of your eTextbooks at a glance and access them instantly anywhere, anytime from your Bookshelf - no backpack required.
- Multiple ways to move between pages and sections including linked Table of Contents and Search make navigating eTextbooks a snap.
- Highlight text with one click in your choice of colors. Add notes to highlighted passages. Even subscribe to your classmates' and instructors' highlights and notes to view in your book.
- Scale images and text to any size with multi-level zoom without losing page clarity. Customize your page display and reading experience to create a personal learning experience that best suits you.
- Print only the pages you need within limits set by publisher
- Supports course materials that include rich media and interactivity like videos and quizzes
- Easily copy/paste text passages for homework and papers
- Supports assistive technologies for accessibility by vision and hearing impaired users

$73.99 | ISBN-13: 978-0-321-83073-9

Loose Leaf Version

Books a la Carte are less-expensive, loose-leaf versions of the same textbook.

$122.67 | ISBN-13: 978-0-321-72672-8
http://www.mypearsonstore.com/bookstore/essentials-of-college-algebra-with-modeling-and-visualization-0321715284
4.125
Snow and Ice Overview

A tenth of Earth's land surface is permanently occupied by ice sheets or glaciers, but the domain of the cryosphere – that part of the world where snow and ice can form – extends three times further still. The cryosphere is an important regulator of global climate, its bright albedo reflecting sunlight back to space and its presence influencing regional weather and global ocean currents. Some 77% of the globe's freshwater is bound up within the ice – but the cryosphere appears disproportionately sensitive to the effects of global warming.

Imaging radar systems like those of ERS and Envisat pierce through clouds or seasonal darkness to chart ice extent, possessing sensitivity to different ice types – from kilometers-thick ice sheets to new-born floating 'pancake' ice – supplemented by optical observations. Radar altimeters gather data on changing ice height and mass: in 2010 ESA launched CryoSat-2 as the first altimetry mission specifically designed to accurately measure the thickness of sea ice and land ice margins.

Snow and Ice News

21 December 2015 How can access to Sentinel data increase Canada's ability to offer improved information on sea ice?

17 December 2015 Using data from ESA's CryoSat mission, scientists have produced the best maps yet of the changing height of Earth's biggest ice sheets.

Specific Topics on Snow and Ice

The enormous permafrost areas of the world show seasonal change which has an impact not only on vegetation and hydrological cycles, but also on the planning and safety of the huge gas and oil pipelines which traverse these areas.

Sea ice is formed from ocean water that freezes, whether along coasts or to the sea floor (fast ice), floating on the surface (drift ice), or packed together (pack ice). The most important areas of pack ice are the polar ice packs. Because vast amounts of water are added to or removed from the oceans and atmosphere, the behavior of the polar ice packs has a significant impact on global changes in climate.
https://earth.esa.int/web/guest/earth-topics/snow-and-ice
4.03125
MARK J. ROZELL The Constitutional Convention of 1787 formally created the American presidency. George Washington put the office into effect. Indeed, Washington was very cognizant of the fact that his actions as president would establish the office and have consequences for his successors. The first president’s own words evidenced how conscious he was of the crucial role he played in determining the makeup of the office of the presidency. He had written to James Madison, ‘‘As the first of everything, in our situation will serve to establish a precedent, it is devoutly wished on my part that these precedents be fixed on true principles.’’ 1 In May 1789 Washington wrote, ‘‘Many things which appear of little importance in themselves and at the beginning, may have great and durable consequences from their having been established at the commencement of a new general government.’’ 2 All presidents experience the burdens of the office. Washington’s burdens were unique in that only he had the responsibility to establish the office in practice. The costs of misjudgments to the future of the presidency were great. The parameters of the president’s powers remained vague when Washington took office. The executive article of the Constitution (Article II) lacked the specificity of the legislative article (Article I), leaving the first occupant of the presidency imperfect guidance on the scope and limits of his authority. Indeed, it may very well have been because Washington was the obvious choice of first occupant of the office that the Constitutional Convention officers left many of the powers of the presidency vague. Willard Sterne Randall writes, ‘‘No doubt no other president would have been trusted with such latitude.’’ 3 Acutely aware of his burdens, Washington set out to exercise his powers prudently yet firmly when necessary. Perhaps Washington’s greatest legacy to the presidency was his substantial
https://www.questia.com/read/101007847/george-washington-and-the-origins-of-the-american
4
Impressionism developed in France in the nineteenth century and is based on the practice of painting out of doors and spontaneously 'on the spot' rather than in a studio from sketches. Main impressionist subjects were landscapes and scenes of everyday life. Impressionism was developed by Claude Monet and other Paris-based artists from the early 1860s. (Though the process of painting on the spot can be said to have been pioneered in Britain by John Constable in around 1813–17 through his desire to paint nature in a realistic way). Instead of painting in a studio, the impressionists found that they could capture the momentary and transient effects of sunlight by working quickly, in front of their subjects, in the open air (en plein air) rather than in a studio. This resulted in a greater awareness of light and colour and the shifting pattern of the natural scene. Brushwork became rapid and broken into separate dabs in order to render the fleeting quality of light. The first group exhibition was in Paris in 1874 and included work by Monet, Auguste Renoir, Edgar Degas and Paul Cezanne. The work shown was greeted with derision, with Monet's Impression, Sunrise particularly singled out for ridicule and giving its name (used by critics as an insult) to the movement. Seven further exhibitions were then held at intervals until 1886. River of dreams In this article, art historian John House and filmmaker Patrick Keiller talk about how London's light has had an impact on the depiction of its river, referencing the work of Monet, Whistler and Turner. Impressionism at Tate - Take a look at impressionism in Tate's collection - Or browse the selection of works in the slideshow below Many of the core impressionist artists have had exhibitions at Tate. These online exhibition guides provide an introduction to their work. - Paul Cézanne: an Exhibition of Watercolours (11 April 1946 – 12 May 1946) - Paintings by Cézanne (29 September 1954 – 27 October 1954) - Degas (20 September 1952 – 19 October 1952) - Degas, Sickert and Toulouse-Lautrec (5 October 2005 – 15 January 2006) - Claude Monet (26 September 1957 – 3 November 1957) - Oil Paintings by Camille Pissarro (27 June 1931 – 3 October 1931) - Renoir (25 September 1953 – 25 October 1953) Monet in focus When Monet's paintings first appeared they must have looked absolutely astonishing…those lurid artificial colours must have seemed as though they had come from out of space or something Beauty, power and space Jeremy Lewison, the curator of Turner Monet Twombly, explores the influence of Turner on Monet's work. Monet had admired Turner's paintings in 1871 when he was in London with Camille Pissarro. Curator Jeremy Lewison explains why Monet created a lily pond in his back garden and the sadness behind Monet's iconic impressionist pieces. Impressionism in context Impressionism reached prominence during the 1870s and 1880s. Watch curator Allison Smith discuss what else was happening at the time in the art world. Impressionism in Britain It's the picture that made impressionism accepted in England. Caroline Corbeau-Parsons, Assistant Curator, British Art, 1850–1915 Impressionism for kids These blog posts, games and activities are a fun and simple way to introduce impressionism to kids, whether in the classroom or at home. What is impressionism? Who were the key impressionist artists and why was the weather important to them? This Tate Kids blog post answers these important questions. Who is Paul Gauguin? …and why did his travels influence his work? 
This piece explains all. Who is Edgar Degas? Did you know Degas was supposed to be a lawyer? This blog post looks at who Degas was and his famous artworks. Play and create They will love bringing Degas’ Little Dancer to life with this extraordinary game. Inspire kids to use brushstrokes and markers like the impressionists, with this airbrush game.
http://www.tate.org.uk/learn/online-resources/glossary/i/impressionism
4.03125
August 7, 2012 Carnegie Airborne Observatory Helps Manage Elephants April Flowers for redOrbit.com - Your Universe Online Scientists have debated for years how big a role elephants play in toppling trees in South African savannas. Now, using some very high-tech airborne equipment, they finally have an answer. Tree loss is a natural process, but in some regions it is increasing beyond what could naturally be expected. This extreme tree loss has cascading effects on the habitats of many species. Studying savannas across Kruger National Park, Carnegie scientists have quantitatively determined tree losses for the first time. The team found that elephants, as previously thought, are the primary agents of tree loss. Their browsing habits knock trees over at a rate averaging 6 times higher than in areas that are inaccessible to elephants. The study, published in Ecology Letters, found that elephants prefer trees in the 16 to 30 foot height range, with annual losses of up to 20% at that size. The findings of this study will bolster our understanding of elephant and savanna conservation needs. "Previous field studies gave us important clues that elephants are a key driver of tree losses, but our airborne 3-D mapping approach was the only way to fully understand the impacts of elephants across a wide range of environmental conditions found in savannas," commented lead author Greg Asner of Carnegie's Department of Global Ecology. "Our maps show that elephants clearly toppled medium-sized trees, creating an 'elephant trap' for the vegetation. These elephant-driven tree losses have a ripple effect across the ecosystem, including how much carbon is sequestered from the atmosphere." Previously, researchers used aerial photography and field-based approaches to quantify tree loss and the impact of elephant browsing. This team used Light Detection and Ranging (LiDAR), mounted on the fixed-wing aircraft of the Carnegie Airborne Observatory (CAO). The LiDAR provides detailed 3-D images of the vegetation canopy at tree-level resolution using laser pulses that sweep across the African savanna. Able to detect even small changes in individual tree height, the CAO offers coverage far superior to that of previous methods. Working in four study landscapes within Kruger National Park, including very large areas fenced off to prevent herbivore entry, the scientists considered an array of environmental variables. There are six such enclosures, four of which keep out all herbivores larger than a rabbit, and two which allow herbivores smaller than elephants. The team identified and monitored 58,000 individual trees from the air across this landscape in 2008 and again in 2010. They found that nearly 9% of the trees decreased in height in two years. They also mapped treefall changes and linked them to different climate and terrain conditions. Most of the tree loss occurred in lowland areas with higher moisture and on soils high in nutrients that harbor trees preferred by elephants for browsing. Comparison with the herbivore-free enclosures definitively identified elephants, as opposed to other herbivores or fire, as the major agent in tree losses over the two-year study period. "These spatially explicit patterns of treefall highlight the challenges faced by conservation area managers in Africa, who must know where and how their decisions impact ecosystem health and biodiversity. 
They should rely on rigorous science to evaluate alternative scenarios and management options, and the CAO helps provide the necessary quantification," commented co-author Shaun Levick. Danie Pienaar, head of scientific services of the South African National Parks remarked, "This collaboration between external scientists and conservation managers has led to exciting and ground-breaking new insights to long-standing questions and challenges. Knowing where increasing elephant impacts occur in sensitive landscapes allows park managers to take appropriate and focused action. These questions have been difficult to assess with conventional ground-based field approaches over large scales such as those in Kruger National Park."
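Once per-tree canopy heights are available from the two LiDAR acquisitions, the core of the change detection described above is simple bookkeeping: compare heights, flag trees that dropped, and summarise losses by height class. The sketch below shows that bookkeeping on invented numbers; it is not CAO code or data, and the 5–9 m band is only a rough stand-in for the 16-to-30-foot class highlighted in the study.

```python
# Minimal sketch of the change-detection bookkeeping described in the article:
# compare per-tree canopy heights from two acquisitions and summarise losses.
# The input heights are made-up values, not Carnegie Airborne Observatory data.

def summarise_tree_loss(h_2008, h_2010, min_drop_m=1.0):
    """Return the fraction of trees whose height dropped by at least min_drop_m."""
    assert len(h_2008) == len(h_2010)
    lost = sum(1 for a, b in zip(h_2008, h_2010) if (a - b) >= min_drop_m)
    return lost / len(h_2008)

def loss_by_height_class(h_2008, h_2010, lo_m=5.0, hi_m=9.0, min_drop_m=1.0):
    """Fraction lost among trees that started in the [lo_m, hi_m) height band
    (roughly the 16-to-30-foot band highlighted in the study)."""
    band = [(a, b) for a, b in zip(h_2008, h_2010) if lo_m <= a < hi_m]
    if not band:
        return 0.0
    lost = sum(1 for a, b in band if (a - b) >= min_drop_m)
    return lost / len(band)

if __name__ == "__main__":
    heights_2008 = [4.2, 6.5, 7.8, 10.1, 5.5, 8.9, 12.3, 6.1]  # metres, invented
    heights_2010 = [4.2, 5.0, 7.9, 10.0, 3.9, 8.8, 12.4, 6.2]  # metres, invented
    print(f"Overall fraction lost: {summarise_tree_loss(heights_2008, heights_2010):.2%}")
    print(f"Fraction lost, 5-9 m class: {loss_by_height_class(heights_2008, heights_2010):.2%}")
```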
http://www.redorbit.com/news/science/1112670642/carnegie-airborne-observatory-elephants-080712/
4.1875
In the classical world, large-scale, freestanding statues were among the most highly valued and thoughtfully positioned works of art. Sculpted in the round, and commonly made of bronze or stone, statues embodied human, divine, and mythological beings, as well as animals. Our understanding of where and how they were displayed relies on references in ancient texts and inscriptions, and images on coins, reliefs, vases (08.258.25; 06.1021.230), and wall paintings (03.14.13), as well as on the archaeological remains of monuments and sites. Even in the most carefully excavated and well-preserved locations, bases usually survive without their corresponding statues; dispersed fragments of heads and bodies provide little indication of the visual spectacle of which they once formed a part. Numerous statues exhibited at the Metropolitan Museum, for example, have known histories of display in old European collections, but their ancient contexts can only be conjectured (03.12.13; 03.12.14). Among the earliest Greek statues were images of divinities housed in temples, settings well suited to communicate their religious potency. The Greeks situated these standing or seated figures, which often wore real clothing and held objects associated with their unique powers, on axis with temple entrances for maximum visual impact. By the mid-seventh century B.C., rigidly upright statues in stone, referred to as kouroi (youths) and korai (maidens), marked gravesites and were dedicated to the gods as votive offerings in sanctuaries (32.11.1). Greek sanctuaries were sacred, bounded areas, typically encompassing an altar and one or more temples. Evidence for a range of display contexts for statues is more extensive for the Classical period. Public spaces ornamented with statues included open places of assembly like the Athenian agora (06.311; 26.60.1), temples, altars, gateways, and cemeteries (44.11.2,3). Statues of athletic contest winners were often erected at large sanctuaries such as Olympia and Delphi, or sometimes in the victors’ hometowns (25.78.56). Throughout the Archaic and Classical periods, however, the focus of statue production and display remained the representation and veneration of the gods. From the sixth to fifth centuries B.C., hundreds of statues were erected in honor of Athena on the Athenian Akropolis. Whether in a shrine or temple, or in a public space less overtly religious, such as the Athenian agora, statues of deities were reminders of the influence and special protection of the gods, which permeated all aspects of Greek life. By the end of the fifth century B.C., a few wealthy patrons began to exhibit panel paintings, murals, wall hangings, and mosaics in their houses. Ancient authors are vocal in their condemnation of the private ownership and display of such art objects as inappropriate luxuries (for example, Plato’s Republic, 372 D–373 A), yet pass no judgment on the statuettes these patrons also exhibited at home. The difference in attitude suggests that even in private space, statues retained religious significance. The ideas and conventions governing the exhibition of statues were as reliant on political affairs as they were on religion. Thus the changes in state formation set in motion by Alexander the Great’s conquest of the eastern Mediterranean brought about new and important developments in statue display. 
During the Hellenistic period, portrait statuary provided a means of communicating across great distances both the concept of government by a single ruler and the particular identities of Hellenistic dynasts. These portraits, which blend together traditional, idealized features with particularized details that promote individual recognition, were a prominent feature of sanctuaries dedicated to ruler cults (2002.66). Elaborate victory monuments showcased statues of both triumphant (2003.407.7) and defeated warriors. Well-preserved examples of such monuments have been discovered at Pergamon, in the northwestern region of modern-day Turkey. For the first time, nonidealized human forms, including the elderly and infirm, became popular subjects for large-scale sculptures. Statues of this kind were offered as votives in temples and sanctuaries (09.39). The extensive collections housed within the palaces of Hellenistic dynasts became influential models for generals and politicians in far-away Rome, who coveted such displays of power, prestige, and cultural sophistication. During the early Roman Republic, the principal types of statue display were divinities enshrined in temples and other images of gods taken as spoils of war from the neighboring communities that Rome fought in battle. The latter were exhibited in public spaces alongside commemorative portraits. Roman portraiture yielded two major sculptural innovations: “verism” and the portrait bust. Both probably had their origins in the funerary practices of the Roman nobility, who displayed death masks of their ancestors at home in their atria and paraded them through the city on holidays. Initially, only elected officials and former elected officials were allowed the honor of having their portrait statues occupy public spaces. As is clear from many of the inscriptions accompanying portrait statues, which assert that they should be erected in prominent places, location was of crucial importance. Civic buildings such as council houses and public libraries were enviable locations for display. Statues of the most esteemed individuals were on view by the rostra or speaker’s platform in the Roman Forum. In addition to contemporary statesmen, the subjects of Roman portraiture also included great men of the past, philosophers and writers, and mythological figures associated with particular sites. Beginning in the third century B.C., victorious Roman generals during the conquest of Magna Graecia (present-day southern Italy and Sicily) and the Greek East brought back with them not just works of art, but also exposure to elaborate Hellenistic architectural environments, which they desired to emulate. If granted a triumph by the senate, generals constructed and consecrated public buildings to commemorate their conquests and house the spoils. By the late Republic, statues adorned basilicas, sanctuaries and shrines, temples, theaters, and baths. As individuals became increasingly enriched through the process of conquest and empire, statues also became an important means of conveying wealth and sophistication in the private sphere: sculptural displays filled the gardens and porticoes of urban houses and country villas (09.39; 1992.11.71). The fantastical vistas depicted in luxurious domestic wall paintings included images of statues as well (03.14.13). After the transition from republic to empire, the opportunity to undertake large-scale public building projects in the city of Rome was all but limited to members of the imperial family. 
Augustus, however, initiated a program of construction that created many more locations for the display of statues. In the Forum of Augustus, the historical heroes of the Republic appeared alongside representations of Augustus (07.286.115) and his human, legendary, and divine ancestors. Augustus publicized his own image to an extent previously unimaginable, through official portrait statues and busts, as well as images on coins. Statues of Augustus and the subsequent emperors were copied and exhibited throughout the empire. Wealthy citizens incorporated features of imperial portraiture into statues of themselves (14.130.1). Roman governors were honored by portrait statues in provincial cities and sanctuaries. The most numerous and finely crafted portraits that survive from the imperial period, however, portray the emperors and their families (26.229). Summoning up the image of a “forest” of statues or a second “population” within the city, the sheer number of statues on display in imperial Rome dwarfs anything seen before or since. Many of the types of statues used in Roman decoration are familiar from the Greek and Hellenistic past: these include portraits of Hellenistic kings and Greek intellectuals, as well as so-called ideal or idealizing figures that represent divinities, mythological figures, heroes, and athletes. The relationship of such statues to Greek models varies from work to work. A number of those displayed in prestigious locations in Rome were transplanted Greek masterpieces, such as the Venus sculpted by Praxiteles in the fourth century B.C. for the inhabitants of the Greek island of Cos, which was set up in Rome’s Temple of Peace, a museumlike structure set aside for the display of art. More often, however, the relationship to an original is one of either close copying or eclectic and inventive adaptation. Some of these copies and adaptations were genuine imports, but many others were made locally by foreign, mainly Greek, craftsmen. A means of display highly characteristic of the Roman empire was the arrangement of statues in tiers of niches adorning public buildings, including baths (03.12.13; 03.12.14), theaters, and amphitheaters. Several of the most impressive surviving statuary displays come from ornamental facades constructed in the Eastern provinces (Library of Celsus, Ephesus, Turkey). Bolstered by wealth drawn from around the Mediterranean, the imperial families established their own palace culture that was later emulated by kings and emperors throughout Europe. Exemplary of the lavish sculptural displays that decorated imperial residences is the statuary spectacle inside a cave employed as a summer dining room of the palace of Tiberius at Sperlonga, on the southern coast of Italy. Visitors to this cave were confronted with a panoramic view of groups of full-scale statues reenacting episodes from Odysseus’ legendary travels. Few statues from antiquity have survived both in situ and intact, but the evidence suggests an ever-changing and expanding range of contexts for their display. The exhibition of statues in the Greek and Roman Galleries of the Metropolitan Museum allows them to be seen in close proximity to one another and exploits the capacity of natural light to reveal varying aspects of their beauty over the course of each day. Nichols, Marden. “Contexts for the Display of Statues in Classical Antiquity.” In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. 
http://www.metmuseum.org/toah/hd/disp/hd_disp.htm (April 2010) Harward, Vernon J. "Greek Domestic Sculpture and the Origins of Private Art Patronage." Ph.D. diss., Harvard University, 1982. Hemingway, Seán A. The Horse and Jockey from Artemision. Berkeley: University of California Press, 2004. Kleiner, Diana E. E. Roman Sculpture. New Haven: Yale University Press, 1992. Richter, Gisela M. A. Catalogue of Greek Sculptures in the Metropolitan Museum of Art. Cambridge, Mass.: Harvard University Press, 1954. Ridgway, Brunilde S. "The Setting of Greek Sculpture." Hesperia 40, no. 3 (1971), pp. 336–56. Stewart, Peter. Statues in Roman Society: Representation and Response. Oxford: Oxford University Press, 2003.
http://metmuseum.org/toah/hd/disp/hd_disp.htm
4
This web page contains three interactive tutorials for secondary learners on the common chemicals and molecular compounds found in everyday life. The first tutorial, House and Garden, is appropriate for Grades 4-6. The second, Do You Know Your Molecules, is an interactive problem set for Grades 6-9. The last tutorial, Symmetry and Point Groups, is targeted to high school chemistry and preparatory chemistry courses. Reciprocal Net is a database of information about molecular structures. The project involves research scientists from a number of universities who collaborate to provide educators, students, and the general public with learning tools related to crystallography, chemistry, and biochemistry. Keywords: biochemicals, chemical symmetry, chemistry tutorial, compounds, crystallography, molecular structure, molecular structure tutorial, molecule, point groups. Metadata instance created August 19, 2011 by Caroline Hall. Last Update when Cataloged: December 12, 2009. AAAS Benchmark Alignments (2008 Version) 4. The Physical Setting 4D. The Structure of Matter 3-5: 4D/E4a. When a new material is made by combining two or more materials, it has properties that are different from the original materials. 3-5: 4D/E6. All materials have certain physical properties, such as strength, hardness, flexibility, durability, resistance to water and fire, and ease of conducting heat. 6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope. 6-8: 4D/M1cd. Atoms may link together in well-defined molecules, or may be packed together in crystal patterns. Different arrangements of atoms into groups compose all substances and determine the characteristic properties of substances. 6-8: 4D/M5. Chemical elements are those substances that do not break down during normal laboratory reactions involving such treatments as heating, exposure to electric current, or reaction with acids. All substances from living and nonliving things can be broken down to a set of about 100 elements, but since most elements tend to combine with others, few elements are found in their pure form. 6-8: 4D/M6c. Carbon and hydrogen are common elements of living matter. 6-8: 4D/M10. A substance has characteristic properties such as density, a boiling point, and solubility, all of which are independent of the amount of the substance and can be used to identify it. 6-8: 4D/M11. Substances react chemically in characteristic ways with other substances to form new substances with different characteristic properties. 9-12: 4D/H7b. An enormous variety of biological, chemical, and physical phenomena can be explained by changes in the arrangement and motion of atoms and molecules. 9-12: 4D/H8. The configuration of atoms in a molecule determines the molecule's properties. Shapes are particularly important in how large molecules interact with others. Citation: Huffman, John. Reciprocal Net: Crystals and Chemicals in Everyday Life. December 12, 2009. Retrieved 8 February 2016 from http://www.reciprocalnet.org/edumodules/chemistry/index.html
http://www.thephysicsfront.org/items/detail.cfm?ID=11407
4.25
Usually, when we talk about force, there is more than one force involved, and these forces are applied in different directions. Let's look at a diagram of a car. When the car is sitting still, gravity exerts a downward force on the car (this force acts everywhere on the car, but for simplicity, we can draw the force at the car's center of mass). But the ground exerts an equal and opposite upward force on the tires, so the car does not move. Figure 1. Animation of forces on a car When the car begins to accelerate, some new forces come into play. The rear wheels exert a force against the ground in a horizontal direction; this makes the car start to accelerate. When the car is moving slowly, almost all of the force goes into accelerating the car. The car resists this acceleration with a force that is equal to its mass multiplied by its acceleration. You can see in Figure 1 how the force arrow starts out large because the car accelerates rapidly at first. As it starts to move, the air exerts a force against the car, which grows larger as the car gains speed. This aerodynamic drag force acts in the opposite direction of the force of the tires, which is propelling the car, so it subtracts from that force, leaving less force available for acceleration. Eventually, the car will reach its top speed, the point at which it cannot accelerate any more. At this point, the driving force is equal to the aerodynamic drag, and no force is left over to accelerate the car.
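The balance described above is easy to see numerically: a constant driving force, a drag force growing with the square of speed, and acceleration equal to the net force divided by mass, falling to zero at top speed. The sketch below integrates that balance with assumed values for mass, driving force and drag; none of the numbers come from the article, and rolling resistance is ignored, so the top speed comes out optimistic.

```python
# Minimal sketch of the force balance described above: constant driving force,
# aerodynamic drag growing with the square of speed, acceleration = net force / mass.
# All parameter values are assumptions for illustration only (rolling resistance ignored).

MASS = 1500.0        # kg, assumed car mass
F_DRIVE = 4000.0     # N, assumed constant driving force at the tires
RHO_AIR = 1.2        # kg/m^3, air density
CD_A = 0.7           # m^2, assumed drag coefficient times frontal area

def simulate(dt=0.1, t_max=120.0):
    """Integrate dv/dt = (F_drive - 0.5*rho*CdA*v^2) / m with a simple Euler step."""
    v, t = 0.0, 0.0
    while t < t_max:
        drag = 0.5 * RHO_AIR * CD_A * v * v
        accel = (F_DRIVE - drag) / MASS
        v += accel * dt
        t += dt
    return v

if __name__ == "__main__":
    # Top speed is where driving force equals drag: v_top = sqrt(2*F / (rho*CdA))
    v_top = (2 * F_DRIVE / (RHO_AIR * CD_A)) ** 0.5
    print(f"Analytic top speed: {v_top:.1f} m/s")
    print(f"Speed after 120 s of simulation: {simulate():.1f} m/s")
```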
http://auto.howstuffworks.com/auto-parts/towing/towing-capacity/information/fpte3.htm
4.34375
A gravitational lens refers to a distribution of matter (such as a cluster of galaxies) between a distant source and an observer that is capable of bending the light from the source as it travels towards the observer. This effect is known as gravitational lensing and the amount of bending is one of the predictions of Albert Einstein's general theory of relativity. (Classical physics also predicts bending of light, but only half that of general relativity.) Although Orest Khvolson (1924) or František Link (1936) are sometimes credited as being the first ones to discuss the effect in print, the effect is more commonly associated with Einstein, who published a more famous article on the subject in 1936. Fritz Zwicky posited in 1937 that the effect could allow galaxy clusters to act as gravitational lenses. It was not until 1979 that this effect was confirmed by observation of the so-called "Twin QSO" SBS 0957+561. Unlike an optical lens, maximum 'bending' occurs closest to, and minimum 'bending' furthest from, the center of a gravitational lens. Consequently, a gravitational lens has no single focal point, but a focal line instead. If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object. If there is any misalignment the observer will see an arc segment instead. This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Chwolson, and quantified by Albert Einstein in 1936. It is usually referred to in the literature as an Einstein ring, since Chwolson did not concern himself with the flux or radius of the ring image. More commonly, where the lensing mass is complex (such as a galaxy group or cluster) and does not cause a spherical distortion of space–time, the source will resemble partial arcs scattered around the lens. The observer may then see multiple distorted images of the same source; the number and shape of these depending upon the relative positions of the source, lens, and observer, and the shape of the gravitational well of the lensing object. There are three classes of gravitational lensing: 1. Strong lensing: where there are easily visible distortions such as the formation of Einstein rings, arcs, and multiple images. 2. Weak lensing: where the distortions of background sources are much smaller and can only be detected by analyzing large numbers of sources to find coherent distortions of only a few percent. The lensing shows up statistically as a preferred stretching of the background objects perpendicular to the direction to the center of the lens. By measuring the shapes and orientations of large numbers of distant galaxies, their orientations can be averaged to measure the shear of the lensing field in any region. This, in turn, can be used to reconstruct the mass distribution in the area: in particular, the background distribution of dark matter can be reconstructed. Since galaxies are intrinsically elliptical and the weak gravitational lensing signal is small, a very large number of galaxies must be used in these surveys. These weak lensing surveys must carefully avoid a number of important sources of systematic error: the intrinsic shape of galaxies, the tendency of a camera's point spread function to distort the shape of a galaxy and the tendency of atmospheric seeing to distort images must be understood and carefully accounted for. 
The results of these surveys are important for cosmological parameter estimation, to better understand and improve upon the Lambda-CDM model, and to provide a consistency check on other cosmological observations. They may also provide an important future constraint on dark energy. 3. Microlensing: where no distortion in shape can be seen but the amount of light received from a background object changes in time. The lensing object may be stars in the Milky Way in one typical case, with the background source being stars in a remote galaxy, or, in another case, an even more distant quasar. The effect is small, such that (in the case of strong lensing) even a galaxy with a mass more than 100 billion times that of the Sun will produce multiple images separated by only a few arcseconds. Galaxy clusters can produce separations of several arcminutes. In both cases the galaxies and sources are quite distant, many hundreds of megaparsecs away from our Galaxy. Gravitational lenses act equally on all kinds of electromagnetic radiation, not just visible light. Weak lensing effects are being studied for the cosmic microwave background as well as galaxy surveys. Strong lenses have been observed in radio and x-ray regimes as well. If a strong lens produces multiple images, there will be a relative time delay between two paths: that is, in one image the lensed object will be observed before the other image. Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object as had already been supposed by Isaac Newton in 1704 in his famous Queries No.1 in his book Opticks. The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. However, Einstein noted in 1915 in the process of completing general relativity, that his (and thus Soldner's) 1911-result is only half of the correct value. Einstein became the first to calculate the correct value for light bending. The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The observations were performed in May 1919 by Arthur Eddington and his collaborators during a total solar eclipse, so that the stars near the Sun could be observed. Observations were made simultaneously in the cities of Sobral, Ceará, Brazil and in São Tomé and Príncipe on the west coast of Africa. The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein famously made the quip: "Then I would feel sorry for the dear Lord. The theory is correct anyway." Spacetime around a massive object (such as a galaxy cluster or a black hole) is curved, and as a result light rays from a background source (such as a galaxy) propagating through spacetime are bent. The lensing effect can magnify and distort the image of the background source. According to general relativity, mass "warps" space–time to create gravitational fields and therefore bend light as a result. This theory was confirmed in 1919 during a solar eclipse, when Arthur Eddington and Frank Watson Dyson observed the light from stars passing close to the Sun was slightly bent, so that stars appeared slightly out of position. 
Einstein realized that it was also possible for astronomical objects to bend light, and that under the correct conditions, one would observe multiple images of a single source, called a gravitational lens or sometimes a gravitational mirage. However, as he only considered gravitational lensing by single stars, he concluded that the phenomenon would most likely remain unobserved for the foreseeable future. In 1937, Fritz Zwicky first considered the case where a galaxy (which he called a 'nebula' at that time) could act as a lens, something that according to his calculations should be well within the reach of observations. It was not until 1979 that the first gravitational lens would be discovered. It became known as the "Twin QSO" since it initially looked like two identical quasistellar objects; it is officially named SBS 0957+561. This gravitational lens was discovered by Dennis Walsh, Bob Carswell, and Ray Weymann using the Kitt Peak National Observatory 2.1 meter telescope. In the 1980s, astronomers realized that the combination of CCD imagers and computers would allow the brightness of millions of stars to be measured each night. In a dense field, such as the galactic center or the Magellanic clouds, many microlensing events per year could potentially be found. This led to efforts such as the Optical Gravitational Lensing Experiment, or OGLE, that have characterized hundreds of such events. Explanation in terms of space–time curvature In general relativity, light follows the curvature of spacetime, hence when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. Since light always moves at a constant speed, lensing changes the direction of the velocity of the light, but not the magnitude. Light rays are the boundary between the future, the spacelike, and the past regions. The gravitational attraction can be viewed as the motion of undisturbed objects in a background curved geometry or alternatively as the response of objects to a force in a flat geometry. The angle of deflection is θ = 4GM/(c²r) toward the mass M, where r is the distance of closest approach of the affected radiation to the mass, G is the universal constant of gravitation and c is the speed of light in a vacuum. Since the Schwarzschild radius is defined as r_s = 2GM/c², this can also be expressed in the simple form θ = 2r_s/r. Search for gravitational lenses Most of the gravitational lenses in the past have been discovered accidentally. A search for gravitational lenses in the northern hemisphere (Cosmic Lens All Sky Survey, CLASS), done in radio frequencies using the Very Large Array (VLA) in New Mexico, led to the discovery of 22 new lensing systems, a major milestone. This has opened a whole new avenue for research ranging from finding very distant objects to finding values for cosmological parameters so we can understand the universe better. A similar search in the southern hemisphere would be a very good step towards complementing the northern hemisphere search as well as obtaining other objectives for study. If such a search is done using a well-calibrated and well-parameterized instrument and data, a result similar to the northern survey can be expected. The use of the Australia Telescope 20 GHz (AT20G) Survey data collected using the Australia Telescope Compact Array (ATCA) stands to be such a collection of data. As the data were collected using the same instrument and maintained to a very stringent quality, we should expect to obtain good results from the search.
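Before continuing with the survey discussion, here is a quick numerical check of the deflection formula given above: evaluating θ = 4GM/(c²r) at the limb of the Sun should come out close to the 1.75 arcseconds measured during the 1919 eclipse expeditions. The physical constants below are standard textbook values supplied as assumptions, not numbers quoted in the article.

```python
import math

# Evaluate the general-relativistic deflection angle theta = 4GM/(c^2 r)
# at the solar limb. Constants are standard textbook values (assumed here).
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
C = 2.998e8          # m/s, speed of light
M_SUN = 1.989e30     # kg, solar mass
R_SUN = 6.957e8      # m, solar radius

theta_rad = 4 * G * M_SUN / (C**2 * R_SUN)
theta_arcsec = math.degrees(theta_rad) * 3600

# The Schwarzschild-radius form theta = 2 r_s / r gives the same number.
r_s = 2 * G * M_SUN / C**2
assert abs(theta_rad - 2 * r_s / R_SUN) < 1e-12

print(f"Deflection at the solar limb: {theta_arcsec:.2f} arcseconds")  # ~1.75"
```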
The AT20G survey is a blind survey at 20 GHz frequency in the radio domain of the electromagnetic spectrum. Due to the high frequency used, the chances of finding gravitational lenses increase, as the relative number of compact core objects (e.g. quasars) is higher (Sadler et al. 2006). This is important, as lensing is easier to detect and identify in simple objects compared to objects with complexity in them. This search involves the use of interferometric methods to identify candidates and follow them up at higher resolution for confirmation. Full details of the project are currently being prepared for publication. A 2009 article on Science Daily reported that a team of scientists led by a cosmologist from the U.S. Department of Energy's Lawrence Berkeley National Laboratory had made major progress in extending the use of gravitational lensing to the study of much older and smaller structures than was previously possible, showing that weak gravitational lensing improves measurements of distant galaxies. Astronomers from the Max Planck Institute for Astronomy in Heidelberg, Germany, using NASA's Hubble Space Telescope, discovered what at the time was the most distant gravitational lens galaxy, termed J1000+0221; the results were accepted for publication in the Astrophysical Journal Letters (arXiv.org) on October 21, 2013. While it remains the most distant quad-image lensing galaxy known, an even more distant two-image lensing galaxy was subsequently discovered by an international team of astronomers using a combination of Hubble Space Telescope and Keck telescope imaging and spectroscopy. The discovery and analysis of the IRC 0218 lens was published in the Astrophysical Journal Letters on June 23, 2014. Research published on September 30, 2013 in the online edition of Physical Review Letters, led by McGill University in Montreal, Québec, Canada, reported the detection of B-modes that are formed due to the gravitational lensing effect, using the National Science Foundation's South Pole Telescope with help from the Herschel space observatory. This discovery opens up the possibility of testing theories of how our universe originated. Solar gravitational lens Albert Einstein predicted in 1936 that rays of light from the same direction that skirt the edges of the Sun would converge to a focal point approximately 542 AU from the Sun. Thus, the Sun could act as a gravitational lens for magnifying distant objects in a way that provides some flexibility in aiming, unlike the coincidence-based lens usage of more distant objects, such as intermediate galaxies. A probe's location could shift around as needed to select different targets relative to the Sun (acting as a lens). This distance is far beyond the progress and equipment capabilities of space probes such as Voyager 1, and beyond the known planets and dwarf planets, though over thousands of years 90377 Sedna will move further away on its highly elliptical orbit. The high gain for potentially detecting signals through this lens, such as microwaves at the 21-cm hydrogen line, led to the suggestion by Frank Drake in the early days of SETI that a probe could be sent to this distance. A multipurpose probe, SETISAIL and later FOCAL, was proposed to ESA in 1993, but the mission is expected to be a difficult task. If a probe does pass 542 AU, the gain and image-forming capabilities of the lens will continue to improve at further distances, as the rays that come to a focus at these distances pass further away from the distortions of the Sun's corona.
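The roughly 542 AU figure quoted above follows from the same deflection formula: a ray grazing the Sun at radius r is bent by θ = 4GM/(c²r), so it crosses the optical axis at a distance of about r/θ = c²r²/(4GM). The sketch below reproduces that order-of-magnitude estimate; the constants are standard values assumed for illustration, and the exact result depends slightly on the values chosen.

```python
# Estimate the minimum focal distance of the solar gravitational lens:
# a ray grazing the Sun at radius r is deflected by theta = 4GM/(c^2 r),
# so it crosses the axis at d ~ r / theta = c^2 r^2 / (4 G M).
# Constants are standard textbook values (assumed), not taken from the article.
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m
AU = 1.496e11        # m

d_focus_m = C**2 * R_SUN**2 / (4 * G * M_SUN)
print(f"Minimum focal distance: {d_focus_m / AU:.0f} AU")  # roughly 540-550 AU
```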
Measuring weak lensing Kaiser et al. (1995), Luppino & Kaiser (1997) and Hoekstra et al. (1998) prescribed a method to invert the effects of the Point Spread Function (PSF) smearing and shearing, recovering a shear estimator uncontaminated by the systematic distortion of the PSF. This method (KSB+) is the most widely used in current weak lensing shear measurements. Galaxies have random rotations and inclinations. As a result, the shear effects in weak lensing need to be determined from statistically preferred orientations. The primary source of error in lensing measurement is due to the convolution of the PSF with the lensed image. The KSB method measures the ellipticity of a galaxy image. The shear is proportional to the ellipticity. The objects in lensed images are parameterized according to their weighted quadrupole moments. For a perfect ellipse, the weighted quadrupole moments are related to the weighted ellipticity; a short numerical sketch of this moment-based approach is given after the reference material below. KSB calculates how a weighted ellipticity measure is related to the shear and uses the same formalism to remove the effects of the PSF. KSB's primary advantages are its mathematical ease and relatively simple implementation. However, KSB is based on a key assumption that the PSF is circular with an anisotropic distortion. This assumption is adequate for current cosmic shear surveys, but the next generation of surveys (e.g. LSST) may need much better accuracy than KSB can provide, because for those surveys the statistical errors in the data will be negligible and systematic errors will dominate. Historical papers and references - Chwolson, O (1924). "Über eine mögliche Form fiktiver Doppelsterne". Astronomische Nachrichten 221 (20): 329–330. Bibcode:1924AN....221..329C. doi:10.1002/asna.19242212003. - Einstein, Albert (1936). "Lens-like Action of a Star by the Deviation of Light in the Gravitational Field". Science 84 (2188): 506–7. Bibcode:1936Sci....84..506E. doi:10.1126/science.84.2188.506. JSTOR 1663250. PMID 17769014. - Renn, Jürgen; Tilman Sauer; John Stachel (1997). "The Origin of Gravitational Lensing: A Postscript to Einstein's 1936 Science paper". Science 275 (5297): 184–6. Bibcode:1997Sci...275..184R. doi:10.1126/science.275.5297.184. PMID 8985006. - Drakeford, Jason; Corum, Jonathan; Overbye, Dennis (March 5, 2015). "Einstein's Telescope - video (02:32)". New York Times. Retrieved December 27, 2015. - Overbye, Dennis (March 5, 2015). "Astronomers Observe Supernova and Find They're Watching Reruns". New York Times. Retrieved March 5, 2015. - Cf. Kennefick 2005 for the classic early measurements by the Eddington expeditions; for an overview of more recent measurements, see Ohanian & Ruffini 1994, ch. 4.3. For the most precise direct modern observations using quasars, cf. Shapiro et al. 2004 - Gravity Lens – Part 2 (Great Moments in Science, ABS Science) - Dieter Brill, "Black Hole Horizons and How They Begin", Astronomical Review (2012); Online Article, cited Sept.2012. - Melia, Fulvio (2007). The Galactic Supermassive Black Hole. Princeton University Press. pp. 255–256. ISBN 0-691-13129-5. - Soldner, J. G. V. (1804). "On the deflection of a light ray from its rectilinear motion, by the attraction of a celestial body at which it nearly passes by". Berliner Astronomisches Jahrbuch: 161–172. - Newton, Isaac (1998). Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light. Also two treatises of the species and magnitude of curvilinear figures. Commentary by Nicholas Humez (Octavo ed.). Palo Alto, Calif.: Octavo. ISBN 1-891788-04-3. 
(Opticks was originally published in 1704). - Will, C.M. (2006). "The Confrontation between General Relativity and Experiment". Living Rev. Relativity 9: 39. arXiv:gr-qc/0510072. Bibcode:2006LRR.....9....3W. doi:10.12942/lrr-2006-3. - Dyson, F. W.; Eddington, A. S.; Davidson C. (1920). "A determination of the deflection of light by the Sun's gravitational field, from observations made at the total eclipse of 29 May 1919". Philosophical Transactions of the Royal Society 220A: 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009. - Stanley, Matthew (2003). "'An Expedition to Heal the Wounds of War': The 1919 Eclipse and Eddington as Quaker Adventurer". Isis 94 (1): 57–89. doi:10.1086/376099. PMID 12725104. - Rosenthal-Schneider, Ilse: Reality and Scientific Truth. Detroit: Wayne State University Press, 1980. p 74. (See also Calaprice, Alice: The New Quotable Einstein. Princeton: Princeton University Press, 2005. p 227.) - Dyson, F. W.; Eddington, A. S.; Davidson, C. (1 January 1920). "A Determination of the Deflection of Light by the Sun's Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 220 (571-581): 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009. - F. Zwicky (1937). "Nebulae as Gravitational lenses" (PDF). Physical Review 51 (4): 290. doi:10.1103/PhysRev.51.290. - Walsh, D.; Carswell, R. F.; Weymann, R. J. (31 May 1979). "0957 + 561 A, B: twin quasistellar objects or gravitational lens?". Nature 279 (5712): 381–384. Bibcode:1979Natur.279..381W. doi:10.1038/279381a0. PMID 16068158. - Cosmology: Weak gravitational lensing improves measurements of distant galaxies - Sci-News.com (21 Oct 2013). "Most Distant Gravitational Lens Discovered". Sci-News.com. Retrieved 22 October 2013. - van der Wel, A.; et al. (2013). "Discovery of a Quadruple Lens in CANDELS with a Record Lens Redshift". ApJ Letters 777: L17. arXiv:1309.2826. Bibcode:2013ApJ...777L..17V. doi:10.1088/2041-8205/777/1/L17. - Wong, K.; et al. (2014). "Discovery of a Strong Lensing Galaxy Embedded in a Cluster at z = 1.62". ApJ Letters 789: L31. arXiv:1405.3661. Bibcode:2014ApJ...789L..31W. doi:10.1088/2041-8205/789/2/L31. - NASA/Jet Propulsion Laboratory (October 22, 2013). "Long-sought pattern of ancient light detected". ScienceDaily. Retrieved October 23, 2013. - Hanson, D.; et al. (Sep 30, 2013). "Detection of B-Mode Polarization in the Cosmic Microwave Background with Data from the South Pole Telescope". Physical Review Letters. 14 111. arXiv:1307.5830. Bibcode:2013PhRvL.111n1301H. doi:10.1103/PhysRevLett.111.141301. - Clavin, Whitney; Jenkins, Ann; Villard, Ray (7 January 2014). "NASA's Hubble and Spitzer Team up to Probe Faraway Galaxies". NASA. Retrieved 8 January 2014. - Chou, Felecia; Weaver, Donna (16 October 2014). "RELEASE 14-283 - NASA’s Hubble Finds Extremely Distant Galaxy through Cosmic Magnifying Glass". NASA. Retrieved 17 October 2014. - "Lens-Like Action of a Star by the Deviation of Light in the Gravitational Field". Science 84 (2188): 506–507. 1936. Bibcode:1936Sci....84..506E. doi:10.1126/science.84.2188.506. PMID 17769014. - Claudio Maccone (2009). Deep Space Flight and Communications: Exploiting the Sun as a Gravitational Lens. Springer. - Kaiser, Nick; Squires, Gordon; Broadhurst, Tom (August 1995). "A Method for Weak Lensing Observations". The Astrophysical Journal 449: 460. arXiv:astro-ph/9411005. Bibcode:1995ApJ...449..460K. doi:10.1086/176071. 
- Luppino, G. A.; Kaiser, Nick (20 January 1997). "Detection of Weak Lensing by a Cluster of Galaxies at z = 0.83". The Astrophysical Journal 475 (1): 20–28. arXiv:astro-ph/9601194. Bibcode:1997ApJ...475...20L. doi:10.1086/303508. - Loff, Sarah; Dunbar, Brian (February 10, 2015). "Hubble Sees A Smiling Lens". NASA. Retrieved February 10, 2015. - "Most distant gravitational lens helps weigh galaxies". ESA/Hubble Press Release. Retrieved 18 October 2013. - "ALMA Rewrites History of Universe's Stellar Baby Boom". ESO. Retrieved 2 April 2013. - "Accidental Astrophysicists". Science News, June 13, 2008. - "XFGLenses". A Computer Program to visualize Gravitational Lenses, Francisco Frutos-Alfaro - "G-LenS". A Point Mass Gravitational Lens Simulation, Mark Boughen. - Newbury, Pete, "Gravitational Lensing". Institute of Applied Mathematics, The University of British Columbia. - Cohen, N., "Gravity's Lens: Views of the New Cosmology", Wiley and Sons, 1988. - "Q0957+561 Gravitational Lens". Harvard.edu. - "Gravitational lensing". Gsfc.nasa.gov. - Bridges, Andrew, "Most distant known object in universe discovered". Associated Press. February 15, 2004. (Farthest galaxy found by gravitational lensing, using Abell 2218 and Hubble Space Telescope.) - Analyzing Corporations ... and the Cosmos An unusual career path in gravitational lensing. - "HST images of strong gravitational lenses". Harvard-Smithsonian Center for Astrophysics. - "A planetary microlensing event" and "A Jovian-mass Planet in Microlensing Event OGLE-2005-BLG-071", the first extra-solar planet detections using microlensing. - Gravitational lensing on arxiv.org - NRAO CLASS home page - AT20G survey - A diffraction limit on the gravitational lens effect (Bontz, R. J. and Haugan, M. P. "Astrophysics and Space Science" vol. 78, no. 1, p. 199-210. August 1981) - Further reading - Blandford & Narayan; Narayan, R (1992). "Cosmological applications of gravitational lensing". ARA&A 30 (1): 311–358. Bibcode:1992ARA&A..30..311B. doi:10.1146/annurev.aa.30.090192.001523. - Matthias Bartelmann and Peter Schneider (2000-08-17). "Weak Gravitational Lensing" (PDF). - Khavinson, Dmitry; Neumann, Genevra (June–July 2008). "From Fundamental Theorem of Algebra to Astrophysics: A "Harmonious" Path" (PDF). Notices (AMS) 55 (6): 666–675. - Petters, Arlie O.; Levine, Harold; Wambsganss, Joachim (2001). Singularity Theory and Gravitational Lensing. Progress in Mathematical Physics 21. Birkhäuser. - Tools for the evaluation of the possibilities of using parallax measurements of gravitationally lensed sources (Stein Vidar Hagfors Haugan. June 2008) - Video: Evalyn Gates – Einstein's Telescope: The Search for Dark Matter and Dark Energy in the Universe, presentation in Portland, Oregon, on April 19, 2009, from the author's recent book tour. - Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast: Gravitational Lensing, May 2007
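As a small illustration of the moment-based shape measurement described in the "Measuring weak lensing" section above, the sketch below computes unweighted quadrupole moments of a toy galaxy image and converts them into the two ellipticity components from which shear estimators such as KSB are built. It is a bare-bones sketch under simplifying assumptions — no radial weight function and no PSF correction — and the test image is invented for the example.

```python
import numpy as np

def quadrupole_ellipticity(img):
    """Compute unweighted quadrupole moments Q_ij of a 2-D image and return the
    ellipticity components e1 = (Qxx - Qyy)/(Qxx + Qyy), e2 = 2*Qxy/(Qxx + Qyy).
    This is only the skeleton of moment-based shape measurement; a real KSB
    pipeline adds a radial weight function and corrects for the PSF."""
    y, x = np.indices(img.shape, dtype=float)
    total = img.sum()
    xc, yc = (img * x).sum() / total, (img * y).sum() / total
    dx, dy = x - xc, y - yc
    qxx = (img * dx * dx).sum() / total
    qyy = (img * dy * dy).sum() / total
    qxy = (img * dx * dy).sum() / total
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom

if __name__ == "__main__":
    # Toy elliptical Gaussian, elongated along x (invented test image).
    y, x = np.indices((64, 64), dtype=float)
    img = np.exp(-(((x - 32) / 6.0) ** 2 + ((y - 32) / 3.0) ** 2) / 2.0)
    e1, e2 = quadrupole_ellipticity(img)
    print(f"e1 = {e1:.3f}, e2 = {e2:.3f}")  # e1 > 0: elongated along x; e2 ~ 0
```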
https://en.wikipedia.org/wiki/Gravitational_lensing
4.03125
The Plateau area extended from above the Canadian border through the plateau and mountain area of the Rocky Mts. to the Southwest and included much of California. Typical tribes were the Spokan, the Paiute, the Nez Percé, and the Shoshone. This was an area of great linguistic diversity. Because of the inhospitable environment the cultural development was generally low. The Native Americans in the Central Valley of California and on the California coast, notably the Pomo, were sedentary peoples who gathered edible plants, roots, and fruit and also hunted small game. Their acorn bread, made by pounding acorns into meal and then leaching it with hot water, was distinctive, and they cooked in baskets filled with water and heated by hot stones. Living in brush shelters or more substantial lean-tos, they had partly buried earth lodges for ceremonies and ritual sweat baths. Basketry, coiled and twined, was highly developed. To the north, between the Cascade Range and the Rocky Mts., the social, political, and religious systems were simple, and art was nonexistent. The Native Americans there underwent (c.1730) a great cultural change when they obtained from the Plains Indians the horse, the tepee, a form of the sun dance, and deerskin clothes. They continued, however, to fish for salmon with nets and spears and to gather camas bulbs. They also gathered ants and other insects and hunted small game and, in later times, buffalo. Their permanent winter villages on waterways had semisubterranean lodges with conical roofs; a few Native Americans lived in bark-covered long houses. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
http://www.factmonster.com/encyclopedia/society/natives-north-american-the-plateau-area.html
4.15625
Heliotropism, a form of tropism, is the diurnal motion or seasonal motion of plant parts (flowers or leaves) in response to the direction of the sun. The habit of some plants to move in the direction of the sun was already known to the Ancient Greeks. They named one of those plants after that property: Heliotropium, meaning 'sun turn'. The Greeks assumed it to be a passive effect, presumably the loss of fluid on the illuminated side, that did not need further study. Aristotle's logic that plants are passive and immobile organisms prevailed. In the 19th century, however, botanists discovered that growth processes in the plant were involved, and conducted increasingly ingenious experiments. A. P. de Candolle called this phenomenon in any plant heliotropism (1832). It was renamed phototropism in 1892, because it is a response to light rather than to the sun, and because the phototropism of algae in lab studies at that time strongly depended on the brightness (positive phototropic for weak light, and negative phototropic for bright light, like sunlight). A botanist studying this subject in the lab, at the cellular and subcellular level, or using artificial light, is more likely to employ the more abstract word phototropism. The French scientist Jean-Jacques d'Ortous de Mairan was one of the first to study heliotropism when he experimented with the Mimosa pudica plant. Heliotropic flowers track the sun's motion across the sky from east to west. During the night, the flowers may assume a random orientation, while at dawn they turn again toward the east where the sun rises. The motion is performed by motor cells in a flexible segment just below the flower, called a pulvinus. The motor cells are specialized in pumping potassium ions into nearby tissues, changing their turgor pressure. The segment flexes because the motor cells at the shadow side elongate due to a turgor rise. Heliotropism is a response to light from the sun. Several hypotheses have been proposed for the occurrence of heliotropism in flowers: - The pollinator attraction hypothesis holds that the warmth associated with full insolation of the flower is a direct reward for pollinators. - The growth promotion hypothesis assumes that effective absorption of solar energy and the consequent rise in temperature has a favourable effect on pollen germination, growth of the pollen tube and seed production. - The cooling hypothesis, appropriate to flowers in hot climates, assumes that the position of flowers is adjusted to avoid overheating. Some solar tracking plants are not purely heliotropic: in those plants the change of orientation is an innate circadian motion triggered by light, which continues for one or more periods if the light cycle is interrupted. Tropical convolvulaceous flowers show a preferred orientation, pointing in the general direction of the sun but not exactly tracking the sun. They demonstrate no diurnal heliotropism but strong seasonal heliotropism. If solar tracking were exact, the sun's rays would always enter the corolla tube and warm the gynoecium, a process which could be dangerous in a tropical climate. However, by adopting a certain angle away from the solar angle, this is prevented. The trumpet shape of these flowers thus acts as a parasol shading the gynoecium at times of maximum solar radiation, and not allowing the rays to impinge on the gynoecium. In the case of the sunflower, a common misconception is that sunflower heads track the Sun across the sky. 
The uniform alignment of the flowers does result from heliotropism in an earlier development stage, the bud stage, before the appearance of flower heads. The buds are heliotropic until the end of the bud stage, and finally face east. The flower of the sunflower preserves the final orientation of the bud, thus keeping the mature flower facing east. Leaf heliotropism is the solar tracking behavior of plant leaves. Some plant species have leaves that orient themselves perpendicularly to the sun's rays in the morning (diaheliotropism), and others have those that orient themselves parallel to these rays at midday (paraheliotropism). Floral heliotropism is not necessarily exhibited by the same plants that exhibit leaf heliotropism. - Whippo, Craig W. (2006). "Phototropism: Bending towards Enlightenment". The Plant Cell 18 (5): 1110–1119. doi:10.1105/tpc.105.039669. PMC 1456868. PMID 16670442. Retrieved 2012-08-08. - Hart, J.W. (1990). Plant Tropisms: And other Growth Movements. Springer. p. 36. Retrieved 2012-08-08. - "Phototropism and photomorphogenesis of Vaucheria". - Donat-Peter Häder, Michael Lebert (2001). Photomovement. Elsevier. p. 676. Retrieved 2012-08-08. - Hocking B., Sharplin D. (1965). "Flower basking by arctic insects" (PDF). Nature 206 (4980): 206–215. doi:10.1038/206215b0. - Kevan, P.G. (1975). "Sun-tracking solar furnaces in high arctic flowers: significance for pollination and insects.". Science 189 (4204): 723–726. doi:10.1126/science.189.4204.723. - Lang A.R.G., Begg J.E. (1979). "Movements of Helianthus annuus leaves and heads". J Appl Ecol 16: 299–305. doi:10.2307/2402749. - Kudo, G. (1995). "Ecological Significance of Flower Heliotropism in the Spring Ephemeral Adonis ramosa (Ranunculaceae)". Oikos 72 (1): 14–20. doi:10.2307/3546032. - Patiño, S.; Jeffree, C.; Grace, J. (2002). "The ecological role of orientation in tropical convolvulaceous flowers" (PDF). Oecologia 130: 373–379. doi:10.1007/s00442-001-0824-1. - officially replaced by diaphototropism and paraphototropism - Animation of Heliotropic Leaf Movements in Plants - 24-hour heliotropism of Arctic poppy exposed to midnight sun
https://en.wikipedia.org/wiki/Heliotropism
4.1875
The word levée (from French, noun use of infinitive lever, "rising", from Latin levāre, "to raise") originated in the Levée du Soleil (Rising of the Sun) of King Louis XIV (1643–1715). It was his custom to receive his male subjects in his bedchamber just after arising, a practice that subsequently spread throughout Europe. In the 18th century the levée in Great Britain and Ireland became a formal court reception given by the sovereign or his/her representative in the forenoon or early afternoon. In the New World colonies the levée was held by the governor acting on behalf of the monarch. Only men were received at these events. It was in Canada that the levée became associated with New Year's Day. The fur traders had the tradition of paying their respects to the master of the fort (their government representative) on New Year's Day. This custom was adopted by the governor general and lieutenant governors for their levées. The first recorded levée in Canada was held on January 1, 1646, in the Chateau St. Louis by Charles Huault de Montmagny, Governor of New France from 1636 to 1648. In addition to wishing a happy new year to the citizens the governor informed guests of significant events in France as well as the state of affairs within the colony. In turn, the settlers were expected to renew their pledges of allegiance to the Crown. The levée tradition was continued by British colonial governors in Canada and subsequently by both the governor general and lieutenant governors. It continues to the present day. Over the years the levée has become almost solely a Canadian observance. Today, levées are the receptions (usually, but not necessarily, on New Year's Day) held by the governor general, the lieutenant governors of the provinces, the military and others, to mark the start of another year and to provide an opportunity for the public to pay their respects. Today the levée has evolved from the earlier, more boisterous party into a more sedate and informal one. It is an occasion to call upon representatives of the monarch, military and municipal governments and to exchange New Year's greetings and best wishes for the new year, to renew old acquaintances and to meet new friends. It is also an opportunity to reflect upon the events of the past year and to welcome the opportunities of the New Year. The province of Prince Edward Island maintains a more historical approach to celebrating levée day. On New Year's Day, all Legions and bars are opened and offer moosemilk (egg nog and rum) from the early morning until the late night. Though there are still the formal receptions held at Government House and Province House, levée day is not only a formal event. It is something that attracts a large number of Islanders, which is quite unusual in comparison to the other provinces where it has gradually become more subdued. Prince Edward Island levées begin at 9 a.m. The historic town of Niagara-on-the-Lake (the first capital of Upper Canada) holds a levée complete with firing of a cannon at Navy Hall (a historic building close to Fort George) The levée is well attended by townspeople and visitors. Toasts are made to the Queen, "our beloved Canada", the Canadian Armed Forces, veterans, "our fallen comrades", as well as "our American friends and neighbours" (this final toast would not have been made two centuries ago, when the town was founded). Greetings are brought from all levels of government and it is a great community event. 
Some religious leaders, such as the Bishop of the Anglican Diocese of Ontario, hold a levée on New Year's Day.

Like the levée itself, the refreshments served at levées have undergone changes (in both importance and variety) over the years. In colonial times, when the formalities of the levée had been completed, guests were treated to wine and cheeses from the homeland. Wines did not travel well during the long ocean voyage to Canada, so to make the cloudy and somewhat sour wine more palatable it was heated with alcohol and spices. The concoction came to be known as le sang du caribou ("reindeer blood"). Under British colonial rule the wine in le sang du caribou was replaced with whisky, which travelled better. This was then mixed with goat's milk and flavoured with nutmeg and cinnamon to produce an Anglicized version called "moose milk". Today's versions of moose milk, in addition to whisky (or rum) and spices, may use a combination of eggnog and ice cream, as well as other alcoholic supplements. The exact recipes used by specific groups may be jealously guarded secrets.

Refreshments were clearly an important element in the New Year's festivities. A report of the New Year's levée held at Brandon House in Manitoba in 1797 indicated that "... in the morning the Canadians (men of the North West Company) make the House and Yard ring with saluting (the firing of rifles). The House then filled with them when they all got a dram each." Simpson's Athabasca Journal reports that on January 1, 1821, "the Festivities of the New Year commenced at four o'clock this morning when the people honoured me with a salute of fire arms, and in half an hour afterwards the whole Inmates of our Garrison assembled in the hall dressed out in their best clothes, and were regaled in a suitable manner with a few flaggon's Rum and some Cakes. A full allowance of Buffaloe meat was served out to them and a pint of spirits for each man." When residents called upon the governor to pay their respects they expected a party: in 1856 on Vancouver Island, there was "an almighty row" when the colonial governor's levée was not to the attendees' liking.

Municipalities with levées
- Ajijic, Jalisco, Mexico
- Almonte, Ontario
- Bracebridge, Ontario
- Brampton, Ontario
- Brantford, Ontario
- Brockville, Ontario
- Cape Breton Regional Municipality, Nova Scotia
- Cambridge, Ontario
- Cobourg, Ontario
- Charlottetown, Prince Edward Island
- Grand Manan, New Brunswick
- Edmonton, Alberta
- Elliot Lake, Ontario
- Esquimalt, British Columbia
- Guelph, Ontario
- Halifax, Nova Scotia
- Hamilton, Ontario
- Kingston, Ontario
- Kitchener, Ontario
- Langford, British Columbia
- London, Ontario
- Medicine Hat, Alberta
- Milton, Ontario
- Mississauga, Ontario
- Moncton, New Brunswick
- Niagara-on-the-Lake, Ontario
- North Saanich, British Columbia
- Oak Bay, British Columbia
- Oakville, Ontario
- Orangeville, Ontario
- Oshawa, Ontario
- Owen Sound, Ontario
- Parrsboro, Nova Scotia
- Pictou, Nova Scotia
- Picton, Ontario
- Redwater, Alberta
- Rivers, Manitoba
- Riverview, New Brunswick
- Saanich, British Columbia
- Shelburne, Nova Scotia
- Sioux Lookout, Ontario
- St. Catharines, Ontario
- Stellarton, Nova Scotia
- Summerside, Prince Edward Island
- Toronto, Ontario
- Victoria, British Columbia
- Windsor, Ontario
- Winnipeg, Manitoba
- Woodstock, New Brunswick
- Yarmouth, Nova Scotia

The levée has a long tradition in the Canadian Forces as one of the activities associated with New Year's Day.
Military commanders garrisoned throughout Canada held local levées since, as commissioned officers, they were expected to act on behalf of the Crown on such occasions. On Vancouver Island (the base for the Royal Navy's Pacific Fleet), levées began in the 1840s. Today, members of the various Canadian Forces units and headquarters across Canada receive and greet visiting military and civilian guests on the first day of the new year.

In military messes, refreshments take a variety of forms: moose milk (with rum often substituted for whisky); the special flaming punch of the Royal Canadian Hussars of Montreal; the Atholl Brose of the Seaforth Highlanders of Vancouver; and the "Little Black Devils" (dark rum and crème de menthe) of the Royal Winnipeg Rifles. Members of Le Régiment de Hull use sabres to uncork bottles of champagne.
https://en.wikipedia.org/wiki/Lev%C3%A9e_(event)