score | text | url
---|---|---
4.09375 | Planetary phase refers to the periods of time during which a planet's surface reflects different amounts of sunlight, revealing different portions of the planet's surface from the perspective of a given point in space.
The two inferior planets, Mercury and Venus, which have orbits that are smaller than the Earth's, exhibit the full range of phases as does the Moon, when seen through a telescope. Their phases are "full" when they are at superior conjunction, on the far side of the Sun as seen from the Earth. (It is possible to see them at these times, since their orbits are not exactly in the plane of Earth's orbit, so they usually appear to pass slightly above or below the Sun in the sky. Seeing them from the Earth's surface is difficult, because of sunlight scattered in Earth's atmosphere, but observers in space can see them easily if direct sunlight is blocked from reaching the observer's eyes.) The planets' phases are "new" when they are at inferior conjunction, passing more or less between the Sun and the Earth. (Sometimes they appear to cross the solar disk, which is called a transit of the planet.) At intermediate points on their orbits, these planets exhibit the full range of crescent and gibbous phases.
The superior planets, orbiting outside the Earth's orbit, do not exhibit the full range of phases; they appear almost always gibbous or full. However, Mars often appears significantly gibbous when it is illuminated by the Sun at a very different angle than that at which it is seen by an observer on Earth, so that an observer on Mars would see the Sun and the Earth widely separated in the sky. This effect is not easily noticeable for the giant planets, from Jupiter outward, since they are so far away that the Sun and the Earth, as seen from these outer planets, would appear to be in almost the same direction.
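The geometry behind these phases can be made concrete: the illuminated fraction of a planet's disk depends only on the phase angle α (the Sun-planet-observer angle) through the standard relation k = (1 + cos α)/2, a result of spherical geometry rather than something stated in this article. A minimal sketch:

```python
import math

def illuminated_fraction(phase_angle_deg: float) -> float:
    """Fraction of the disk that appears lit, given the Sun-planet-observer angle."""
    a = math.radians(phase_angle_deg)
    return (1 + math.cos(a)) / 2

# 0 deg -> "full" (superior conjunction), 90 deg -> half lit,
# 180 deg -> "new" (inferior conjunction)
for angle in (0, 45, 90, 135, 180):
    print(angle, round(illuminated_fraction(angle), 3))
```

An inferior planet runs through the whole range from 0° to 180°, which is why it shows everything from full to new, while a superior planet never strays far from 0° as seen from Earth.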
| https://en.wikipedia.org/wiki/Planetary_phase |
4.28125 | Decomposers are organisms that break down dead or decaying organisms, and in doing so they carry out the natural process of decomposition. Like herbivores and predators, decomposers are heterotrophic, meaning that they use organic substrates to get their energy, carbon and nutrients for growth and development. While the terms decomposer and detritivore are often used interchangeably, detritivores must digest dead matter via internal processes, while decomposers can break down the cells of other organisms using biochemical reactions, without any need for internal digestion. Thus, invertebrates such as earthworms, woodlice, and sea cucumbers are detritivores, not decomposers, in the technical sense, since they must ingest nutrients and are unable to absorb them externally.
Bacteria are important decomposers; they are widely distributed and can break down just about any type of organic matter, and the bacteria on Earth may form a biomass that exceeds that of all living plants and animals. Bacteria are vital in the recycling of nutrients, and many steps in nutrient cycles depend on these organisms.
The primary decomposers of litter in many ecosystems are fungi. Unlike bacteria, which are unicellular organisms, most saprotrophic fungi grow as a branching network of hyphae. While bacteria are restricted to growing and feeding on the exposed surfaces of organic matter, fungi can use their hyphae to penetrate larger pieces of organic matter. Additionally, only wood-decay fungi have evolved the enzymes necessary to decompose lignin, a chemically complex substance found in wood. These two factors make fungi the primary decomposers in forests, where litter has high concentrations of lignin and often occurs in large pieces. Fungi decompose organic matter by releasing enzymes to break down the decaying material, after which they absorb its nutrients. The hyphae used to break down matter and absorb nutrients are also used in reproduction: when the hyphae of two compatible fungi grow close to each other, they fuse together for reproduction and form another fungus. | https://en.wikipedia.org/wiki/Decomposers |
4.0625 | Moon and Regulus
The surface of the Moon is a battered and barren landscape. Its main features are vast volcanic plains and rugged mountain ranges. But its most common features are impact craters — bowl-shaped structures that formed when space rocks slammed into the lunar surface at high speeds.
Actually, not all of the craters were formed by rocks. A few dozen were formed by blobs of metal — spacecraft or rocket stages launched by the United States or Soviet Union. Some of the impacts were accidental, formed by probes that were out of control. Most were intentional, though, with some serving as parts of scientific experiments.
Some of the booster rockets and lunar modules that carried Apollo astronauts to the Moon, for example, were rammed into the surface to create “moonquakes.” Instruments that the astronauts left on the surface measured the quakes, providing important information about the Moon’s structure and composition.
And several craft have been aimed at the lunar poles. The impacts blasted up plumes of dust that scientists examined for particles of ice. The most recent of these impacts confirmed that ice does exist near the south pole.
The craters gouged by these impacts are too small to see from Earth, although several have been photographed by a Moon-orbiting spacecraft. But you can see the Moon itself tonight. It rises in late evening, with the bright star Regulus, the heart of Leo, the lion, rising above it.
Script by Damond Benningfield, Copyright 2013 | https://stardate.org/radio/program/moon-and-regulus-16 |
4.28125 | How we define polynomial functions, and identify their leading coefficient and degree.
How to factor a trinomial with a leading coefficient of 1.
How to simplify expressions by distributing and/or combining like terms.
How to know when a negative coefficient is associated with a power.
Simplify each side of an equation before solving with two variable terms
How to find the general term of a geometric sequence.
How to factor trinomials when the leading coefficient is not one.
How to find the general term of an arithmetic sequence.
How to understand the vocabulary of polynomials.
How we identify the behavior of a polynomial graph near an x-intercept.
Solving two step equations with integers
How to recognize and factor a binomial that is the difference of perfect squares.
How to set up a system of equations to represent word problems where someone is selling multiple items.
How to add or subtract polynomials.
How to use a recursion formula to represent the Fibonacci sequence.
How to describe and label point, line, and plane. How to define coplanar and collinear.
How to break up a logarithm of a large number. | https://www.brightstorm.com/tag/term-leading-coefficient/ |
4 | Personal hygiene can be a sensitive subject to bring up to a classroom of students or to your own children. It is important to instill good hygiene practices early on to prevent cavities, infections and other health problems. Your child or student must also feel safe discussing this topic with you, especially as they begin to go through puberty. Most teens must change their personal hygiene habits at this point. There are a number of ways to teach personal hygiene. In most cases, you must explain how germs work, develop a hygiene plan and make good hygiene fun. This article will explain how to teach personal hygiene.
Teaching Children Personal Hygiene
1. Explain the concept of germs and bacteria. Parenting Magazine suggests you can do this with books, such as "Germs are not for Sharing" or "Germ Stories." You can also do a miniature science experiment where you show your child or classroom videos or microscope slides of typical bacteria found on the hands.
- You can find some videos on YouTube. You can also visit themayoclinic.com or cleaninginstitute.org to discover what hygiene recommendations are currently being made. They may have changed since you were a child, following the discovery of other bacteria.
- To actively demonstrate how we transfer germs, try the chalk experiment with your children. Have a box of chalk powder ready. Dip your hand in it. Shake hands with one child and ask the child to shake hands with other children. All of them end up with chalk powder on their hands, all of it from that initial dip! Explain that germs also spread in a similar manner. This visual explanation may do more than any words to help you show the problem to your children.
2. Teach children the 6 steps of hand-washing immediately after explaining these germs. You should wet your hands, apply soap, lather the soap, rub your hands for at least 20 seconds, rinse them and dry them. You can use your bathroom or a large school bathroom to do this activity.
- Teach children a 20 to 30 second song to sing to themselves while they wash their hands. A song such as "Happy Birthday" or "Twinkle Twinkle Little Star," can help them to scrub their hands clean for the allotted time. Sing with them the first few times.
3. Have the children or students list all the times it is necessary to wash their hands. Discuss daily bathing in connection with hand washing. Enumerate all the places germs like to hide and how best to clean them with soap and water.
- You can either tell the students where and how to wash, or you can adopt the Socratic method. You can ask students where they think germs may grow and how best to get rid of them. Encouraging casual conversation about hygiene will usually create a more comfortable environment.
4. Create a dental hygiene lesson plan. The best way to do this is to ask a dentist to personally come and talk to your class about dental hygiene. You should hand out toothbrushes, toothpaste and dye tablets.
- You can also do this at home with a toothbrush, toothpaste, floss and dye tablets. These are available at most dentists' offices to encourage good brushing. Sometimes having them choose their own toothbrush will encourage them to brush their teeth. Kids often respond better when it's something they have a choice in.
- Ask the dentist to explain the germs found in the mouth and how they can harm you. The dentist should tell the students where they hide and tell them how to get rid of them with a twice per day flossing and brushing routine.
- Ask the children to take out their toothbrush and play a 3 minute song. This is the typical time that most dentists encourage people to brush. Ask the students to brush while the song plays and then spit in the sink.
- Ask them to chew on the dental tablets and rinse. Then, ask them to look in the mirror. The areas where plaque is still active in the mouth will be dyed blue or red, demonstrating how careful we must be when brushing.
- Repeat this activity at home if you do not think your child is brushing enough. Make brushing fun by brushing with them and playing a 3 minute song that they like.
5. Create a lesson to repeat every flu season. Demonstrate how colds and bacteria are passed around and teach the children to cough into their arm, wash their hands and avoid sharing germs through communal food or supplies.
Teaching Personal Hygiene During Puberty
1. Pay attention to the changes in your child's body and smells. As they go through puberty, they usually will begin to have a stronger body odor. Discuss this with your child in a private atmosphere as soon as you sense the change.
- Broaching the subject first will help your child to understand what they are going through. Puberty can include changes in mood, such as depression, and other children can be cruel if your child has a strong odor.
- You may need to explain that daily bathing is more important as people grow older because puberty causes body odor. Also, bacteria picked up in locker rooms or during sports require more attention to showering.
2. Buy your child's first deodorant for them. You can decide whether you want to include an antiperspirant as well. Tell them to use it every morning, usually after they shower, just as you do.
3. Speak with daughters about whether they want to start shaving their legs or armpits. While this is also a family/personal decision, some daughters may be embarrassed if they have dark hair and their other friends are shaving. Demonstrate how you shave and buy a matching razor, or the razor that they like.
4. Speak with your sons about starting to shave. You will need to demonstrate how to safely handle a razor. You may also need to explain that more facial hair will grow in time.
5. Explain what a period is to a child by the time they are 8 or 9. Each girl should know what to expect when the time arrives. Have some feminine hygiene products on hand and explain how often they should be changed.
6. Teach teenage hygiene in a classroom setting by explaining the anatomical changes a body goes through during puberty. This may be done in science class or at a separate time. Many schools choose to split the boys and girls apart when they explain puberty and the necessity of keeping up on personal hygiene.
- If your child is involved in sports, encourage them to shower after intense physical activity. Also, give them waterproof sandals to wear in communal showers. This can prevent athlete's foot and the transfer of that bacteria from a locker room to the home.
- Ask your children to consult you if they are feeling ill. Many schools have policies that prevent students from attending class if they are sick with certain illnesses. Seek medical attention if you feel it is necessary, and wait until the child is feeling normal before returning them to school.
Things You'll Need
- Dental dye tablets
- 3-minute song
- 30-second song
- Hand soap
- Germ books
- Germ slides or videos
- Feminine pads and/or tampons
- Shower shoes
| http://www.wikihow.com/Teach-Personal-Hygiene |
4.25 | Bacteria encased in ice can be resuscitated after thousands, perhaps even millions of years. How these hardy bugs manage to survive deep freeze is something of a mystery. If nothing else, the low levels of radiation hitting Earth’s surface should cause any ice-bound bacterium’s DNA to break apart over time, eventually leading to irreparable damage. Some scientists think bacteria survive cryosleep by encasing their DNA in protective shells known as spores and entering a state of dormancy. Following spore formation, a bacterium can withstand harsh environmental conditions, including desiccation, strong acids, heat and UV radiation.
But other researchers think we aren't giving enough credit to the ice dwellers. Recent studies have shown that some psychrophiles – technical-speak for cold-loving bacteria – are able to maintain basic metabolic functions at subzero temperatures. Could psychrophiles trapped in ice be repairing their DNA faster than the UV radiation bombarding our planet pulls it apart? Microbiologist Markus Dieser at Louisiana State University was interested in finding out. In a study published in the journal Applied and Environmental Microbiology, Dieser and colleagues show for the first time that one bacterium – Psychrobacter arcticus – can repair its DNA at temperatures as low as -15ºC, or 5ºF. Moreover, it can do so 100,000 times faster than damage occurs.
P. arcticus is an innocuous little bacterium that is famous for one thing: it really likes the cold. It can grow and metabolize at -10ºC, making it one of the most psychrophilic organisms on Earth. To investigate P. arcticus's ability to repair DNA in deep freeze, Dieser and colleagues isolated viable P. arcticus cells from Siberian permafrost that has been frozen for 20 to 30 thousand years. In the lab, the researchers dosed their cell cultures with a large pulse of ionizing radiation – roughly equal to what P. arcticus might experience over 225 thousand years of field exposure. By using such an intense burst of radiation, the team hoped to induce many "double-strand breaks", or breaks that cause small DNA fragments to separate off from P. arcticus's main chromosome. They incubated the irradiated cultures at -15ºC and monitored their survival over the course of 505 days.
Rather astoundingly, the scientists found no significant difference between the survival rates of irradiated and non-irradiated bacteria over the year-and-a-half-long study. While this finding alone suggests P. arcticus can repair its DNA at subzero temperatures, Dieser and colleagues wanted direct evidence. They used pulse-field electrophoresis, a technique which separates DNA fragments by size, to determine how many DNA double-strand breaks occurred after radiation exposure, and whether the DNA fragments reassembled themselves over time. Like Humpty Dumpty rebuilding himself, the scientists could literally watch P. arcticus reassemble its genome. On average, P. arcticus was able to patch thirteen double-strand DNA breaks over the course of the study – quite close to the roughly sixteen breaks induced by radiation.
Not only can P. arcticus repair its DNA at subzero temperatures, it can do so really fast. Using annual radiation exposure data collected in the field, Dieser estimates that P. arcticus can repair double-strand breaks 100,000 times faster than they occur. The discovery has important implications for the survival of life in extreme environments, including cold extraterrestrial environments. For instance on the surface of Mars, where radiation levels are ~400 times greater than in the Siberian permafrost, P. arcticus could still patch DNA breaks 280 times faster than they would accrue. As scientists continue exploring the "cold limit" to essential cellular functions such as DNA repair, they will continue to refine, and perhaps expand, our understanding of the fundamental boundaries for life.
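The 100,000-fold figure can be sanity-checked from the numbers quoted above. The sketch below is a back-of-envelope estimate only, taking the article's statements that the dose corresponded to roughly 225,000 years of field exposure and induced about sixteen breaks, of which thirteen were repaired over the 505-day study:

```python
# Rough check of the "100,000x faster" claim, using figures quoted above.
breaks_induced, field_years = 16, 225_000
damage_rate = breaks_induced / (field_years * 365.25)  # field breaks per day

breaks_repaired, study_days = 13, 505
repair_rate = breaks_repaired / study_days             # breaks repaired per day

print(f"repair rate / damage rate ~ {repair_rate / damage_rate:,.0f}")
# ~130,000 -- the same order of magnitude as the published estimate.
```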
Markus Dieser, John R. Battista, & Brent C. Christner (2013). DNA Double-Strand Break Repair at −15°C Applied and Environmental Microbiology DOI: 10.1128/AEM.02845-13 | http://lonelyspore.com/2014/02/03/frozen-bacteria-repair-their-dna-at-15oc/ |
4.3125 | The Box of Secrets
The Box of Secrets is a wonderful way of introducing a topic to children... or giving them a puzzle that they have to solve. Just find an old box and decorate it with our box labels (available for download below).
Here are some ways that you could use The Box of Secrets in your lessons:
- Use it as the starting point for a writing activity. Place the box somewhere prominent in your classroom and ask children to discuss what might be inside it. Where did it come from? Why is it in the classroom?
- Challenge your children to write a story that explains where it came from and what might happen if it is opened.
- Put something inside the box, linked to your current topic, and show one child what it is. Ask them to describe it to others without saying the name of it.
- Make a list of questions that might help you to identify what is inside the box.
- Choose an item to put inside the box and make a list of clues that will help others to identify what the object is.
- Use it to review children's understanding of 2D and 3D shapes in Maths. Put a shape inside it, along with clues that describe the shape. Pull out a clue and ask children to try and identify the shape that might be inside it.
- Put a historical artefact inside the box and use it to introduce a History topic to your pupils.
- Think about the different types of secrets that people have. Why do people sometimes keep secrets? Are secrets a good or a bad thing?
The resources are available in three different colour schemes (brown and gold, black and red, green and purple).
Do you have any other ideas for using this resource in your classroom? Share them by commenting below... | http://www.teachingideas.co.uk/planning/the-box-of-secrets |
4.15625 | This is a set of four one-page problems about the distance craft travel on Mars. Learners will use the Pythagorean Theorem to determine distance between a series of hypothetical exploration sites within Gale Crater on Mars. Options are presented so that students may learn about the Mars Science Laboratory (MSL) mission through a NASA press release or by viewing a NASA eClips™ video [6 min]. Common Core State Standards for Mathematics and English Language Arts are identified. This activity is part of the Space Math multi-media modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school.
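The underlying computation is a direct application of the theorem; in the sketch below the site coordinates are invented for illustration (the actual problem set supplies its own):

```python
import math

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Straight-line distance between two grid points via the Pythagorean Theorem."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

# Hypothetical exploration sites in Gale Crater, in km east/north of a landing site.
landing, site1, site2 = (0.0, 0.0), (3.0, 4.0), (6.0, 8.0)
print(distance(landing, site1))  # 5.0 km: sqrt(3^2 + 4^2)
print(distance(site1, site2))    # 5.0 km
```
| http://www.nasawavelength.org/resource/nw-000-000-003-916/ |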
4.03125 | Melt pond "skylights" enable massive under-ice bloom in Arctic
For much of the winter, the land within the Arctic Circle receives little direct sunlight*, and most of the surface of the Arctic Ocean is capped by ice. Beneath the ice cover, phytoplankton—the microscopic, plant-like organisms that underpin the entire ocean food web—take a “long winter’s nap.”
When the Sun returns and the ice retreats, it’s like a cover being drawn off the roof of a greenhouse. Along the edge of the retreating ice cover, the surface water explodes with blooms of phytoplankton.
The pair of satellite images above shows a bloom in the Chukchi Sea northwest of Alaska on July 10, 2011. The top image is like a digital photo, showing swirls of sea ice along the crumbling edge of the consolidated ice pack. The bottom image is based on satellite observations of reflected light in wavelengths that are especially sensitive to the presence of chlorophyll. (Phytoplankton use chlorophyll for photosynthesis, just like land plants.)
North of Wrangel Island, the sea ice has a dingy look. This may be from sediment, but its distance from shore and the fact that it is fringed on both sides by waters with extremely high levels of chlorophyll suggest that the ice is being discolored by algae and other phytoplankton.
According to Arctic oceanographer Karen Frey, the discoloration is consistent with an unusual phenomenon that she encountered while on a research cruise in the Chukchi Sea in the first part of July: a massive bloom of phytoplankton stretching up to 100 kilometers (62 miles) under the ice pack.
Historically, the ice in this area has been thick enough even in spring to keep the waters below in darkness, Frey says. In the past decade, though, the ice conditions have changed dramatically, with a thinner ice cover that is laced with shallow ponds of meltwater. The ponds act like skylights, allowing light to filter through and support phytoplankton blooms.
The images above are like a photo and its negative, showing what the ice looks like from above (left, taken from the deck of the U.S. Coast Guard Cutter Healy) and below (right, captured by a waterproof HD camera lowered through a hole in the ice.) The pair of photos below shows the dramatic difference in water color and clarity during the massive under-ice bloom.
According to Frey, the presence of this bloom isn't just a "gee-whiz" phenomenon. Previous satellite-based estimates of Arctic phytoplankton productivity have generally assumed that nothing much is happening under the consolidated ice pack. Preliminary estimates of the size of this bloom and the area of sea ice around the rest of the Arctic that is in similar, melt-ponded condition each summer suggest that Arctic phytoplankton productivity could be ten times higher than previously estimated.
Read more about Arctic Ocean ecology in the Marine Ecosystems chapter of the 2012 Arctic Report Card.
Arrigo, K. R., Perovich, D. K., Pickart, R. S., Brown, Z. W., Dijken, G. L. van, Lowry, K. E., … Swift, J. H. (2012). Massive Phytoplankton Blooms Under Arctic Sea Ice. Science, 336(6087), 1408–1408. doi:10.1126/science.1215065
Frey, K. E., Perovich, D. K., & Light, B. (2011). The spatial distribution of solar radiation under a melting Arctic sea ice cover. Geophysical Research Letters, 38(22), L22501. doi:10.1029/2011GL049421
Satellite images by Jesse Allen, NASA Earth Observatory team, based on Aqua MODIS data provided by the GSFC Ocean Color team. Photos courtesy Karen Frey, Clark University.
*Updated Dec. 10, 2012. Previous version stated that “After the solstice, the Sun never rises on the land in the Arctic Circle,” but didn’t explain that the polar darkness lasts only a few days at that latitude before polar “twilight” returns. | https://www.climate.gov/news-features/features/melt-pond-skylights-enable-massive-under-ice-bloom-arctic |
4.15625 | The wattmeter is an instrument for measuring the electric power (or the supply rate of electrical energy) in watts of any given circuit. Electromagnetic wattmeters are used for measurement of utility frequency and audio frequency power; other types are required for radio frequency measurements.
The current coils are connected in series with the circuit, while the potential coil is connected in parallel. Also, on analog wattmeters, the potential coil carries a needle that moves over a scale to indicate the measurement. A current flowing through the current coil generates an electromagnetic field around the coil. The strength of this field is proportional to the line current and in phase with it. The potential coil has, as a general rule, a high-value resistor connected in series with it to reduce the current that flows through it.
For AC power, current and voltage may not be in phase, owing to the delaying effects of circuit inductance or capacitance. On an AC circuit the deflection is proportional to the average instantaneous product of voltage and current, thus measuring true power, P = VI cos φ. Here, cos φ represents the power factor, which shows that the power transmitted may be less than the apparent power obtained by multiplying the readings of a voltmeter and ammeter in the same circuit.
The two circuits of a wattmeter can be damaged by excessive current. The ammeter and voltmeter are both vulnerable to overheating — in case of an overload, their pointers will be driven off scale — but in the wattmeter, either or even both the current and potential circuits can overheat without the pointer approaching the end of the scale. This is because the position of the pointer depends on the power factor, voltage and current. Thus, a circuit with a low power factor will give a low reading on the wattmeter, even when both of its circuits are loaded to the maximum safety limit. Therefore, a wattmeter is rated not only in watts, but also in volts and amperes.
A typical wattmeter in educational labs has two voltage coils (pressure coils) and a current coil. The two pressure coils can be connected in series or parallel to each other to change the range of the wattmeter. The pressure coil can also be tapped to change the meter's range: if the pressure coil has a range of 300 volts, half of it can be used so that the range becomes 150 volts.
An early current meter was the electrodynamometer, the basic principles of which were laid out in an 1848 paper by German physicist Wilhelm Eduard Weber, later used in inventor Werner von Siemens' electrodynamometer in 1880. The Siemens electrodynamometer is a form of an electrodynamic ammeter, that has a fixed coil which is surrounded by another having its axis at right angles to that of the fixed coil. This second coil is suspended by a number of silk fibres, and to the coil is also attached a spiral spring the other end of which is fastened to a torsion head. If then the torsion head is twisted, the suspended coil experiences a torque and is displaced through an angle equal to that of the torsion head. The current can be passed into and out of the movable coil by permitting the ends of the coil to dip into two mercury cups.
If a current is passed through the fixed coil and movable coil in series with one another, the movable coil tends to displace itself so as to bring the axes of the coils, which are normally at right angles, more into the same direction. This tendency can be resisted by giving a twist to the torsion head and so applying to the movable coil through the spring a restoring torque, which opposes the torque due to the dynamic action of the currents. If then the torsion head is provided with an index needle, and also if the movable coil is provided with an indicating point, it is possible to measure the torsional angle through which the head must be twisted to bring the movable coil back to its zero position. In these circumstances, the torsional angle becomes a measure of the torque and therefore of the product of the strengths of the currents in the two coils, that is to say, of the square of the strength of the current passing through the two coils if they are joined up in series. The instrument can therefore be calibrated by passing through it known continuous currents, and it then becomes available for use with either continuous or alternating currents. The instrument can be provided with a calibration curve or table showing the current corresponding to each angular displacement of the torsion head.
Electronic wattmeters are used for direct, small power measurements or for power measurements at frequencies beyond the range of electrodynamometer-type instruments.
A modern digital electronic wattmeter/energy meter samples the voltage and current thousands of times a second. For each sample, the voltage is multiplied by the current at the same instant; the average over at least one cycle is the real power. The real power divided by the apparent volt-amperes (VA) is the power factor. A computer circuit uses the sampled values to calculate RMS voltage, RMS current, VA, power (watts), power factor, and kilowatt-hours. The readings may be displayed on the device, retained to provide a log and calculate averages, or transmitted to other equipment for further use. Wattmeters vary considerably in correctly calculating energy consumption, especially when real power is much lower than VA (highly reactive loads, e.g. electric motors). Simple meters may be calibrated to meet specified accuracy only for sinusoidal waveforms. Waveforms for switched-mode power supplies as used for much electronic equipment may be very far from sinusoidal, leading to unknown and possibly large errors at any power. This may not be specified in the meter's manual.
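To make that procedure concrete, here is a minimal sketch of the sample-multiply-average computation described above (with assumed supply values, not the firmware of any real meter):

```python
import math

F = 50.0          # supply frequency, Hz (assumed)
FS = 10_000.0     # sampling rate, Hz (assumed)
N = int(FS / F)   # number of samples in one full cycle

V_PEAK, I_PEAK = 325.0, 2.0   # assumed peak volts and amps
PHI = math.radians(30)        # assumed current lag (inductive load)

t = [n / FS for n in range(N)]
v = [V_PEAK * math.sin(2 * math.pi * F * ti) for ti in t]
i = [I_PEAK * math.sin(2 * math.pi * F * ti - PHI) for ti in t]

# Real power: average of the instantaneous v*i products over one cycle.
real_power = sum(vi * ii for vi, ii in zip(v, i)) / N

# RMS values, apparent power (VA), and power factor.
v_rms = math.sqrt(sum(vi * vi for vi in v) / N)
i_rms = math.sqrt(sum(ii * ii for ii in i) / N)
apparent = v_rms * i_rms
power_factor = real_power / apparent

print(f"P = {real_power:.1f} W, S = {apparent:.1f} VA, PF = {power_factor:.3f}")
# PF comes out ~cos(30 deg) ~ 0.866, matching P = VI cos(phi) for sinusoids.
```

For the non-sinusoidal waveforms mentioned above, this sample-by-sample approach still yields the true average power, which is exactly why meters calibrated only for sine waves can be badly wrong.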
Precision and accuracy
There are limitations to measuring power with inexpensive wattmeters, or indeed with any meters not designed for low-power measurements. This particularly affects low power (e.g. under 10 watts), as used in standby; readings may be so inaccurate as to be useless (although they do confirm that standby power is low, rather than high). The difficulty is largely due to difficulty in accurate measurement of the alternating current, rather than voltage, and the relatively little need for low-power measurements. The specification for the meter should specify the reading error for different situations. For a typical plug-in meter the error in wattage is stated as ±5% of measured value ±10 W (e.g., a measured value of 100W may be wrong by 5% of 100 W plus 10 W, i.e., ±15 W, or 85–115 W); and the error in kW·h is stated as ±5% of measured value ±0.1 kW·h. If a laptop computer in sleep mode consumes 5 W, the meter may read anything from 0 to 15.25 W, without taking into account errors due to non-sinusoidal waveform. In practice accuracy can be improved by connecting a fixed load such as an incandescent light bulb, adding the device in standby, and using the difference in power consumption. This moves the measurement out of the problematic low-power zone.
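The worst-case bounds in that example follow mechanically from the ±5% of reading ±10 W specification; a small sketch reproduces them:

```python
def reading_bounds(watts: float, pct: float = 0.05, offset: float = 10.0):
    """Worst-case bounds for a meter spec of +/- pct of reading +/- offset watts."""
    err = pct * watts + offset
    return max(watts - err, 0.0), watts + err

print(reading_bounds(100.0))  # (85.0, 115.0), as in the example above
print(reading_bounds(5.0))    # (0.0, 15.25): the low-power problem zone
```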
Instruments with moving coils can be calibrated for direct current or power frequency currents up to a few hundred Hz. At radio frequencies a common method is a rectifier circuit arranged to respond to current in a transmission line; the system is calibrated for the known circuit impedance. Diode detectors are either directly connected to the source, or used with a sampling system that diverts only a portion of the RF power through the detector. Thermistors and thermocouples are used to measure heat produced by RF power and can be calibrated either directly or by comparison with a known reference source of power. A bolometer power sensor converts incident radio frequency power to heat. The sensor element is maintained at a constant temperature by a small direct current. The reduction in current required to maintain temperature is related to the incident RF power. Instruments of this type are used throughout the RF spectrum and can even measure visible light power. For high-power measurements, a calorimeter directly measures heat produced by RF power.
An instrument which measures electrical energy in watt-hours (an electricity meter or energy analyser) is essentially a wattmeter which accumulates or averages readings. Digital electronic instruments measure many parameters and can be used where a wattmeter is needed: volts, current in amperes, apparent instantaneous power, actual power, power factor, energy in [k]W·h over a period of time, and cost of electricity consumed.
- physics.kenyon.edu, Electrodynamometer
- US Lawrence Livermore laboratory, Standby Power, measuring standby
- Data listed in text from manual for inexpensive plug-in electricity meter Brennenstuhl PM230. The lowest measurable current is given as 0.02 A, which corresponds to about 5 W at 230 VAC
- Joseph J. Carr, RF Components and Circuits, Newnes, 2002, ISBN 978-0-7506-4844-8, pages 351-370 | https://en.wikipedia.org/wiki/Wattmeter |
4.3125 | An Introduction to Solar System Astronomy
Prof. Richard Pogge, MTWThF 2:30
Lecture 20: Tides
Tides are caused by differences in the gravitational pulls of the Moon and Sun between the near and far sides of the Earth.
- Earth's Tidal Bulge
- Spring & Neap Tides
Tidal Effects in the Earth-Moon System:
- Tidal Locking of the Moon
- Tidal Braking slowing the Earth's Rotation
- Lunar Recession (increasing size of the Moon's orbit)
Ocean Tides are a familiar phenomenon to those who make their homes near the sea:
- Sea level is highest twice a day at "high tide"
- Sea level is lowest twice a day at "low tide"
People near the sea quickly notice that the timing of the tides is governed by the motions of the Moon:
- The time between successive high tides is 12h 25m
- The time between successive moonrises is 24h 50m, or twice the time between high tides
This folk intuition is correct: Tides are in fact caused primarily by the gravitational pull of the Moon.
The gravitational force exerted by the Moon on the near and far sides of the Earth is different:
- The Moon is 12740 km closer to the near side of the Earth than the far side.
- This results in a 7% stronger gravitational force on the near side compared to the far side.
This causes a net front-to-back differential gravitational force felt by the Earth, which:
- Stretches the Earth along the Moon-Earth line
- Squeezes the Earth at right angles to this line
The net result is 2 tidal bulges on opposite sides of the Earth, and so 2 tides per day as the Earth rotates through the Earth-Moon line.
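The 7% figure can be checked directly from Newton's law of gravity; the sketch below uses standard values for the lunar mass, the Earth-Moon distance, and the Earth's radius:

```python
# Differential ("tidal") pull of the Moon on the near vs. far side of Earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.35e22     # lunar mass, kg
D = 3.844e8          # mean Earth-Moon distance, m
R_E = 6.37e6         # Earth's radius, m

g_near = G * M_MOON / (D - R_E) ** 2   # acceleration at the near side
g_far = G * M_MOON / (D + R_E) ** 2    # acceleration at the far side

print(f"near/far ratio: {g_near / g_far:.3f}")   # ~1.07, i.e. ~7% stronger
```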
Land and Sea Tides
How big is the Tidal Bulge of the Earth?
The main body of the Earth is made of rock, which is stiff and resists deformation by tides:
- "Body Tides" on Earth are only about 30 centimeters high.
The oceans are made of water, which is fluid and flows easily in response to the tidal forces:
- Ocean Tides on Earth are about 1 meter high in the open sea.
- Near the shore, tidal flows and the seafloor shape can work together to produce much larger local tides.
Some of the most extreme ocean tides on Earth are observed in Canada's Bay of Fundy between Nova Scotia and New Brunswick. Here the shape of the bay leads to average high tides of 12 meters compared to low tide, with maximum high tides of up to 17 meters.
Spring & Neap Tides
Gravity is a universal force, so tides are raised between any two bodies. The Sun also raises tides on the Earth:
- The difference between the gravity force on the day and night sides of the Earth is about half that due to the Moon.
The Sun and Moon work together to give different kinds of tides at different times of a Lunar Month:
- The highest High Tides (Spring Tides) occur during New Moon and Full Moon, when the Moon and Sun are lined up with the Earth.
- The lowest High Tides (Neap Tides) occur at First Quarter and Last Quarter phase, when the Moon and Sun are at right angles as seen from Earth.
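The "about half" figure follows from the standard result that tide-raising ability scales as mass over distance cubed (M/d³); a quick check with standard values for the Sun and Moon:

```python
# Tide-raising ability scales as M / d^3, so compare the Sun and the Moon.
M_SUN, D_SUN = 1.989e30, 1.496e11    # kg, m
M_MOON, D_MOON = 7.35e22, 3.844e8    # kg, m

sun_tide = M_SUN / D_SUN ** 3
moon_tide = M_MOON / D_MOON ** 3

print(f"solar tide / lunar tide = {sun_tide / moon_tide:.2f}")  # ~0.46
```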
Tidal Locking of the Moon
Similarly, the Earth raises tides on the Moon:
- The Earth is more massive, and the Moon's radius is smaller.
- Earth tides are ~20x stronger than Moon tides.
The early Moon rotated much faster:
- This means it was rotating through its tidal bulge.
- This generated tremendous internal friction, slowing the Moon's rotation.
- Eventually, the Moon's rotation slowed until it matched its orbital period, and the friction stopped.
The end result is that the Moon got Tidally Locked into synchronous rotation. This is why the Moon always keeps the same face towards the Earth, as we saw back in Lecture 8. Because the rotation and orbit periods are the same, we say that the Moon is locked in a 1:1 Tidal Resonance with the Earth.
Tidal Braking of the Earth
The Earth rotates faster than the Moon orbits the Earth (24 hours compared to 27 days). There is therefore friction between the ocean and the seabed as the Earth turns out from underneath the ocean tidal bulges:
- This drags the ocean bulge in the eastward direction of the Earth's rotation.
- The result is that ocean tides lead the Moon by about 10 degrees.
The friction from the ocean tides robs the Earth of rotational energy, acting like brake pads. This effect is known as Tidal Braking:
- Slows the Earth's rotation a tiny amount.
- The length of the day is getting gradually longer by about 2.3 milliseconds per century at the present time.
Another effect of the Tidal Braking is that the extra mass in the ocean bulges leading the Moon causes a small net forward tug:
- Results in a net forward acceleration of the Moon
- Moves the Moon into a slightly larger orbit
This effect is known as Lunar Recession:
- Steady increase in the average Earth-Moon distance by about 3.8 cm per year
The Lunar Recession rate is measurable using Laser Ranging experiments that use retroreflector arrays left on the Moon by the Apollo missions (Apollo 11, 14, and 15), and two Soviet landers (Lunokhod 1 and 2). Telescopes on Earth bounce laser beams off the reflector arrays and measure the distance to the Moon to millimeter precision.
The Once and Future Moon
Lunar Recession and Tidal Braking of the Earth's rotation are coupled: the rotational energy being taken from the Earth in braking is effectively being transferred, via tides, to the Moon. This extra energy lifts it into a higher orbit.
As a result:
- The length of the day has gotten longer at a rate of about 1.7 milliseconds per century [see Note 20.3] averaged over the past 2700 years.
- The Moon recedes by about 3.84 meters/century on average.
After many Billions of years, this will add up until:
- The Moon will be ~50% farther away from the Earth
- The Lunar Sidereal Month will be about 47 days long
- The Earth's rotation period (the day) will be 47 days long
The Earth & Moon would then be locked together in a 1:1 Tidal Resonance, and always keep the same face towards each other.
Once the Earth and Moon are tidally locked, further tidal evolution should stop. However, remember that the Earth and Moon orbit the Sun, and so tidal effects from the Sun will come into play and continue to evolve the Earth-Moon system dynamically. The details are quite complicated, and beyond the scope of this course (and, to be fair, even the experts argue among themselves about the details - it is a difficult problem).
Tidal phenomena are extremely important throughout the Solar System. In the remainder of the class, we will often encounter examples of tides playing a role in the dynamics of planets and their moons. Tides are essential to understanding the dynamical evolution of the Solar System:
- Tidal Resonances determining rotation periods (Moon & Mercury)
- Tidal Locking (Pluto & Charon system)
- Tidally-induced Heating (Io around Jupiter, and Triton around Neptune)
Updated: 2007 October 14
Copyright © Richard W. Pogge, All Rights Reserved. | http://www.astronomy.ohio-state.edu/~pogge/Ast161/Unit4/tides.html |
4.03125 | Children practice naming colors as they pass a pumpkin full of crayons.
Skill: Color Recognition
Large plastic pumpkin
One crayon per student — use a variety of colors
- In a large plastic pumpkin, place one crayon per child. Use a variety of colors.
- Children sit in a circle and pass the pumpkin while music plays.
- When the music stops, the child holding the pumpkin chooses a crayon and names the color.
- Continue until all have had a turn.
Related lesson plans:
- Pumpkin Investigation: Make a book describing the inside and outside of a pumpkin using the five senses and observation skills. Objectives: use words to describe inside and outside of the pumpkin; record descriptions on paper (in a pumpkin book). Materials: Pumpkin with...
- The Cheeto Walk Learning Game: Students learn number recognition, following simple rules, and listening skills. Objectives: number recognition, following simple rules, and listening skills. Materials: number cards made out of construction paper, tape, music, tape recorder, cheetos (or other edibles), small pieces of paper with...
- A Pumpkin Patch Art Activity: A fun art activity that ties into a pumpkin theme. It would be a great follow up to a trip to the pumpkin patch! Materials: orange paint, brown bags of all sizes, green pipe cleaners, black precut triangles and circles...
- Yummy Pumpkin Treats Snack Activity: After reading Pumpkin, Pumpkin by Jeanne Titherington we make Yummy Pumpkin Treats. This activity enhances math with counting and following directions. You will need: Pumpkin, Pumpkin by Jeanne Titherington, a pumpkin cookie cutter, bread, cream cheese, raisins, candy corn...
- Pumpkin Circuit Learning Centers: Set up pumpkin activities in a circuit. Cooperative groups rotate from station to station. This activity may be planned during a unit on pumpkins. I usually plan it toward the end of my unit on pumpkins in late October. Materials:... | http://lessons.atozteacherstuff.com/1/pass-the-pumpkin-game/ |
4.40625 | What is a Pronoun?
In grammar, a pronoun is defined as a word or phrase that may be substituted for a noun or noun phrase, which once replaced, is known as the pronoun’s antecedent. How is this possible? In a nutshell, it’s because pronouns can do everything that nouns can do. A pronoun can act as a subject, direct object, indirect object, object of the preposition, and more.
Without pronouns, we'd have to keep on repeating nouns, and that would make our speech and writing repetitive, not to mention cumbersome. Most pronouns are very short words. Examples include: he, she, it, we, they, me, him, her, us, and them.
As mentioned, pronouns are usually used to replace nouns, however they can also stand in for certain adverbs, adjectives, and other pronouns. Anytime you want to talk about a person, animal, place or thing, you can use pronouns to make your speech or writing flow better.
Types of Pronouns
Pronouns can be divided into numerous categories including:
- Indefinite pronouns – those referring to one or more unspecified objects, beings, or places
- Personal pronouns – those associated with a certain person, thing, or group; all except you have distinct forms that indicate singular or plural number
- Reflexive pronouns – those preceded by the adverb, adjective, pronoun, or noun to which they refer, and ending in –self or –selves
- Demonstrative pronouns – those used to point to something specific within a sentence
- Possessive pronouns – those designating possession or ownership
- Relative pronouns – those which refer to nouns mentioned previously, acting to introduce an adjective (relative) clause
- Interrogative pronouns – those which introduce a question
- Reciprocal pronouns – those expressing mutual actions or relationship; i.e. one another
- Intensive pronouns – those ending in –self or –selves and that serve to emphasize their antecedents
There are a few important rules for using pronouns. As you read through these rules and the examples in the next section, notice how the pronoun rules are followed. Soon you’ll see that pronouns are easy to work with.
- Subject pronouns may be used to begin sentences. For example: We did a great job.
- Subject pronouns may also be used to rename the subject. For example: It was she who decided we should go to Hawaii.
- Indefinite pronouns don’t have antecedents. They are capable of standing on their own. For example: No one likes the sound of fingernails on a chalkboard.
- Object pronouns are used as direct objects, indirect objects, and objects of prepositions. These include: you, me, him, her, us, them, and it. For example: David talked to her about the mistake.
- Possessive pronouns show ownership. They do not need apostrophes. For example: The cat washed its whiskers.
Examples of Pronouns
In the following examples, the pronouns are italicized.
- We are going on vacation.
- Don’t tell me that you can’t go with us.
- Anybody who says it won’t be fun has no clue what they are talking about.
- These are terribly steep stairs.
- We ran into each other at the mall.
- I’m not sure which is worse: rain or snow.
- It is one of the nicest Italian restaurants in town.
- Richard stared at himself in the mirror.
- The laundry isn’t going to do itself.
- Someone spilled orange juice all over the countertop!
The following exercises will help you gain greater understanding about how pronouns work. Choose the best answer to complete each sentence.
- This is __________ speaking.
- Greg is as smart as __________ is.
- The dog chewed on __________ favorite toy.
- It could have been __________ .
- Terry is taller than __________ am.
Answers:
- B. This is he speaking.
- C. Greg is as smart as she is.
- D. The dog chewed on its favorite toy.
- B. It could have been anyone.
- A. Terry is taller than I am.
List of Pronouns
As you read through this list of pronouns, remember that each one of these pronouns is a word that can be used to take the place of a noun. Think about ways to use the pronouns on this list in sentences, as this will increase your understanding. | http://www.gingersoftware.com/content/grammar-rules/pronouns-2/ |
4.09375 | We Earthlings have a pleasant climate in part because we're located just the right distance from the Sun. Unlike Venus, we're not so close as to be too hot, and unlike Mars, we're not so far as to be too cold.
But two scientists recently suggested that rocky planets like Earth can have a mild climate even if they’re many times farther from their parent stars than Earth is. That means that millions of life-supporting planets could exist at much greater distances from their stars than currently thought possible.
Earth owes much of its climate not just to its location but also to its atmosphere, which contains water vapor and carbon dioxide. These greenhouse gases trap solar heat and raise the average temperature above freezing.
At large distances from the Sun, though, both gases freeze and lose their power to warm the air. But one greenhouse gas stays a gas, even at the low temperatures that prevail far from the Sun: hydrogen, the lightest and most abundant element in the cosmos.
The scientists calculated that a rocky planet like Earth with a thick atmosphere of hydrogen could stay warm even if it were a billion miles from the Sun, which is about as far out as Saturn is. And despite the thick atmosphere, enough sunlight would filter through for plants to conduct photosynthesis.
In fact, if there are any astronomers on such a far-out world, they may wonder whether life could exist on a planet located a mere 93 million miles from its sun.
Script by Ken Croswell, Copyright 2011 | https://stardate.org/radio/program/far-out-earths |
4.25 | The esophagus (oesophagus) is an organ in vertebrates which consists of a muscular tube through which food passes from the pharynx to the stomach. During swallowing, food passes from the mouth through the pharynx into the esophagus and travels via peristalsis to the stomach. The word esophagus is derived from the Latin œsophagus, which derives from the Greek word oisophagos, lit. "entrance for eating." In humans the esophagus is continuous with the laryngeal part of the pharynx at the level of the C6 vertebra. The esophagus passes through the posterior mediastinum in the thorax and enters the abdomen through a hole in the diaphragm at the level of the tenth thoracic vertebra (T10). It is usually about 10–50 cm long depending on individual height. It is divided into cervical, thoracic, and abdominal parts. Due to the inferior pharyngeal constrictor muscle, the entry to the esophagus opens only when swallowing or vomiting.
The layers of the esophagus are as follows: the mucosa, comprising nonkeratinized stratified squamous epithelium (which serves a protective role due to the high-volume transit of food, saliva, and mucus), the lamina propria, and the muscularis mucosae (smooth muscle); the submucosa (which contains the mucus-secreting esophageal glands and connective structures termed papillae); and the muscularis externa (or "muscularis propria").
Normally, the esophagus has three anatomic constrictions at the following levels: at the esophageal inlet, where the pharynx joins the esophagus, behind the cricoid cartilage (14–16 cm from the incisor teeth); where its anterior surface is crossed by the aortic arch and the left bronchus (25–27 cm from the incisor teeth); and where it pierces the diaphragm (36–38 cm from the incisor teeth). The distances from the incisor teeth are important, as they are useful for diagnostic endoscopic procedures.
The junction between the esophagus and the stomach (the gastroesophageal junction or GE junction) is not actually considered a true valve, although it is sometimes called the cardiac sphincter, cardia, or cardias. In much of the gastrointestinal tract, smooth muscles contract in sequence to produce a peristaltic wave which forces the ball of food (called a bolus) along while it is in the esophagus. In humans, peristalsis is the sequential contraction of smooth muscles that propels contents through the digestive tract.
The esophagus deposits a bolus of food in the stomach at the gastroesophageal junction; the pharyngeal entry to the esophagus opens only when swallowing or vomiting; during endoscopy, the length of the esophagus is measured from the incisor teeth; and food travels through the muscular tube of the esophagus via peristalsis. | https://www.boundless.com/physiology/textbooks/boundless-anatomy-and-physiology-textbook/the-digestive-system-23/organs-of-the-digestive-system-220/esophagus-1079-2196/ |
4.125 | |By: John Khu|
Communication can be:
1. Verbal communication, which requires language. A language is a system of arbitrary signals, such as voice sounds, gestures or written symbols, which communicate thoughts or feelings.
2. Non-verbal communication, which does not need language to exchange one's thoughts. Silence is the best example. In certain contexts, silence can convey its own meaning, e.g. reverence, indifference, emotional coldness, rudeness, thoughtfulness, humility, aggressiveness. Silent communication shows more emotion than verbal. Non-verbal communication includes gestures, body language, signs, symbols, etc.
In everyday life we come across various forms of communication. Between parties, communication content includes acts that declare knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, including gestures (nonverbal communication, sign language and body language), writing and speech. The form depends on the symbol systems used. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or another entity (such as a corporation or group).
A particular instance of communication is called a speech act. A speech act typically follows a variation of logical means of delivery. The most common of these, and perhaps the best, is the dialogue. The dialogue is a form of communication where both parties are involved in sending information. There are many other forms of communication, but the reason the dialogue is good is that it lends itself to clearer communication due to feedback. (Feedback being encoded information, either verbal or nonverbal, sent back to the original sender (now the receiver) and then decoded.)
Although we may not realize it, every day we communicate with 10 to 1000 people in one way or the other, whether directly or indirectly. All of us come across situations when things go wrong due to lack of communication. There can be various barriers in communication which may lead to such situations.
The following factors can impede human communication:
1. Not understanding the language
Verbal and non-verbal messages are in a different language. This includes not understanding the idioms used by another sub-culture or group. Not understanding the language also means that body language cannot be understood. One person may greet another person differently; if the two people do not understand each other, it can cause a rift in communication.
2. Not understanding the context
Not knowing and or understanding the history of the occasion, relationship, or culture. Intent can be perceived differently by the receiver than what the sender intended.
3. Intentionally delivering an obscure or confusing message
4. Inadequate attention to processing a message
This is not limited to live conversations or broadcasts. Any person may improperly process any message if they do not focus adequately, sometimes due to "static", or real-life events that cause distraction. This is why an interactive form of communication, one with lots of questions and answers for clarity, would be best, so it is easier to stay involved in the message and to have less miscommunication.
So communication is an important activity in one's life. Whether humans or animals, everyone wants to share their feelings, emotions, thoughts and ideas, and hence develops one way or the other to communicate!
| http://www.streetdirectory.com/travel_guide/187716/communications/what_are_the_important_aspects_of_communication.html |
4.0625 | In computing, a stateless protocol is a communications protocol that treats each request as an independent transaction that is unrelated to any previous request so that the communication consists of independent pairs of request and response. A stateless protocol does not require the server to retain session information or status about each communications partner for the duration of multiple requests. In contrast, a protocol which requires keeping of the internal state on the server is known as a stateful protocol.
Examples of stateless protocols include the Internet Protocol (IP) which is the foundation for the Internet, and the Hypertext Transfer Protocol (HTTP) which is the foundation of data communication for the World Wide Web.
The stateless design simplifies the server design because there is no need to dynamically allocate storage to deal with conversations in progress. If a client session dies in mid-transaction, no part of the system needs to be responsible for cleaning up the present state of the server. A disadvantage of statelessness is that it may be necessary to include additional information in every request, and this extra information will need to be interpreted by the server.
Contrast this with a traditional FTP server that conducts an interactive session with the user. During the session, a user is provided a means to be authenticated and set various variables (working directory, transfer mode), all stored on the server as part of the user's state.
Stacking of stateless and stateful protocol layers
There can be complex interactions between stateful and stateless protocols among different protocol layers. For example, HTTP is an example of a stateless protocol layered on top of TCP, a stateful protocol, which is layered on top of IP, another stateless protocol, which is routed on a network that employs BGP, another stateful protocol, to direct the IP packets riding on the network.
This stacking of layers continues even above HTTP. As a work-around for the lack of a session layer in HTTP, HTTP servers implement various session management methods, typically utilizing a unique identifier in a cookie or parameter that allows the server to track requests originating from the same client, and effectively creating a stateful protocol on top of HTTP.
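As a concrete sketch of that work-around (not taken from this article; the handler and the in-memory store are hypothetical), the short Python server below issues a session cookie on a client's first request and uses it to recover per-client state on later requests, even though each HTTP request is itself stateless:

    import uuid
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from http.cookies import SimpleCookie

    SESSIONS = {}  # session id -> per-client state held by the server

    class SessionHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Each request arrives self-contained; the only link to earlier
            # requests is the session id the client sends back in its cookie.
            cookies = SimpleCookie(self.headers.get("Cookie", ""))
            sid = cookies["sid"].value if "sid" in cookies else None
            if sid not in SESSIONS:
                sid = uuid.uuid4().hex          # first visit: mint an id
                SESSIONS[sid] = {"visits": 0}   # and allocate state for it
            SESSIONS[sid]["visits"] += 1        # stateful behavior over HTTP
            self.send_response(200)
            self.send_header("Set-Cookie", "sid=" + sid)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(("visit #%d\n" % SESSIONS[sid]["visits"]).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), SessionHandler).serve_forever()

At the HTTP level every exchange is still an independent request-response pair; the continuity lives entirely in the identifier the client volunteers back and in the table the server keeps beside the protocol.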
- "RFC 7230 - Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing". ietf.org. Retrieved 20 August 2015.
- "session management methods reviewed". C cookie bits. Toronto. Retrieved 2011-04-12.
The following material is intended to introduce the reader to the various techniques that developers have used to implement session tracking on the Web. The main operational characteristics of each method are mentioned in addition to the shortcomings that have been observed in usage. Additional information on session management can be found by searching the Internet. […]
- This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
|This computer networking article is a stub. You can help Wikipedia by expanding it.| | https://en.wikipedia.org/wiki/Stateless_server |
Normally when you drop a drinking glass on the floor it shatters. But, in future, thanks to a technique developed in McGill's Department of Mechanical Engineering, when the same thing happens the glass is likely to simply bend and become slightly deformed. That's because Prof. François Barthelat and his team have successfully taken inspiration from the mechanics of natural structures like seashells in order to significantly increase the toughness of glass.
"Mollusk shells are made up of about 95 per cent chalk, which is very brittle in its pure form," says Barthelat. "But nacre, or mother-of-pearl, which coats the inner shells, is made up of microscopic tablets that are a bit like miniature Lego building blocks, is known to be extremely strong and tough, which is why people have been studying its structure for the past twenty years."
Previous attempts to recreate the structures of nacre have proved to be challenging, according to Barthelat. "Imagine trying to build a Lego wall with microscopic building blocks. It's not the easiest thing in the world." Instead, what he and his team chose to do was to study the internal 'weak' boundaries or edges to be found in natural materials like nacre and then use lasers to engrave networks of 3D micro-cracks in glass slides in order to create similar weak boundaries. The results were dramatic.
The researchers were able to increase the toughness of glass slides (the kind of glass rectangles that get put under microscopes) 200 times compared to non-engraved slides. By engraving networks of micro-cracks into the surface of borosilicate glass in configurations of wavy lines, shaped like the interlocking edges of jigsaw-puzzle pieces, they were able to stop the cracks from propagating and becoming larger. They then filled these micro-cracks with polyurethane, although according to Barthelat, this second process is not essential since the patterns of micro-cracks in themselves are sufficient to stop the glass from shattering.
The researchers worked with glass slides simply because they were accessible, but Barthelat believes that the process will be very easy to scale up to any size of glass sheet, since people are already engraving logos and patterns on glass panels. He and his team are excited about the work that lies ahead for them.
"What we know now is that we can toughen glass, or other materials, by using patterns of micro-cracks to guide larger cracks, and in the process absorb the energy from an impact," says Barthelat. "We chose to work with glass because we wanted to work with the archetypal brittle material. But we plan to go on to work with ceramics and polymers in future. Observing the natural world can clearly lead to improved man-made designs."
How to find the polar equation of a line through the origin (or pole).
How to write the parametric equations of a line segment that goes from point A to point B.
How to prove two triangles are similar using a line parallel to a base.
How to write equations describing motion in a straight line given the velocity and the position when t=0.
How to describe and label point, line, and plane. How to define coplanar and collinear.
How to write the equation of a graphed line.
How to prove that opposite angles in a cyclic quadrilateral are congruent; how to prove that parallel lines create congruent arcs in a circle.
How to graph a line using the x and y intercepts.
How to determine whether lines are parallel, perpendicular, or neither.
How to duplicate a line segment using a compass and straightedge.
Overview of correlation, scatterplots, and line of best fit
How to graph a line.
How to recognize when y = 0 is the horizontal asymptote of a rational function.
How to construct parallel lines using three different methods.
How to find the slope of a line if given a graph.
How to use the slope-intercept form of a line.
How to graph a line by making a table of values.
- Identify and use the ratios involved with right isosceles triangles.
Right Isosceles Triangles
What happens when you draw a diagonal across a square? Try it in the margin. →
You get two isosceles right triangles. Since a square has 4 right angles inside, 2 of them stay complete when you make the diagonal and the other 2 are cut in half. Each of the half angles is 45° (because 90° ÷ 2 = 45°).
- Each triangle has the angles 45°, 45° (from the two angles cut in half), and ___________.
The diagonal becomes the hypotenuse of each isosceles right triangle because it is across from the right angle. Since a square has 4 congruent sides, each triangle is isosceles where the legs are the congruent sides of the square.
- The diagonal of the square becomes the __________________________ of each triangle.
Each of these right triangles is a special right triangle called the 45°-45°-90° right triangle (because the angles inside the triangle are 45°, 45°, and 90°).
As you know, isosceles triangles have two sides that are the same length. Additionally, the base angles of an isosceles triangle are congruent. An isosceles right triangle will always have base angles that each measure 45° and a vertex angle that measures 90°.
In the diagram above, the ______________ and the base ________________ are each congruent.
Don’t forget that the base angles are the angles that are opposite the congruent sides. They don’t have to be on the bottom of the figure, like in the picture below:
The isosceles right triangle below has legs measuring 1 centimeter.
Use the Pythagorean Theorem to find the length of the hypotenuse.
Since the triangle is isosceles, the legs are 1 centimeter each. Substitute 1 for both a and b, and solve for c:
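a² + b² = c²
1² + 1² = c²
1 + 1 = c²
c² = 2
c = √2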
In this example, c = √2 cm.
What if each leg in the example above was 5 cm? Then we would have:
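a² + b² = c²
5² + 5² = c²
25 + 25 = c²
c² = 50
c = √50 = √25 · √2 = 5√2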
If each leg is 5 cm, then the hypotenuse is 5√2 cm.
When the length of each leg was 1, the hypotenuse was 1√2.
When the length of each leg was 5, the hypotenuse was 5√2.
Is this a coincidence? No. Recall that the sides of all 45°-45°-90° triangles are proportional.
What does proportional mean?
You may recognize the word “proportion,” which means “ratio” or “fraction.”
“Proportional” describes a relationship between 2 values where you can multiply one of the values by some number and get the second value.
For instance, 3 and 6 have the same “proportional” relationship as 4 and 8, because you need to multiply the first number by 2 to get the second number in both cases.
Another pair of numbers with the same proportional relationship is ________ and _________.
Another example is the sentence: “Punishment should be proportional to the crime”.
This means that the worse a crime is, the harsher the punishment should be.
As we discovered in the examples on the previous page,
The hypotenuse of an isosceles right triangle will always equal the product of the length of one leg and √2.
This means that if a leg has a length of x, then you can multiply the leg by the number √2 to get the hypotenuse. So the hypotenuse has a length of x√2.
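This relationship comes straight from the Pythagorean Theorem. With both legs equal to x:

x² + x² = c²
2x² = c²
c = √(2x²) = x√2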
In all 45°-45°-90° triangles:
- The length of the ___________________________ equals √2 times the length of a leg.
This relationship is very important to know!
What is the length of the hypotenuse in the triangle below?
We just learned a relationship between the leg and the hypotenuse of a 45°-45°-90° triangle, so this problem is much easier than using the Pythagorean Theorem again like in Example 1!
First, we must determine which side of the triangle is the hypotenuse.
Since the hypotenuse is the longest _____________ and it is across from the _______________ angle, it must be side c.
This makes the legs the other two sides, which have a length of _________________.
Since the length of the hypotenuse is the product of one leg and √2, you can calculate this length (c) by multiplying the leg by √2.
One leg is 4 inches, so the hypotenuse (c) will be 4√2 inches, or about 5.66 inches.
1. Every isosceles right triangle has 3 special interior angles. What are they?
__________ , __________ , and __________
2. If an isosceles right triangle has legs that are 3 inches long, how long is its hypotenuse?
a. Draw a picture of the triangle here:
b. Use the Pythagorean Theorem to find the length of the hypotenuse (like in Example 1):
c. Use the special proportional relationship to find the length of the hypotenuse (like in Example 2):
d. Are your answers to (b.) and (c.) above the same?
Antonio built a square patio in his backyard.
He wants to make a water pipe for flowers that goes from one corner to another, diagonally. How long will that pipe be?
The first step in a word problem like this is to add important information to the drawing. Because the problem asks you to find the length from one corner to another, you should draw a diagonal line segment (from one corner of the square to the opposite corner) into your patio picture:
Once you draw the diagonal path, you can see how triangles help answer this question.
Because both legs of the triangle have the same measurement (17 feet), this is an isosceles right triangle. The angles in an isosceles right triangle are 45°, 45°, and 90°.
In an isosceles right triangle, the hypotenuse is always equal to the product of the length of one leg and √2. Just multiply these values together!
So, the length of Antonio's water pipe will be the product of 17 and √2, or 17√2 ≈ 17 × 1.414 feet. This value is approximately equal to 24.04 feet. Therefore, his diagonal water pipe should be 24.04 feet long.
You cook a grilled cheese sandwich. To make it easier to eat, you cut the sandwich in half diagonally. If each slice of bread (before it is cut) measures 14 cm by 14 cm, how long is the diagonal of your sandwich?
(Hint: draw yourself a picture to start this problem! If you are stuck, look at Example 3 to help you.)
These two different types of telescope date from a similar time period and were made by important instrument makers. They differ in their methods of focusing distant light from the stars: one uses lenses to refract (bend) incoming light, the other has mirrors to reflect light.
This telescope (Image 1) is a refracting telescope. Glass lenses are used to bring the light of distant objects into focus, magnifying them. Different colours of light are refracted (bent) through different angles. For this reason, images seen through a refracting telescope may suffer from a type of colourful distortion, known as chromatic aberration.
The maker's mark of Jesse Ramsden (1735-1800), a famous 18th-century astronomical instrument maker, is inscribed on the body of the telescope. A later handwritten label is stuck to the inside of the telescope's box, explaining that the instrument was collected for its beauty and rarity as well as its optical ability:
"This excellent little telescope was made by Mr Ramsden for the Honble Mr Stewart McKenzie
- only three of this size were ever made. It is the most complete portable instrument I have ever seen - beautifully brilliant as a day telescope - & shews double stars in the finest style."
"Stewart McKenzie" may have been James Stuart MacKenzie, (1719-1800) the politician and amateur astronomer. He was the brother of the Prime Minister John Stuart; the brothers' intimacy with the King was disliked by Members of Parliament. James McKenzie left politics in 1780 and dedicated himself to science.
One of the most prominent objects on display in the Main Gallery of the Whipple Museum is the 'Herschel' telescope (Image 2). It is a reflecting telescope, using curved and flat mirrors to reflect light and form a magnified image. As lenses are not used, reflecting telescopes do not suffer from chromatic aberration.
The telescope takes its name from William Herschel (1738-1822), who achieved public acclaim and royal favour through his discovery of the planet Uranus in 1781. He originally called the planet the Georgium Sidus (Latin for 'George's Star') to honour King George III.
A few years later George III requested that Herschel make a number of telescopes. The Whipple Museum's example is one of five 10ft reflecting telescopes made in response to that request. Following Herschel's standard design, the King's cabinet maker constructed the mahogany stand and tube. Herschel made the optical parts himself.
The history of the Whipple's Herschel telescope has been well documented. George III presented it to George Spencer, the fourth Duke of Marlborough in 1786, saying "I can answer for the excellency of this instrument, having twice compared it to the one in my possession". It was held in the Observatory at Blenheim Palace until it was given to Herschel's great-grandson, Joseph Hardcastle (1868-1917). The Hardcastle family then sold it to Howard Marryat in 1927 who then gave it to Robert Whipple in 1944, to mark the foundation of his gift of 2000 scientific instruments and books to the University of Cambridge.
When the telescope was in the possession of Joseph Hardcastle he sent the mirrors to be examined by Sir Howard Grubb, who worked on the optics of periscopes for the Royal Navy during the First World War. In a letter written to Hardcastle, Grubb described the large reflecting mirror's optics as 'good' (see Image 3 for a sketch drawn by Grubb). Rather than the aesthetic qualities of his great-grandfather's telescope, Hardcastle was interested in how well the optics still worked.
James Hyslop, 'Two late 18th-century telescopes', Explore Whipple Collections, Whipple Museum of the History of Science, University of Cambridge, 2008 [http://www.hps.cam.ac.uk/whipple/explore/astronomy/twotelescopes/, accessed 14 February 2016]
The Latin name Libya (from Greek Λιβύη, Libyē) referred to the region west of the Nile Valley, generally corresponding to modern Northwest Africa. Its people were ancestors of the modern Berber people. Berbers occupied the area for thousands of years before the beginning of human records in Ancient Egypt. Climate changes affected the locations of the settlements. More narrowly, Libya could also refer to the country immediately west of Egypt, viz. Marmarica (Libya Inferior) and Cyrenaica (Libya Superior). The Libyan Sea or Mare Libycum was the part of the Mediterranean south of Crete, between Cyrene and Alexandria.
In the Greek period the Berbers were known as Libyans, a Greek term for the inhabitants of northwest Africa. Their lands were called Libya, and extended from modern Morocco to the western borders of Ancient Egypt. Modern Egypt contains the Siwa Oasis, historically part of Libya, where the Berber Siwi language is still spoken.
The name Libya (in use since 1934 for the modern country formerly known as Tripolitania and Barca) was the Latin designation for the region of Northwest Africa, from the Greek (Ancient Greek: Λιβύη Libúē, Λιβύᾱ Libúā, in the Attic and Doric dialects respectively).
In Classical Greece, the term had a broader meaning, encompassing the continent that later (2nd century BC) became known as Africa, which, in antiquity, was assumed to constitute one third of the world's land mass, besides Europe and Asia.
The Greek name is based on the ethnonym Libu (Ancient Greek: Λίβυες Líbues, Latin: Libyes). The land of the Libu was Λιβύη (Libúē) and Λιβύᾱ (Libúā) in the Attic and Doric dialects, respectively. These Libu have been attested since the Late Bronze Age as inhabiting the region (Egyptian: R'bw, Punic: 𐤋𐤁𐤉 lby). The oldest known references to the Libu date to Ramesses II and his successor Merneptah, Egyptian rulers of the nineteenth dynasty, during the 13th century BCE. LBW appears as an ethnic name on the Merneptah Stele.
Homer also names Libya, in Odyssey (IX.95; XXIII.311). Menelaus had travelled there on his way home from Troy; it was a land of wonderful richness, where the lambs have horns as soon as they are born, where ewes lamb three times a year and no shepherd ever goes short of milk, meat or cheese. Homer used the name in a geographic sense, while he called its inhabitants Lotophagi, meaning "Lotus-eaters". After Homer, Aeschylus, Pindar, and other Ancient Greek writers use the name. When Greeks actually settled in the real Libya in the 630s, the old name taken from Egyptians was applied by the Greeks of Cyrenaica, who may have co-existed with the Libu. Later, the name appeared in the Hebrew language, written in the Bible as Lehabim and Lubim, indicating the ethnic population and the geographic territory as well. Herodotus (1.46) used Λιβύη Libue to indicate the African continent; the Libues proper were the light-skinned North Africans, while those south of Ancient Egypt (and Elephantine on the Nile) were known to him as "Aethiopians"; this was also the understanding of later Greek geographers such as Diodorus, Strabo, Pliny the Elder, etc.
Latin absorbed the name from Greek and the Punic languages. The Romans would have known them before their colonization of North Africa, because of the Libyan role in the Punic wars against the Romans. The Romans used the name Libyes, but only when referring to Barca and the Western desert of Egypt. The other Libyan territories became known as Africa.
Classical Arabic literature called Libya Lubya, indicating a speculative territory west of Egypt. Modern Arabic uses 'Libya. Lwatae, the tribe of Ibn Battuta, as the Arabs called it, was a Berber tribe that mainly was situated in Cyrenaica. This tribe may have ranged from the Atlantic Ocean to modern Libya, however, and was referred to by Corippius as Laguatan; he linked them with the Maures. Ibn Khaldun reports, in The History of Ibn Khaldun, that Luwa was an ancestor of this previous tribe. He writes that the Berbers add an "a" and "t" to the name for the plural forms. Subsequently, it became Lwat.
Conversely, the Arabs adopted the name as a singular form, adding an "h" for the plural form in Arabic. Ibn Khaldun disagrees with Ibn Hazam, who claimed, mostly on the basis of Berber sources, that Lwatah, in addition to Sadrata and Mzata, were from the Qibts (Egyptians). According to Ibn Khaldun, this claim is incorrect because Ibn Hazam had not read the books of the Berber scholars.
Compared with the history of Egypt, historians know little about the history of Libya, as there are few surviving written records.
There were many Berber tribes in ancient Libya, including the now extinct Psylli, with the Libu being the most prominent. The ancient Libyans were mainly pastoral nomads, living off their goats, sheep and other livestock. Milk, meat, hides and wool were gathered from their livestock for food, tents and clothing. Ancient Egyptian sources describe Libyan men with long hair, braided and beaded, neatly parted from different sides and decorated with feathers attached to leather bands around the crown of the head while wearing thin robes of antelope hide, dyed and printed, crossing the shoulder and coming down until mid calf length to make a robe. Older men kept long braided beards. Women wore the same robes as men, plaited, decorated hair and both genders wore heavy jewelry. Weapons included bows and arrows, hatchets, spears and daggers.
Since Neolithic times, the climate of North Africa has become drier. A reminder of the desertification of the area is provided by megalithic remains, which occur in great variety of form and in vast numbers in presently arid and uninhabitable wastelands: dolmens and circles like Stonehenge, cairns, underground cells excavated in rock, barrows topped with huge slabs, and step-pyramid like mounds. Most remarkable are the trilithons, some still standing, some fallen, which occur isolated or in rows, and consist of two squared uprights standing on a common pedestal that supports a huge transverse beam. In the Terrgurt valley, Cowper says, "There had been originally no less than eighteen or twenty megalithic trilithons, in a line, each with its massive altar placed before it."
In ancient times, the Phoenicians and Carthaginians, Achaemenid Empire of Iran, the armies of Alexander the Great and his Ptolemaic successors from Egypt, then Romans, Vandals, and local representatives of the Byzantine Empire ruled all or parts of Libya. The territory of modern Libya had separate histories until Roman times, as Tripoli and Cyrenaica.
Cyrenaica, by contrast, was Greek before it was Roman. It was also known as Pentapolis, the "five cities" being Cyrene (near the village of Shahat) with its port of Apollonia (Marsa Susa), Arsinoe (Tocra), Berenice (Bengazi) and Barca (Merj). From the oldest and most famous of the Greek colonies the fertile coastal plain took the name of Cyrenaica.
These five cities were also known as the Western Pentapolis; not to be confused with the Pentapolis of the Roman era on the current west Italian coast.
The exact boundaries of Ancient Libya are unknown. It lay west of Ancient Egypt and was known as "Tjehenu" to the Ancient Egyptians. Libya was an unknown territory to the Egyptians: it was the lands of the spirits.
To the Ancient Greeks, Libya was one of the three known continents along with Asia and Europe. In this sense, Libya was the whole known African continent to the west of the Nile Valley and extended south of Egypt. Herodotus described the inhabitants of Libya as two peoples: The Libyans in northern Africa and the Ethiopians in the south. According to Herodotus, Libya began where ancient Egypt ended, and extended to Cape Spartel, south of Tangier on the Atlantic coast.
Modern geographers suspect severe climate change may have affected the ancient Libyans by causing loss of forests, reliable fresh water sources, and game availability as the area became more desert-like.
After the Egyptians, Greeks, Romans, and Byzantines mentioned various other tribes in Libya. Later tribal names differ from the Egyptian ones but, probably, some tribes were named in the Egyptian sources and the later ones, as well. The Meshwesh-tribe represents this assumption. Scholars believe it would be the same tribe called Mazyes by Hektaios and Maxyes by Herodotus, while it was called "Mazaces" and "Mazax" in Latin sources. All those names are similar to the name used by the Berbers for themselves, Imazighen.
Late period sources give more detailed descriptions of Libya and its inhabitants. The ancient historian Herodotus describes Libya and the Libyans in his fourth book, known as The Libyan Book. Pliny the Elder, Diodorus Siculus, and Procopius also contributed to what is now primary source material on ancient Libya and the Libyans.
Ibn Khaldun, who dedicated the main part of his book Kitab el'ibar, which is known as "The history of the Berbers", did not use the names Libya and Libyans, but instead used Arabic names: The Old Maghreb, (El-Maghrib el-Qadim), and the Berbers (El-Barbar or El-Barabera(h)).
Lake Tritonis divided the Berber cultures
Herodotus divided them into Eastern Libyans and Western Libyans. Eastern Libyans were nomadic shepherds east of Lake Tritonis. Western Libyans were sedentary farmers who lived west of Lake Tritonis. At one point, a catastrophic change reduced the vast body of fresh water to a seasonal lake or marsh.
Ibn Khaldun and Herodotus distinguish the Libyans on the basis of their lifestyles rather than ethnic background. Modern historians tend to follow Herodotus's distinction. Examples, Oric Bates in his book The Eastern Libyans. Some other historians have used the modern name of the Berbers in their works, such as the French historian Gabriel Camps.
The Libyan tribes mentioned in these sources were: "Adyrmachidae", "Giligamae", "Asbystae", "Marmaridae", "Auschisae", "Nasamones", "Macae", "Lotus-eaters (or Lotophagi)", "Garamantes", "Gaetulians", "Maures (Berbers)", and "Luwatae", as well as many others.
Ancient Libyan males primarily carried the E1b1b1a4 (V65) mutation, also found among populations in other parts of East Africa and the Mediterranean. E1b1b1b (M81) is also presumed to have expanded into Western Libya from Northwest Africa during the Capsian culture and is defined by the so-called "Berber marker". Genetic studies (Cruciani et al. 2007) show that this double origin of male lineages is still present amongst modern Libyans, while it is absent amongst the indigenous Berbers of Northwest Africa, indicating a unique Ancient Libyan genetic makeup that possibly existed as far back as the Neolithic age.
- North Africa during the Classical Period
- Northwest Africa
- History of North Africa
- Berber people
References and notes
- Gabriel Camps, L'origine des berbères
- Oliver, Roland & Fagan, Brian M. (1975) Africa in the Iron Age: c. 500 B.C. to A.D. 1400. Cambridge: Cambridge University Press; p. 47
- Gardiner, Alan Henderson (1964) Egypt of the Pharaohs: an introduction Oxford University Press, London, p. 273, ISBN 0-19-500267-9
- Fage, J. D. (ed.) (1978) "The Libyans" The Cambridge History of Africa: From c. 500 BC to AD 1050 volume II, Cambridge University Press, Cambridge, England, p. 141, ISBN 0-521-21592-7
- The Cambridge History of North Africa and the people between them as the Egyptians, p. 141.
- The full name of Ibn Battuta was Abu 'Abd Allah Muhammad ibn 'Abd Allah al-Lawati at-Tanji ibn Battuta
- The History of Ibn Khaldun, third chapter, pp. 184-258 (Arabic)
- Bates, Oric (1914) The Eastern Libyans. London: Macmillan & Co. p. 57
- Chaker, Salem. "L'écriture libyco-berbère (The Libyco-Berber script)" (in French). Retrieved 5 December 2010.
- Chaker Script
- A Concise Dictionary of Middle Egyptian, Raymond O Faulkner, Page 306
- Bates, Oric
- Mohammed Chafik, Highlights of thirty-three centuries of Imazighen p. 9 .
- Ibn Khaldun, The History of Ibn Khaldun: the third chapter, pp. 152-181.
- Herodotus, On Libya, from The Histories, c. 430 BCE
- "Gabriel Camps is considered as the father of the North African prehistory, by founding d'Etude Berbère[clarification needed] at the University of Aix-en-Provence and the Encyclopédie berbère." (From the introduction of the English book The Berbers by Elizabeth Fentres and Michael Brett, p. 7).
- What Happened to the Ancient Libyans?, Chasing Sources across the Sahara from Herodotus to Ibn Khaldun by Richard L. Smith.
- Bunson, Margaret. "Libya." Encyclopedia of Ancient Egypt. New York: Facts on File, Inc., 1991
- Who Lived in Africa before the Roman Conquest?
If you're a regular follower of NASA's updates, you may have caught glimpses of some of the X-ray photos the agency publishes showing the surface of the Sun. In these photos, dark specks of various sizes can be seen, which are actually what astronomers refer to as coronal holes. They may extend from the Sun's equator to its poles, or in some cases even from pole to pole. Recently, one of these coronal holes rotated towards Earth and, as chance would have it, it is one of the largest NASA astronomers have witnessed in a very long time. And what a sight it is!
Coronal holes aren't your typical holes, mind you. A coronal hole, as the name implies, is a large region in the sun's corona (the outer atmosphere of the sun) which is less dense and cooler than its surroundings. This marvelous picture was taken by the Solar Dynamics Observatory's Atmospheric Imaging Assembly, and was made by combining three wavelengths of UV light.
Are coronal holes dangerous? The short answer would be no. Coronal holes are the sources of solar wind gusts that travel through space and hit Earth's magnetic field, causing marvelous spectacles of light called auroras. However, the same coronal holes spew solar gusts that cause geomagnetic storms, interfering with satellite communications. In general, geomagnetic storms originating from a coronal hole have a gradual commencement and are not as severe as storms caused by coronal mass ejections, which usually have a sudden onset.
Age and Times of Mars vs Earth
This learning module is meant for adaptation in an Earth science course that covers the geologic history of the Earth, the principles by which that history is defined, and the relative dating of rock strata.
- Use principles of relative dating to interpret block diagrams, Earth outcrops, and Mars imagery
- Compare the geologic history of Earth and Mars
Context for Use
Make sure students have a basic understanding of lithologies in addition to the method of crater counting for dating and interpreting the ages of Martian terrain.
Description and Teaching Materials
Compiled In-Class Activities and Homework
- In-Class Activity 1: A Timescale Comparison
Teaching Notes and Tips
- Expose students to crater counting so that they understand that the geologic timescale of Mars, and the dating of the Martian surface, is based upon crater counting.
- For Homework 1 make sure students have a basic knowledge of lithology in order to interpret unconformities.
- Depending on class size, if possible, make copies of the geologic maps for students to use during In-Class Activity 1. If class sizes are larger than 30, include these maps in a course packet. Overhead projection of the maps may not be sufficient to engage fully in the activity.
- Homework 1 can be adapted for an in-class activity as desired.
References and Resources
- Image File: Age and Times of Mars vs. Earth (PowerPoint 2007 (.pptx) 389kB May28 13)
- YouTube video of the Noachian period on Mars (artist interpretation): http://www.youtube.com/watch?v=JfYIvkTQ2pc
- Simplified geologic map of the state of Utah: http://geology.utah.gov/maps/geomap/postcards/pdf/utgeo_postcd.pdf
- Geologic map of Mars: http://www.lpi.usra.edu/resources/mars_maps/1083/index.html
ATMOSPHERIC ROLE OF FORESTS: RAINFORESTS AND CLIMATE
Rainforests play the important role of locking up atmospheric carbon in their vegetation via photosynthesis. When forests are burned, degraded, or cleared, the opposite effect occurs: large amounts of carbon are released into the atmosphere as carbon dioxide along with other greenhouse gases (nitrous oxide, methane, and other nitrogen oxides). The clearing and burning of tropical forests and peatlands releases more than a billion metric tons of carbon (3.7 billion tons of carbon dioxide) into the atmosphere each year, or more than ten percent of anthropogenic carbon emissions.
The buildup of carbon dioxide and other gases in the atmosphere is known as the "greenhouse effect." The accumulation of these gases is believed to have altered the earth's radiative balance, meaning more of the sun's heat is absorbed and trapped inside the earth's atmosphere, producing global warming. Greenhouse gases like carbon dioxide are transparent to incoming shortwave solar radiation. This radiation reaches the earth's surface, heats it, and re-radiates it as long-wave radiation. Greenhouse gases are opaque to long-wave radiation and therefore, heat is trapped in the atmosphere. As greenhouse gases build up, this opacity is increased and more heat is trapped in the atmosphere.
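A standard back-of-the-envelope illustration of this balance (not from the original article) is the zero-dimensional energy-balance model. Equating the sunlight a planet absorbs with the long-wave radiation it emits gives an effective radiating temperature

T = [S(1 − α) / 4σ]^(1/4) ≈ [1361 × 0.7 / (4 × 5.67 × 10⁻⁸)]^(1/4) ≈ 255 K

taking a solar constant S of about 1361 W/m², an albedo α of about 0.3, and the Stefan-Boltzmann constant σ. The observed mean surface temperature is roughly 288 K; the 33-degree difference is the natural greenhouse effect, which added carbon dioxide strengthens.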
Gross annual carbon emissions resulting from gross forest cover loss, peat drainage and burning between 2000 and 2005 according to Harris et al 2012.
Deforestation accounts for 10 percent of global carbon emissions, argues new study
(June 21, 2012) Tropical deforestation accounted for 10 percent of global carbon dioxide emissions between 2000-2005 — a substantially smaller proportion than previously estimated — argues a new study published in Science.
The largest anthropogenic contributor to the greenhouse effect is carbon dioxide gas emissions, more than 85 percent of which comes from the combustion of fossil fuels (roughly one percent of emissions results from energy-costly production activities like the manufacture of concrete, steel, and aluminum). The preindustrial atmospheric concentration of carbon dioxide was 280 ppm, though today levels have risen to 400 ppm, a 43 percent increase. Climatologists estimate that a level of 450 ppm—as projected for 2050—may result in an eventual 1.8-3 degrees Celsius (3.2-5.4 degrees Fahrenheit) increase in temperature. Some scientists predict that global warming will produce a sharp upswing in global temperatures followed by a deep plunge into a glacial period several thousand years from now. However, there are still a lot of unknowns about the impact of climate change.
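For a rough sense of these numbers, one widely used approximation (not from this article) relates the extra radiative forcing ΔF from carbon dioxide to its concentration C relative to a reference level C₀:

ΔF ≈ 5.35 ln(C / C₀) W/m²

Going from the preindustrial 280 ppm to 450 ppm gives ΔF ≈ 5.35 ln(450/280) ≈ 2.5 W/m²; multiplied by an assumed equilibrium climate sensitivity of roughly 0.5-1 degrees Celsius per W/m², this implies about 1.3-2.5 degrees of eventual warming, consistent with the 1.8-3 degree range quoted above.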
The extent and effect of global warming have long been debated by scientists, industries, and politicians. In 1995 leading scientists and the Intergovernmental Panel on Climate Change (IPCC) concluded that global warming had been detected and that "the balance of evidence suggests a discernible human influence on global climate." Their evidence included a 0.5-1°F (0.3-0.6°C) increase in average global temperature since 1960, a 4.5°F (2.5°C) increase at the Earth's poles, the breaking up of the Antarctic ice sheets, the receding of glaciers worldwide, the longest El Niño on record, a record number of hurricanes in 1995, a record number of heat waves, and an increase of epidemics attributed to global climate change, including dengue fever, malaria, hanta virus, and the plague. According to scientists at the National Oceanic and Atmospheric Administration, 1998 was the warmest year on record, although 2005 was a close second. A British study at the University of East Anglia suggested that 1998 may be the warmest year in over 800 years. The 1990s were the warmest decade of the millennium, and the past decade has witnessed nine of the eleven hottest years this century. In the 900 years before the twentieth century, temperatures dropped an average of 0.02 degrees C (0.04 degrees F) per century.
Comparison of carbon emissions from six leading countries.
Projected carbon-dioxide emissions by country, 1990-2030.
Atmospheric CO2 Record from Mauna Loa, 1958-2004.
Since 1960 atmospheric carbon-dioxide levels have increased from 313 ppm to 400 ppm (a 28 percent increase), according to measurements from the Mauna Loa observatory, and carbon-dioxide levels are now 27 percent higher than at any point in the last 650,000 years. The Intergovernmental Panel on Climate Change (IPCC) projects that atmospheric carbon-dioxide levels could reach 450-550 ppm by 2050, possibly resulting in higher temperatures and rising sea levels, along with a myriad of potential impacts: increased storm and hurricane intensity; melting of polar ice, Arctic permafrost, and glaciers; changes in ocean currents, including the Gulf Stream; a rise in global sea levels, which could inundate low-elevation cities like Cairo, Venice, Lagos, New Orleans, and Amsterdam and cause problems for low-lying nations; increased coral bleaching and mortality of reef ecosystems; changes in ecosystems; species migration and mass extinction, especially among cold-climate species; heightened danger from human pollutants; health impacts, including the spread of tropical disease into cooler climates and the range expansion of other pathogens; and water shortages.
Rising sea levels
The projected rise in sea level from ocean-water expansion and ice melt varies depending on estimates of global warming, but there is a good chance that oceans will rise from 10 inches (25 cm) to 20 inches (50 cm) within the next century if greenhouse gas emission rates continue at present levels. Such a rise in sea level does not sound like much, but it would have profound effects on both humankind and natural systems. Any sea-level increase would be magnified during tides, storm surges, and hurricanes and could have a devastating impact, as shown by Category 3 Hurricane Katrina in 2005. Island nations like the Maldives and scattered South Pacific republics face extinction.

The sea is a tremendously important resource for man, and some of the world's largest cities lie along the coast for trade and commercial fishing. Any rise in sea level would directly affect these metropolises, causing flooding and the potential disruption of sewage and transit systems, along with inundating neighboring agricultural plots. A change in sea levels will also affect coastal ecosystems like river deltas, wetlands, swamps, and low-lying forests, which play an important role in providing services for mankind, in addition to housing biological diversity. Though sea levels have been higher in the past, today there is less room for species affected by flooding, since buildings and concrete now occupy the areas that were once extensions of their environment. Modern humankind is so dependent on existing conditions that a change in sea level, even one of only 10-20 inches (25-50 cm), will have a drastic effect on our society. Global warming is as much a social problem as it is an environmental one.
Changes in ecosystems
Scientists expect climate change to cause major shifts in species distribution and ecosystems, though there is still considerable debate over how climate change will affect specific ecosystems. Moderate climate warming simulations show that coral reefs will decline significantly over the next 50 years due to higher water temperatures and increased ocean acidity, and a similar fate will befall many organisms that form the base of the oceanic food chain. On land, permafrost across frozen landscapes may melt and give way to forest vegetation, while agricultural belts may move polewards. In the Amazon, temperatures are expected to climb, resulting in drier forests and expanded savanna. In Africa, climate change may disrupt regular seasonal weather patterns over large regions of the continent, reducing rainfall in some areas while producing more rainfall in the drought-stricken Sahel region.
The good news is that some carbon emissions can be canceled out by planting trees, which absorb carbon into their tissue through photosynthesis. Tropical forests have the best potential for the mitigation of greenhouse gases since they have the greatest capacity to store carbon in their tissues as they grow. Reforestation of 3.9 million square miles (10 million square km) could sequester 3.7-5.5 billion metric tons of carbon dioxide over the next 50-100 years.
Already a number of tree-planting projects specifically designated for carbon-emissions mitigation have been initiated around the world, including a proposal by a coalition of developing countries at the 2005 UN climate conference in Montreal to seek compensation in the form of carbon payments for forest conservation. This proposal has since developed into the so-called Reducing Emissions from Deforestation and Degradation (REDD+) mechanism, which is expected to mobilize tens of billions of dollars in carbon finance for tropical forest conservation.
could provide ways for poor tropical countries to capitalize on their natural assets without destroying them, the bad news is that even if carbon emissions are reversed today there is a lag time of around 50 years before the effects can be slowed, because of ocean thermal inertia, or their capacity to store heat. Thus the effects from past emissions are not entirely apparent today.
Lungs of the Earth
While the role of rainforests in oxygen generation is often overstated—more oxygen is produced by microorganisms in the world's oceans—tropical rainforests do add oxygen to the atmosphere as a by-product of photosynthesis. Some scientists estimate that 20 percent of the planet's oxygen is produced by rainforests.
Clearing rainforests diminishes the capacity of the global system to supply oxygen.
Aerial view of sections of rainforest felled for subsistence agriculture. (Photo by R. Butler)
- How does deforestation affect global warming?
- Why are rainforests called "the lungs of the world"?
Other versions of this page
Continued / Next: Extinction
Selection of information sources
The burning of forests releases almost one billion tons of carbon dioxide into the atmosphere each year according to T.E. Lovejoy in "Biodiversity: What is it?" in Biodiversity II, Reaka-Kudla, Wilson, Wilson, eds.., Washington D.C.: Joseph Henry Press, 1997. The role of deforestation in global warming is further discussed in Peters, R.L. and Lovejoy, T.E., eds. Global Warming and Biological Diversity, New Haven: Yale University Press 1992 and Shukla, J., Nobre, C., Sellers, P., "Amazon Deforestation and Climate Change," Science; 247: 1322-25, 1990.
In their paper, "Carbon Dioxide Fluxes in Moist and Dry Arctic Tundra during the Snow-free Season: Responses to Increases in Summer Temperature and Winter Snow Accumulation" (Arctic and Alpine Research Vol. 30, No. 4 (373-380), November 1998), Jones, M. H., J. T. Fahnestock, D. A. Walker, M. D. Walker, and J. M. Welker warn that higher temperatures resulting from global warming could result in higher levels of carbon dioxide being released into the atmosphere from arctic tundra.
E.J. Barron in "Climate Models: How Reliable are their Predictions?" Consequences Vol. 1 No. 3, 1995 describes the phenomenon of the cooling of the stratosphere during warming events.
Global carbon reserviors are given in Kasting, J.F., "The carbon cycle, climate, and the long-term effects of fossil fuel burning," Consequences Vol. 4, No. 1, 1998.
W.F. Laurance discusses die-off in forest fragments and the possibly effect on global climate in "Forest Fragmentation May Worsen Global Warming," Science 298: 1117-1118 1/5/98.
In "Tropical forestry practices for carbon sequestration: a review and case study from southeast Asia," Ambio Vol. 25 No. 4, June 1996, P.M. Costa notes that forest fragments store less carbon per unit of area than contiguous forest because fragments are often comprised of fast-growing tree species which store less carbon per volume than longer-lived trees.
M. McKloskey ("Note on the Fragmentation of Primary Rainforest," Ambio 22 (4), June: 250-51, 1993) provides the two-thirds figure for global fragmented rainforest.
In 1995 the Intergovernmental Panel on Climate Change (IPCC) released its report on climate change (Watson, R. T. et al., eds., Climate Change 1995: Impacts, Adaptations, and Mitigation of Climate Change: Scientific-Technical Analyses: Contribution of Panel on Climate Change) concluding "the balance of evidence suggests a discernible human influence of global climate."
Mann, M.E., Bradley, R.S. and Hughes, M.K. ("Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations." Geophysical Research Letters, Vol. 26 (759-760), 1999) reported the NOAA's findings that 1998 was the warmest year on record. The same paper (picked up by the national press in "Report: 1990s warmest decade of millennium" Reuters 3/3/99) also reported that the 1990s have been the warmest decade of the millennium. J. Warrick in "Scientists See Weather Trend as Powerful Proof of Global Warming," The Washington Post 1/9/98 reported that the past decade has witnessed nine of the eleven hottest years this century.
The National Research Council of the National Academies (J.M. Wallace et al. Reconciling Observations of Global Temperature Change, National Research Council 2000) examined the apparent conflict between surface temperature and atmospheric temperature, which has led to the controversy over whether global warming is actually occurring and concluded that strong evidence exists to show that surface temperatures in the past two decades have risen at a rate substantially greater than average for the past 100 years. Angell, J.K. further discusses the discrepancies in "Comparison of surface and tropospheric temperature trends estimated from a 63-station radiosonde network, 1958-1998," Geophysical Research Letters, Vol. 26, No. 17 (2761-2764), Sep. 1, 1999.
L.D. Hatfield provided an excellent overview of the worldwide effects of el Niño in "An Ill Wind Blows in Again," San Francisco Examiner, 9/4/1997.
The National Oceanic and Atmospheric Administration 1997-2000 reports on the history, frequency, and duration of past el Niño (ENSO) events.
D.T. Rodbell looks at the history of ancient ENSO events in "An ~15,000-Year Record of El-Nino Driven Alluviation in Southwestern Ecuador," Science, Vol. 283 (516-519), 22-Jan-99.
Leighton, M. and Wirawan, N found a direct correlation between ENSO events and drought in Eastern Borneo in "Catastrophic Drought and Fire in Borneo Rain Forests Associated with the 1982-83 El Niño Southern Oscillation Event," in G.T. Prance, ed., Tropical Rain Forests and the World Atmosphere., Westview: Boulder, Colorado, 1986.
The Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) - sponsored by INPE (the Brazilian Institute for Space Research) 1997 - provided data for the global carbon emissions breakdown.
D. Holt-Biddle in "The Heat is On," Africa-Environment and Wildlife May/June Vol. 2 No. 3. 1994 notes the increase in atmospheric carbon dioxide levels over the past 150 years.
Martin and Lefebvre discuss the spread to tropical diseases into cooler climes in "Malaria and climate: sensitivity of malaria potential transmission to climate," Ambio Vol. 24 No. 4, June 1995.
Based a studies of ice cores from Greenland, Steig et al. ("Synchronous Climate Changes in Antarctica and the North Atlantic." Science October 2; 282: 92-95. 1998.) proposed that a chaotic temperature change in Greenland occurred at the end of the last Ice Ages. J. P. Severinghaus and E. J. Brook followed up with similar findings in "Abrupt Climate Change at the End of the Last Glacial Period Inferred from Trapped Air in Polar Ice," Science 1999 October 29; 286: 930-934.
K.Y. Vinnikov et al. ("Global Warming and Northern Hemisphere Sea Ice Extent," Science 1999 December 3; 286: 1934-1937) found ice in the Artic is shrinking by an average of 14,000 square miles per year and shrinkage is strongly correlated to greenhouse gas and aerosol emissions.
Mitigating carbon emissions by reforestation is reviewed in E.O. Wilson's The Diversity of Life (Belknap Press, Cambridge, Mass 1992.), Biotic Feedbacks in the Global Climatic System: Will the Warming Feed the Warming? ( New York: Oxford University Press 1995) by G.M. Woodwell and R.A. Mackenzie, eds., and Phillips, O.L. at al. "Changes in the carbon balances of tropical forests: Evidence from long-term plots." Science Vol. 282. October 1998. However this proposition has come under criticism of late by several important agencies including the International Geosphere-Biosphere Programme (IGBP) (B. Scholes, "Will the terrestrial carbon sink saturate soon?" Global Change NewsLetter No. 37:2-3, March 1999) and the Intergovernmental Pannel on Climate Change (R. Watson et al. IPCC Special Report on Land Use, Land Use Changes, and Forestry, 1999).
Parry, M. et al. ("Adapting to the Inevitable," Nature Vol. 395 (741), 22-Oct-1998) conclude the cuts under the Kyoto Protocol would only shave off 0.1°F by 2050.
In "Bogging Down in the Sinks" (Worldwatch Nov/Dec 1998) A.T. Mattoon discusses some of the problems with forestry sinks under the Kyoto protocol.
Agricultural changes brought on by climate change are considered by R.C. Rockwell in "From a carbon economy to a mixed economy: a global opportunity," Consequences Vol. 4 No. 1, 1998 and at the Global Change and Terrestrial Ecosystems Focus 3 Confrence (1999). Several studies presented at this confrence suggest that grain grown under carbon dioxide enriched conditions maybe less nutritious than than grain grown under current conditions. This conference was arranged under the International Geosphere-Biosphere Programme (IGBP).
R. Monastersky in "Acclimating to a Warmer World," (Science News, Vol. 156. 28-Aug-99) reviews some of the pitfalls and windfalls from a warmer climate including an increase in number of "hot" days, sewage and transit problems, and lower heating bills.
A.E. Waibel et al. ("Arctic Ozone Loss Due to Denitrification," Science Vol. 283 No. 5410 (2064-2069), March 26, 1999) showed that global warming could slow the recovery of the ozone layer.
Houghton (Houghton, R.A. "Tropical deforestation and atmospheric carbon dioxide," in: Tropical Forests and Climate, ed. N. Myers., Dordrecht: Kluwer Academic Publishers, 1992 and Houghton, R.A., "Role of forests in global warming," in: World Forests for the Future: Their Use and Conservation, ed K. Ramakrishna and G.M. Woodwell, New Haven: Yale University Press, 1993) and Myers (Myers, N., "The world's forests: problems and potentials," Environmental Conservation. 23 (2), 1996) estimate carbon sequestration by the reforestation of 3.9 million square miles (10 million square km).
In order to understand how helium has this effect on a voice, it is helpful to first consider how sound waves form and travel, as well as some basic properties of gases.
Sound waves are formed by the vibration of something (a drum-skin or your vocal chords, for instance) in a medium such as air. In the case of a drum, as one strikes its skin, it vibrates up and down. As it moves up, it pushes against the gas molecules of the air, forcing them upward against other molecules. The gas molecules are compressed together and this ripple of compressed molecules moves up away from the drum. Meanwhile, the drum skin moves down and back up again, resulting in another compression. This moving series of compressions is a sound wave, and the distance between them is known as the wavelength.
All gas samples have the same number of molecules per unit volume at a given pressure and temperature, whether the gas is helium or nitrogen (the primary constituent of air). But not all gas molecules have the same mass. Nitrogen (and thus air) has a mass roughly seven times greater than that of helium. Nitrogen is thus denser than helium and sound waves travel through it more slowly than they do in helium. At 20 degrees Celsius, for example, sound travels at 927 meters a second through helium, but only at 344 meters a second through air.
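The mass dependence can be made explicit with the ideal-gas expression for the speed of sound, v = √(γRT/M), where γ is the heat-capacity ratio, R the gas constant, T the absolute temperature and M the molar mass (a standard result, not from the original answer). Setting aside the modest difference in γ between monatomic helium and diatomic air, the speed scales roughly as 1/√M:

v(helium) / v(air) ≈ √(M(air) / M(helium)) ≈ √(29 / 4) ≈ 2.7

which matches the ratio of the quoted speeds, 927 / 344 ≈ 2.7.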
Like the vibration of a drum or a violin string, the vibration frequency of the vocal cords is independent of the type of gas that surrounds them. Whereas the velocity of the sound waves is faster in helium (and the wavelength greater), the frequency remains unchanged because it is determined by the vibrating vocal cords. Rather, it is the timbre, or quality, of the sound that changes in helium: listen closely next time and you will notice that a voice doesn't become squeaky but instead sounds more like Donald Duck. It is the lower density of the helium--which serves as the medium for the sound waves--flowing through the larynx that produces this differing quality in the voice.
Answer originally posted on June 14, 2004.
A set of classroom rules and procedures is vital for effective classroom management. It is the responsibility of the teacher to establish specific rules and regulations for students at the beginning so that students can clearly understand the teacher's expectations. Displaying classroom rules helps your students maintain the expected behavior during class. Rules clearly describe the behavioral expectations teachers have of students, and after reading these rules and regulations students can successfully develop a respectful attitude. Classroom rules are more meaningful when students take part in forming them under the proper guidance of the teacher. It is the responsibility of teachers to clearly explain all classroom rules to students so that they can follow them strictly. Here is a template for classroom rules for academic activities.
Importance of Classroom Rules
Usually the teacher devises the important classroom rules at the beginning of the school year and displays them at the back of the classroom. That is the wrong place to display classroom rules, because these rules can be excellent tools for managing your class. The rules should be displayed prominently on a large poster board at the front of the classroom so that students remain aware of them. It is necessary to write all classroom rules in a clear and readable manner with dark-colored markers so that everyone can read them. With a proper display of classroom rules, you can enjoy the following benefits:
- By displaying classroom rules in the proper place, you reinforce students' adherence to these rules. Some rules are related to the safety of children in the classroom, and by writing these rules on the display board you can alert students to current happenings.
- Classroom rules build discipline and self-control in students, so it is important to display them in the right place for the whole year. These rules can reduce stress by informing students of what the teacher expects from them.
- In the presence of well-formatted classroom rules, your class can be a more pleasant place, and students can perform well in all learning as well as co-curricular activities.
- Classroom rules help teachers develop a consistent, fair, and friendly environment for students, which is really necessary for the proper mental and physical growth of a student.
- Rules help students behave according to the teacher's expectations during and after class. Students always remain careful and alert because they know about the expected punishments in case of any disobedience.
- Classroom rules develop positive and friendly behavior in the class because every student tries to be a role model in the class. Classroom rules should be displayed for the whole academic year to serve as a constant reminder for students.
I hope the points above help you understand the importance of properly displaying classroom rules in school.
Here is the download link: | http://www.bluelayouts.org/template/724.html |
4.125 | Our understanding of climate change began with intense debates amongst 19th century scientists about whether northern Europe had been covered by ice thousands of years ago. In the 1820s Jean Baptiste Joseph Fourier discovered that "greenhouse gases" trap heat radiated from the Earth's surface after it has absorbed energy from the sun. In 1859 John Tyndall suggested that ice ages were caused by a decrease in the amount of atmospheric carbon dioxide. In 1896 Svante Arrhenius showed that doubling the carbon dioxide content of the air would gradually raise global temperatures by 5-6C - a remarkably prescient result that was virtually ignored by scientists obsessed with explaining the ice ages.
The idea of global warming languished until 1938, when Guy S Callender suggested that the warming trend revealed in the 19th century had been caused by a 10% increase in atmospheric carbon dioxide from the burning of fossil fuels. At this point scientists were not alarmed, as they were confident that most of the carbon dioxide emitted by humans had dissolved safely in the oceans. However, this notion was dispelled in 1957 by Hans Suess and Roger Revelle, who discovered a complex chemical buffering system which prevents sea water from holding on to much atmospheric carbon dioxide.
The possibility that humans could contribute to global warming was now being taken seriously by scientists, and by the early 1960s some had begun to raise the spectre of severe climate change within a century. They had started to collect evidence to test the idea that global temperatures were increasing alongside greenhouse gas emissions, and to construct mathematical models to predict future climates.
In 1958 Charles Keeling began long-term measurements of atmospheric carbon dioxide at the Mauna Loa observatory in Hawaii. Looked at now, the figures show an indisputable annual increase, with roughly 30% more of the gas relative to pre-industrial levels in today's atmosphere - higher than at any time in the last 700,000 years. Temperature readings reveal an average warming of 0.5-0.6C over the last 150 years.
Climate change sceptics have pointed out that these records could have been due to creeping urbanisation around weather stations, but it is now widely accepted that this 'urban heat island effect' is relatively unimportant and that it doesn't explain why most of the warming has been detected far away from cities, over the oceans and the poles.
Since the 1960s, evidence of global warming has continued to accumulate. In 1998 Michael Mann and colleagues published a detailed analysis of global average temperature over the last millennium known as the "hockey stick graph", revealing a rapid temperature increase since the industrial revolution. Despite concerted efforts to find fault with Mann's methodology, his basic result is now accepted as sound. Then, in 2005, just as the Kyoto Protocol for limiting greenhouse gas emissions was ratified, James Hansen and his team detected a dramatic warming of the world's oceans - just as expected in a warming world.
There is now little doubt that the temperature increase over the last 150 years is real, but debate still surrounds the causes. We know that the warming during the first half of the last century was almost certainly due to a more vigorous output of solar energy, and some scientists have suggested that increased solar activity and greater volcanic emissions of carbon dioxide are responsible for all of the increase. But others point out that during the last 50 years the sun and volcanoes have been less active and could not have caused the warming over that period.
By 2005 a widespread scientific consensus had emerged that serious, large-scale disruption could occur around 2050, once average global temperature increase exceeds about 2C, leading to abrupt and irreversible changes. These include the melting of a large proportion of the Greenland ice cap (now already under way), the reconfiguration of the global oceanic circulation, the disappearance of the Amazon forest, the emission of methane from permafrost and undersea methane hydrates, and the release of carbon dioxide from soils.
This new theory of "abrupt climate change" has overturned earlier predictions of gradual change, and has prompted some scientists to warn that unmitigated climate change could lead to the complete collapse of civilisation. Fears have been fuelled by the possibility that smoke, hazes and particles from burning vegetation and fossil fuels could be masking global warming by bouncing solar energy back to space. This "global dimming" effect is diminishing as we clean up air pollution. As a result global average temperature could rise by as much as 10 degrees Celsius by the close of the century - a catastrophic increase.
A more conservative assessment by the Intergovernmental Panel on Climate Change (IPCC) in 2001 indicated that with unabated carbon emissions, global temperature could rise gradually to around 5.8C by 2100. An increase of this nature would still threaten the lives of millions of people, particularly in the global south, due to sea level rise and extreme weather events.
Although some people still deny that climate change is a problem we can do something about, last year the UK government indicated that it was on board. The Stern Review showed that without immediate and relatively inexpensive action, climate change would lead to severe and permanent global economic depression by 2050. There is now a strong scientific and economic consensus about the severity of the climate crisis.
· Stephan Harding is Coordinator of the MSc in Holistic Science at Schumacher College in Devon, UK. He is the author of Animate Earth: Science, Intuition and Gaia. To order a copy for £9.95 with free UK p&p call 0870 836 0875 or go to theguardian.com/bookshop. | http://www.theguardian.com/environment/2007/jan/08/climatechange.climatechangeenvironment?INTCMP=ILCNETTXT3487 |
4.03125 | Healthy Muscles Matter
Basic facts about muscles
Did you know you have more than 600 muscles in your body? These muscles help you move, lift things, pump blood through your body, and even help you breathe.
When you think about your muscles, you probably think most about the ones you can control. These are your voluntary (VOL-uhn-ter-ee) muscles, which means you can control their movements. They are also called skeletal (SKEL-i-tl) muscles, because they attach to your bones and work together with your bones to help you walk, run, pick up things, play an instrument, throw a baseball, kick a soccer ball, push a lawnmower, or ride a bicycle. The muscles of your mouth and throat even help you talk!
Keeping your muscles healthy will help you to be able to walk, run, jump, lift things, play sports, and do all the other things you love to do. Exercising, getting enough rest, and eating a balanced diet will help to keep your muscles healthy for life.
Why healthy muscles matter to you
Healthy muscles let you move freely and keep your body strong. They help you to enjoy playing sports, dancing, walking the dog, swimming, and other fun activities. And they help you do those other (not so fun) things that you have to do, like making the bed, vacuuming the carpet, or mowing the lawn.
Strong muscles also help to keep your joints in good shape. If the muscles around your knee, for example, get weak, you may be more likely to injure that knee. Strong muscles also help you keep your balance, so you are less likely to slip or fall.
And remember—the activities that make your skeletal muscles strong will also help to keep your heart muscle strong!
Different kinds of muscles have different jobs
Skeletal muscles are connected to your bones by tough cords of tissue called tendons (TEN-duhns). As the muscle contracts, it pulls on the tendon, which moves the bone. Bones are connected to other bones by ligaments (LIG-uh-muhnts), which are like tendons and help hold your skeleton together.
A joint showing muscles, ligaments, and tendons. (Representation)
Smooth muscles are also called involuntary muscles since you have no control over them. Smooth muscles work in your digestive system to move food along and push waste out of your body. They also help keep your eyes focused without your having to think about it.
Cardiac (KAR-dee-ak) muscle. Did you know your heart is also a muscle? It is a specialized type of involuntary muscle. It pumps blood through your body, changing its speed to keep up with the demands you put on it. It pumps more slowly when you’re sitting or lying down, and faster when you’re running or playing sports and your skeletal muscles need more blood to help them do their work.
What can go wrong?
Almost everyone has had sore muscles after exercising or working too much. Some soreness can be a normal part of healthy exercise. But, in other cases, muscles can become strained. Muscle strain (streyn) can range from mild (the muscle has just been stretched too much) to severe (the muscle actually tears). Maybe you lifted something that was too heavy and the muscles in your arms were stretched too far. Lifting heavy things in the wrong way can also strain the muscles in your back. This can be very painful and can even cause an injury that will last a long time and make it hard to do everyday things.
The tendons that connect the muscles to the bones can also be strained if they are pulled or stretched too much. If ligaments (remember, they connect bones to bones) are stretched or pulled too much, the injury is called a sprain (spreyn). Most people are familiar with the pain of a sprained ankle.
Contact sports like soccer, football, hockey, and wrestling can often cause strains. Sports in which you grip something (like gymnastics or tennis) can lead to strains in your hand or forearm.
How do I keep my muscles healthy?
Muscles that are not used will get smaller and weaker. This is known as atrophy.
When you make your muscles work by being physically active, they respond by growing stronger. They may even get bigger by adding more muscle tissue. This is how bodybuilders get such big muscles, but your muscles can be healthy without getting that big.
There are lots of activities you can do for your muscles. Walking, jogging, lifting weights, playing tennis, climbing stairs, jumping, and dancing are all good ways to exercise your muscles. Swimming and biking will also give your muscles a good workout. It’s important to get different kinds of activities to work all your muscles. And any activity that makes you breathe harder and faster will help exercise that important heart muscle as well!
Get 60 minutes of physical activity every day. It doesn’t have to be all at once, but it does need to be in at least 10-minute increments to count toward your 60 minutes of physical activity per day.
Eat a healthy diet
You really don’t need a special diet to keep your muscles in good health. Eating a balanced diet will help manage your weight and provide a variety of nutrients for your muscles and overall health. A balanced diet:
- Emphasizes fruits, vegetables, whole grains, and fat-free or low-fat dairy products like milk, cheese, and yogurt.
- Includes protein from lean meats, poultry, seafood, beans, eggs, and nuts.
- Is low in solid fats, saturated fats, cholesterol, salt (sodium), added sugars, and refined grains.
- Is as low as possible in trans fats.
- Balances calories taken in through food with calories burned in physical activity to help maintain a healthy weight.
As you grow and become an adult, iron is an important nutrient, especially for girls. Not getting enough iron can cause anemia (uh-NEE-me-uh), which can make you feel weak and tired because your muscles don’t get enough oxygen. This can also keep you from getting enough activity to keep your muscles healthy. You can get iron from foods like lean beef, chicken and turkey; beans and peas; spinach; and iron-enriched breads and cereals. You can also get iron from dietary supplements, but it’s always good to check with a doctor first.
Some people think that supplements will make their muscles bigger and stronger. However, supplements like creatine can cause serious side effects, and protein and amino acid supplements are no better than getting protein from your food. Using steroids to increase your muscles is illegal (unless a doctor has prescribed them for a medical problem), and can have dangerous side effects. No muscle-building supplement can take the place of good nutrition and proper training.
For more information on a healthy diet, see www.choosemyplate.gov/food-groups/.
To help prevent sprains, strains, and other muscle injuries:
- Warm up and cool down. Before exercising or playing sports, warm-up exercises, such as stretching and light jogging, may make it less likely that you’ll strain a muscle. They are called warm-up exercises because they make the muscles warmer—and more flexible. Cool-down exercises loosen muscles that have tightened during exercise.
- Wear the proper protective gear for your sport, for example pads or helmets. This will help reduce your risk for injuring your muscles or joints.
- Remember to drink lots of water while you’re playing or exercising, especially in warm weather. If your body’s water level gets too low (dehydration), you could get dizzy or even pass out. Dehydration can cause many medical problems.
- Don’t try to “play through the pain.” If something starts to hurt, STOP exercising or playing. You might need to see a doctor, or you might just need to rest the injured part for a while.
- If you have been inactive, “start low and go slow” by gradually increasing how often and how long activities are done. Increase physical activity gradually over time.
- Be careful when you lift heavy objects. Keep your back straight and bend your knees to lift the object. This will protect the muscles in your back and put most of the weight on the strong muscles in your legs. Get someone to help you lift something heavy.
Keeping your muscles healthy will help you have more fun and enjoy the things you do. Healthy muscles will help you look your best and feel full of energy. Start good habits now, while you are young, and you’ll have a better chance of keeping your muscles healthy for the rest of your life.
Anemia. Anemia (uh-NEE-me-uh) is a condition in which your blood has a lower than normal number of red blood cells.
Atrophy (A-truh-fee). Wasting away of the body or of an organ or part, as from deficient nutrition, nerve damage, or lack of use.
Cardiac (KAR-dee-ak) muscle. The heart muscle. An involuntary muscle over which you have no control.
Dehydration (dee-hahy-DREY-shun). A condition that occurs when you lose more fluids than you take in. Your body is about two-thirds water. When you get dehydrated, it means the amount of water in your body has dropped below the level needed for normal body function.
Involuntary muscle. Muscles that you cannot control.
Ligament (LIG-uh-muhnt). Tough cords of tissue that connect bones to other bones at a joint.
Skeletal (SKEL-i-tl) muscle. Muscles that attach to bones.
Sprain (spreyn). A stretched or torn ligament. Ankle and wrist sprains are common. Symptoms include pain, swelling, bruising, and being unable to move the joint.
Strain (streyn). A stretched or torn muscle or tendon. Twisting or pulling these tissues can cause a strain. Strains can happen suddenly or develop over time. Back and hamstring muscle strains are common. Many people get strains playing sports.
Tendon (TEN-duhn). Tough cords of tissue that connect muscles to bones.
Voluntary (VOL-uhn-ter-ee) muscles. Muscles that you can control.
For more information
National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS)
National Institutes of Health
This fact sheet was made for you by the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), a part of the U.S. Department of Health and Human Services’ National Institutes of Health. For more information about the NIAMS, call the information clearinghouse at 301-495-4484 or toll free at 877-22-NIAMS (226-4267) or visit the NIAMS website at www.niams.nih.gov.
NIH Publication No. 15-7579(M) | http://www.niams.nih.gov/health_info/Kids/healthy_muscles.asp |
4.125 | Suppose you were traveling on a paddle boat at a constant speed. In 6 minutes, you traveled x meters, and in 10 minutes, you traveled x+4 meters. Could you find the value of x in this scenario? If so, how would you do it? After completing this Concept, you'll be able to solve rational equations using proportions so that you can handle this type of problem.
Solution of Rational Equations
You are now ready to solve rational equations! There are two main methods you will learn to solve rational equations:
- Cross products
- Lowest common denominators
In this Concept you will learn how to solve using cross products.
Solving a Rational Proportion
When two rational expressions are equal, a proportion is created and can be solved using its cross products.
For example, to solve x/5 = (x+1)/2, cross multiply and set the products equal: 2x = 5(x+1), so 2x = 5x + 5, which gives −3x = 5 and x = −5/3.
Solve for x: 2x² − 5x − 20 = 0
Notice that this equation has a degree of two; that is, it is a quadratic equation. We can solve it using the quadratic formula.
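Written out with the quadratic formula, using a = 2, b = −5, c = −20 from the equation above (a worked sketch consistent with the roots printed below):

    x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
      = \frac{-(-5) \pm \sqrt{(-5)^2 - 4(2)(-20)}}{2(2)}
      = \frac{5 \pm \sqrt{25 + 160}}{4}
      = \frac{5 \pm \sqrt{185}}{4}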
x = (5 ± √185)/4 ⇒ x ≈ −2.15 or x ≈ 4.65
Start by cross multiplying; the products give the quadratic 3x² − 5x − 2 = 0:
Since this equation has a squared term as its highest power, it is a quadratic equation. We can solve this by using the quadratic formula, or by factoring.
1. Since there are no common factors, start by finding the product of the coefficient in front of the squared term and the constant: 3 · (−2) = −6.
2. What factors of −6 add up to −5? That would be −6 and 1, since −6 + 1 = −5.
3. Factor, beginning by breaking up the middle term, −5x, as above: 3x² − 6x + x − 2 = 3x(x − 2) + 1(x − 2).
4. Use the Zero Product Principle:
(3x + 1)(x − 2) = 0 ⇒ 3x + 1 = 0 or x − 2 = 0 ⇒ x = −1/3 or x = 2
Cross multiply: −x/2 = (3x − 8)/x becomes −x² = 2(3x − 8).
Set one side equal to zero to get a quadratic equation: 0 = x² + 2(3x − 8).
Simplify by distributing: 0 = x² + 6x − 16.
Factor by determining −16 = 8 · (−2) and 6 = 8 + (−2): 0 = x² − 2x + 8x − 16 = x(x − 2) + 8(x − 2) = (x + 8)(x − 2).
Use the zero product principle: (x + 8)(x − 2) = 0 ⇒ x + 8 = 0 or x − 2 = 0 ⇒ x = −8 or x = 2.
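As a quick numerical sanity check, here is a small, hypothetical C++ snippet (not part of the original CK-12 lesson; the function name is mine) that plugs each candidate root back into both sides of the proportion −x/2 = (3x − 8)/x:

    #include <cmath>
    #include <cstdio>

    // Evaluate both sides of -x/2 = (3x - 8)/x for a candidate root
    // (x must be nonzero, since x appears in a denominator).
    void check(double x) {
        double lhs = -x / 2.0;
        double rhs = (3.0 * x - 8.0) / x;
        std::printf("x = %5.2f: lhs = %6.3f, rhs = %6.3f -> %s\n",
                    x, lhs, rhs,
                    std::fabs(lhs - rhs) < 1e-9 ? "root" : "not a root");
    }

    int main() {
        check(-8.0); // expected root
        check(2.0);  // expected root
        check(1.0);  // control value that should fail
        return 0;
    }

Both x = −8 and x = 2 make the two sides equal (4 = 4 and −1 = −1, respectively), confirming the algebra above.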
Sample explanations for some of the practice exercises below are available by viewing the following videos. Note that there is not always a match between the number of the practice exercise in the videos and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Solving Rational Equations (12:57)
Solve the following equations.
- Divide: −29/10 ÷ −15/8.
- Solve for g: −1.5(−3 4/5 + g) = 20 1/20.
- Find the discriminant of 6x² + 3x + 4 = 0 and determine the nature of the roots.
- Simplify 6b/(2b + 2) + 3.
- Simplify 8/(2x − 4) − 5x/(x − 5).
- Divide: (7x² + 16x − 10) ÷ (x + 3).
- Simplify (n − 1) · (3n + 2)/(n − 4).
Answers for Explore More Problems
To view the Explore More answers, open this PDF file and look for section 12.8. | http://www.ck12.org/book/CK-12-Basic-Algebra-Concepts/section/12.8/ |
4.21875 | Synopses & Reviews
- Pronunciation guides to less familiar words.
- Includes chart that explains the game of dreidel.
- Holiday books are always in demand in schools and libraries, but the controlled vocabulary of Rookie Books makes them especially marketable.
- Excellent resource for classroom unit on holidays.
- Text has fewer than 400 words.
- Word list.
Grades K-4 Social Studies Standards
- People, societies, and cultures address needs and concerns in ways that are both similar and different
- Language, folktales, music, and art serve as expressions of culture
- Cultural unity and diversity can be identified within and across groups
Time, Continuity, and Change: II
- Accounts of past events, people, places, and situations contribute to our understanding of the past
Global Connections: IX
- Explore ways in which language, the arts, beliefs, etc. facilitate global understanding or lead to misunderstanding | http://www.powells.com/book/diwali-9780531118351 |
4.03125 | CAESER CIPHER – THE SHIFT CIPHER
2. History & Development
3. How It Works
4. C++ Source Code [Encryption]
5. C++ Source Code [Decryption]
6. Step By Step Explanation [Encryption]
7. Step By Step Explanation [Decryption]
8. Pros & Cons
In cryptography, a Caesar cipher, also known as Caesar's cipher, the shift cipher, Caesar's code or Caesar shift, is one of the simplest and most widely known encryption techniques. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on. The method is named after Julius Caesar, who used it in his private correspondence.
The encryption step performed by a Caesar cipher is often incorporated as part of more complex schemes, such as the Vigenère cipher, and still has modern application in the ROT13 system. As with all single alphabet substitution ciphers, the Caesar cipher is easily broken and in modern practice offers essentially no communication security.
HISTORY & DEVELOPMENT
The cipher was named after Julius Caesar. It was used around 50 BC by notable Romans, including Julius Caesar. In cryptography, the Caesar cipher is also known as Caesar's shift or the shift cipher. Julius Caesar used the Caesar cipher to communicate with his generals during military campaigns, protecting and encrypting messages that were important to the military and the government.
The Caesar Cipher is a type of substitution cipher: each letter in the plaintext is replaced by a letter some fixed number of positions further down the alphabet. It is a "monoalphabetic substitution cipher", meaning each plaintext letter is always represented by the same single ciphertext letter.
According to Suetonius, who wrote the book Life of Julius Caesar, Julius Caesar used the substitution cipher with a shift of three, meaning he shifted each letter 3 places further through the alphabet, so a (plaintext) becomes D (ciphertext).
“If he had anything confidential to say, he wrote it in cipher, that is, by so changing the order of the letters of the alphabet, that not a word could be made out. If anyone wishes to decipher these, and get at their meaning, he must substitute the fourth letter of the alphabet, namely D, for A, and so with the others." (Suetonius, Life of Julius Caesar, 56)
Julius Caesar's grand-nephew, Augustus, also used the Caesar cipher, but with a shift of only one letter. According to David Kahn's book, The Codebreakers, lovers used the Caesar code to communicate secretly. In the Caesar cipher, the plaintext is usually written in lower case while the ciphertext is in upper case.
The ROT13 is an application of the Caesar cipher. ROT13 replaces each letter by its partner 13 characters further down the alphabet. As the alphabet consists of 26 letters, the ROT13 function is its own inverse, meaning C becomes P and P becomes C.
HOW IT WORKS
The Caesar Cipher replaces each letter in the plaintext (the alphabet) with the letter a fixed number of places further down the alphabet. For example, the diagram below uses a shift of 3, also known as the Caesar shift.
As such, the letter B in the plaintext becomes E in the ciphertext. Here is a sample of the revolvable cipher which makes encryption much more convenient by turning the inner and outer wheel.
The outer wheel is the original alphabet or plaintext and the inner wheel is the ciphertext which can be adjusted accordingly.
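The essay's own C++ listings are not reproduced in this excerpt, so here is a rough, hypothetical C++ sketch of the shift just described (the function and variable names are mine, not the essay's; the shift is assumed to satisfy 0 <= n < 26):

    #include <cctype>
    #include <iostream>
    #include <string>

    // Encrypt plaintext with a Caesar shift of n places (0 <= n < 26).
    // Letters wrap around the 26-letter alphabet; other characters pass through.
    std::string caesarEncrypt(const std::string& plaintext, int shift) {
        std::string ciphertext;
        for (char c : plaintext) {
            if (std::isupper(static_cast<unsigned char>(c))) {
                ciphertext += static_cast<char>('A' + (c - 'A' + shift) % 26);
            } else if (std::islower(static_cast<unsigned char>(c))) {
                ciphertext += static_cast<char>('a' + (c - 'a' + shift) % 26);
            } else {
                ciphertext += c; // spaces, digits, punctuation are unchanged
            }
        }
        return ciphertext;
    }

    int main() {
        // Caesar's classic shift of 3: "attack at dawn" -> "dwwdfn dw gdzq"
        std::cout << caesarEncrypt("attack at dawn", 3) << '\n';
        // ROT13 (shift 13) undoes itself when applied twice: prints "Hello"
        std::cout << caesarEncrypt(caesarEncrypt("Hello", 13), 13) << '\n';
        return 0;
    }

Decryption is simply encryption with the complementary shift, 26 − n, and ROT13 is the special case n = 13, which returns the original text when applied twice.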
To encrypt a phrase or word, we could use the table above, where the letters in the dark blue boxes represent the plaintext or the alphabet while the letters in the boxes shaded light blue are the ciphertext according to Caesar’s Shift of 3. Therefore, to... | http://www.studymode.com/essays/Caeser-Cipher-The-Shift-Cipher-1781433.html |
Analyzing the matrix arithmetic operations
Learn about properties of matrix addition and multiplication, like commutativity, the associative property, and the distributive property.
Sal discusses the conditions of matrix dimensions for which addition or multiplication are defined.
Sal checks whether the commutative property applies for matrix multiplication. In other words, he checks whether for any two matrices A and B, A*B=B*A (the answer is NO, by the way).
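To see why the answer is no, here is a small, hypothetical C++ example (not from the video) that multiplies two 2×2 matrices in both orders:

    #include <array>
    #include <cstdio>

    using Mat2 = std::array<std::array<int, 2>, 2>;

    // Multiply two 2x2 matrices with the usual row-by-column rule.
    Mat2 multiply(const Mat2& a, const Mat2& b) {
        Mat2 c{}; // zero-initialized
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                for (int k = 0; k < 2; ++k)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    int main() {
        Mat2 a{{{1, 2}, {3, 4}}};
        Mat2 b{{{0, 1}, {1, 0}}};
        Mat2 ab = multiply(a, b); // swaps the columns of a: [[2, 1], [4, 3]]
        Mat2 ba = multiply(b, a); // swaps the rows of a:    [[3, 4], [1, 2]]
        std::printf("A*B = [[%d, %d], [%d, %d]]\n", ab[0][0], ab[0][1], ab[1][0], ab[1][1]);
        std::printf("B*A = [[%d, %d], [%d, %d]]\n", ba[0][0], ba[0][1], ba[1][0], ba[1][1]);
        return 0;
    }

A single counterexample like this is enough to show that matrix multiplication is not commutative, even though (as the next video shows) it is associative.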
Sal shows that matrix multiplication is associative. Mathematically, this means that for any three matrices A, B, and C, (A*B)*C=A*(B*C).
Sal determines which of a few optional matrix expressions is equivalent to the matrix expression A*B*C. This is done using what we know about the properties of matrix addition and multiplication. | https://www.khanacademy.org/math/precalculus/precalc-matrices/analyzing-matrix-operations |
4.40625 | Scanning electron microscope image of Vibrio cholerae
Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea that lasts a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. The dehydration may result in the skin turning bluish. Symptoms start two hours to five days after exposure.
Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by water and food that has been contaminated with human feces containing the bacteria. Insufficiently cooked seafood is a common source. Humans are the only animal affected. Risk factors for the disease include poor sanitation, not enough clean drinking water, and poverty. There are concerns that rising sea levels will increase rates of disease. Cholera can be diagnosed by a stool test. A rapid dipstick test is available but is not as accurate.
Prevention involves improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months. They have the added benefit of protecting against another type of diarrhea caused by E. coli. The primary treatment is oral rehydration therapy—the replacement of fluids with slightly sweet and salty solutions. Rice-based solutions are preferred. Zinc supplementation is useful in children. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. Testing to see what antibiotic the cholera is susceptible to can help guide the choice.
Cholera affects an estimated 3–5 million people worldwide and causes 58,000–130,000 deaths a year as of 2010. While it is currently classified as a pandemic, it is rare in the developed world. Children are mostly affected. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and south-east Asia. While the risk of death among those affected is usually less than 5%, it may be as high as 50% among some groups who do not have access to treatment. Historical descriptions of cholera are found as early as the 5th century BC in Sanskrit. The study of cholera by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology.
Signs and symptoms
The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce 10 to 20 litres (3 to 5 US gal) of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3 to 100. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids.
Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic, and might have sunken eyes, dry mouth, cold clammy skin, decreased skin turgor, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte losses and ion shifts are common, especially in children.
About 100 million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is less in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or children who are malnourished, are more likely to experience a severe case if they become infected. Any individual, even a healthy adult in middle age, can experience a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider.[medical citation needed]
The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are thus not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the gastrointestinal epithelium, thus reducing the effects of an infection.
Cholera is typically transmitted to humans by either contaminated food or water. Most cholera cases in developed countries are a result of transmission by food, while in the developing world it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton.
People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. The source of the contamination is typically other cholera sufferers when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any infected water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person.[medical citation needed]
When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down much protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive.
Once the cholera bacteria reach the intestinal wall they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins which they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person a watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place.[medical citation needed]
The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to secretion of H2O, Na+, K+, Cl−, and HCO3− into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. cholerae carry a variant of a temperate bacteriophage called CTXφ.
Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six litres of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated if an appropriate mixture of dilute salt water and sugar is not taken to replace the blood's water and salts lost in the diarrhea.[medical citation needed]
By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants.[medical citation needed] In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine."
Amplified fragment length polymorphism fingerprinting of the pandemic isolates of V. cholerae has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent.[better source needed]
A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment is usually started without or before confirmation by laboratory analysis.
Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory.
Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States.
The World Health Organization recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas, and in preventing cholera or indirectly facilitating its spread.
Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to nearly universal advanced water treatment and sanitation practices, cholera is no longer a major health threat. The last major outbreak of cholera in the United States occurred in 1910–1911. Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted:[medical citation needed]
- Sterilization: Proper disposal and treatment of infected fecal waste water produced by cholera victims and all contaminated materials (e.g. clothing, bedding, etc.) are essential. All materials that come in contact with cholera patients should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents.
- Sewage: antibacterial treatment of general sewage by chlorine, ozone, ultraviolet light or other effective treatment before it enters the waterways or underground water supplies helps prevent undiagnosed patients from inadvertently spreading the disease.
- Sources: Warnings about possible cholera contamination should be posted around contaminated water sources with directions on how to decontaminate the water (boiling, chlorination etc.) for possible use.
- Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g. by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water. Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases.
Handwashing with soap and/or ash is also recommended for cholera prevention by WHO Africa after visiting toilets and before handling food or eating
Surveillance and prompt reporting allow for containing cholera epidemics rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, therefore leading to coordinated response and assist in preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities.
A number of safe and effective oral vaccines for cholera are available. Dukoral, an orally administered, inactivated whole cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than 5 years old. However, as of 2010, it has limited availability. Work is under way to investigate the role of mass vaccination. The World Health Organization (WHO) recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment.
An effective and relatively cheap method to prevent the transmission of cholera is the use of a folded sari (a long cloth garment) to filter drinking water. In Bangladesh this practice was found to decrease rates of cholera by nearly half. It involves folding a sari four to eight times. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well.
Continued eating speeds the recovery of normal intestinal function. The World Health Organization recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: “Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently.”
The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy, which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours; for a 60 kg adult, for example, that is about 6 litres. This method was first tried on a mass scale during the Bangladesh Liberation War, and was found to have much success. Despite widespread beliefs, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their high sugar content may even harm water uptake.
If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste.
As there frequently is initially acidosis, the potassium level may be normal, even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. This may be done by eating foods high in potassium like bananas or green coconut water.
Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The World Health Organization only recommends antibiotics in those with severe dehydration.
Doxycycline is typically used first line, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported.
In many areas of the world, antibiotic resistance is increasing. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multiple drug-resistant cases. New generation antimicrobials have been discovered which are effective against cholera in in vitro studies.
In Bangladesh zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrhea stool by 10%. Supplementation appears to be also effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world.
If people with cholera are treated quickly and properly, the mortality rate is less than 1%; however, with untreated cholera, the mortality rate rises to 50–60%. For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill.
Cholera affects an estimated 3–5 million people worldwide, and causes 58,000–130,000 deaths a year as of 2010. This occurs mainly in the developing world. In the early 1980s, death rates are believed to have been greater than 3 million a year. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. Cholera remains both epidemic and endemic in many areas of the world.
Although much is known about the mechanisms behind the spread of cholera, this has not led to a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread, but bodies of water can serve as a reservoir, and seafood shipped long distances can spread the disease. Cholera was not known in the Americas for most of the 20th century, but it reappeared towards the end of that century.
The word cholera is from Greek: χολέρα kholera from χολή kholē "bile". Cholera likely has its origins in the Indian subcontinent; it has been prevalent in the Ganges delta since ancient times. Early outbreaks in the Indian subcontinent are believed to have been the result of poor living conditions as well as the presence of pools of still water, both of which are ideal conditions for cholera to thrive. The disease first spread by trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world. Seven cholera pandemics have occurred in the past 200 years, with the seventh pandemic originating in Indonesia in 1961.
The first cholera pandemic occurred in the Bengal region of India, lasting from 1817 through 1824. The disease dispersed from India to Southeast Asia, China, Japan, the Middle East, and southern Russia. The second pandemic lasted from 1827 to 1835 and particularly affected North America and Europe, as a result of advancements in transportation and global trade, and increased human migration, including soldiers. The third pandemic erupted in 1839, persisted until 1856, extended to North Africa, and reached South America, striking Brazil for the first time. Cholera hit the sub-Saharan African region during the fourth pandemic from 1863 to 1875. The fifth and sixth pandemics raged in 1881–1896 and 1899–1923. These epidemics were less fatal due to a greater understanding of the cholera bacteria. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics, while other areas, like Germany in 1892 and Naples from 1910–1911, experienced severe outbreaks. The final pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed El Tor, which still persists today in developing countries.
Since it became widespread in the 19th century, cholera has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people perished of the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera became the first reportable disease in the United States due to the significant effects it had on health. John Snow, in England, was the first to identify the importance of contaminated water as its cause in 1854. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but still heavily affects populations in developing countries.
In the past, vessels flew a yellow quarantine flag if any crew members or passengers were suffering from cholera. No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days. In modern sets of international maritime signal flags, the quarantine flag is yellow and black.
Historically, many different claimed remedies have existed in folklore. In the 1854–1855 outbreak in Naples, homeopathic camphor was used according to Hahnemann. T. J. Ritter's "Mother's Remedies" book lists tomato syrup as a home remedy from northern America, and elecampane was recommended in the United Kingdom according to William Thomas Fernie.
Cholera cases are much less frequent in developed countries where governments have helped to establish water sanitation practices and effective medical treatments. The United States, for example, used to have a severe cholera problem similar to those in some developing countries. There were three large cholera outbreaks in the 1800s, which can be attributed to Vibrio cholerae's spread through interior waterways like the Erie Canal and routes along the Eastern Seaboard. The island of Manhattan in New York City touched the Atlantic Ocean, where cholera collected just off the coast. At this time, New York City did not have as effective a sanitation system as it does today, so cholera was able to spread.
One of the major contributions to fighting cholera was made by the physician and pioneer medical scientist John Snow (1813–1858), who in 1854 found a link between cholera and contaminated drinking water. Dr. Snow proposed a microbial origin for epidemic cholera in 1849. In his major "state of the art" review of 1855, he proposed a substantially complete and correct model for the etiology of the disease. In two pioneering epidemiological field studies, he was able to demonstrate human sewage contamination was the most probable disease vector in two major epidemics in London in 1854. His model was not immediately accepted, but it was seen to be the more plausible, as medical microbiology developed over the next 30 years or so.
Cities in developed nations made massive investment in clean water supply and well-separated sewage treatment infrastructures between the mid-1850s and the 1900s. This eliminated the threat of cholera epidemics from the major developed cities in the world. In 1883, Robert Koch identified V. cholerae with a microscope as the bacillus causing the disease.
Robert Allan Phillips, working at the US Naval Medical Research Unit Two in Southeast Asia, evaluated the pathophysiology of the disease using modern laboratory chemistry techniques and developed a protocol for rehydration. His research led the Lasker Foundation to award him its prize in 1967.
More recently, in 2002, Alam, et al., studied stool samples from patients at the International Centre for Diarrhoeal Disease in Dhaka, Bangladesh. From the various experiments they conducted, the researchers found a correlation between the passage of V. cholerae through the human digestive system and an increased infectivity state. Furthermore, the researchers found the bacterium creates a hyperinfected state where genes that control biosynthesis of amino acids, iron uptake systems, and formation of periplasmic nitrate reductase complexes were induced just before defecation. These induced characteristics allow the cholera vibrios to survive in the "rice water" stools, an environment of limited oxygen and iron, of patients with a cholera infection.
Society and culture
In many developing countries, cholera still reaches its victims through contaminated water sources, and countries without proper sanitation techniques have greater incidence of the disease. Governments can play a role in this. In 2008, for example, the Zimbabwean cholera outbreak was due partly to the government's role, according to a report from the James Baker Institute. The Haitian government’s inability to provide safe drinking water after the 2010 earthquake led to an increase in cholera cases as well.
Similarly, South Africa’s cholera outbreak was exacerbated by the government’s policy of privatizing water programs. The wealthy elite of the country were able to afford safe water while others had to use water from cholera-infected rivers.[not in citation given]
According to Rita R. Colwell of the James Baker Institute, if cholera does begin to spread, government preparedness is crucial. A government's ability to contain the disease before it extends to other areas can prevent a high death toll and the development of an epidemic or even pandemic. Effective disease surveillance can ensure that cholera outbreaks are recognized as soon as possible and dealt with appropriately. Oftentimes, this will allow public health programs to determine and control the cause of the cases, whether it is unsanitary water or seafood that have accumulated a lot of Vibrio cholerae specimens. Having an effective surveillance program contributes to a government’s ability to prevent cholera from spreading. In the year 2000 in the state of Kerala in India, the Kottayam district was determined to be "Cholera-affected"; this pronouncement led to task forces that concentrated on educating citizens with 13,670 information sessions about human health. These task forces promoted the boiling of water to obtain safe water, and provided chlorine and oral rehydration salts. Ultimately, this helped to control the spread of the disease to other areas and minimize deaths. On the other hand, researchers have shown that most of the citizens infected during the 1991 cholera outbreak in Bangladesh lived in rural areas, and were not recognized by the government's surveillance program. This inhibited physicians' abilities to detect cholera cases early.
According to Colwell, the quality and inclusiveness of a country's health care system affects the control of cholera, as it did in the Zimbabwean cholera outbreak. While sanitation practices are important, when governments respond quickly and have readily available vaccines, the country will have a lower cholera death toll. Affordability of vaccines can be a problem; if the governments do not provide vaccinations, only the wealthy may be able to afford them and there will be a greater toll on the country's poor. The speed with which government leaders respond to cholera outbreaks is important.
Beyond its direct effects on the public health care system and water sanitation, a government can have indirect effects on cholera control and on the effectiveness of a response to cholera. A speedy government response backed by a fully functioning health care system and adequate financial resources can stop cholera's spread before it causes death, or the decline in education that occurs when children are kept out of school to minimize the risk of infection.
- Tchaikovsky's death has traditionally been attributed to cholera, most probably contracted through drinking contaminated water several days earlier. "Since the water was not boiled and cholera was affecting Saint Petersburg, such a connection is quite plausible ...." Tchaikovsky's mother died of cholera, and his father became sick with cholera at this time but made a full recovery. Some scholars, however, including English musicologist and Tchaikovsky authority David Brown and biographer Anthony Holden, have theorized that his death was a suicide.
- After the 2010 earthquake, an outbreak swept over Haiti, traced to a United Nations base. This marks the worst cholera outbreak in recent history, as well as the best documented cholera outbreak in modern public health.
Other famous people believed to have died of cholera include:
- Sadi Carnot, physicist, a founder of thermodynamics (d. 1832)
- Charles X, King of France (d. 1836)
- James K. Polk, eleventh president of the United States (d. 1849)
- Carl von Clausewitz, Prussian soldier and German military theorist (d. 1831)
- Elliot Bovill, Chief Justice of the Straits Settlements (d. 1893)
- Finkelstein, Richard. "Medical microbiology". Retrieved July 2015.
- "Cholera - Vibrio cholerae infection Information for Public Health & Medical Professionals". cdc.gov. January 6, 2015. Retrieved 17 March 2015.
- "Cholera vaccines: WHO position paper." (PDF). Weekly epidemiological record 13 (85): 117–128. Mar 26, 2010. PMID 20349546.
- Harris, JB; LaRocque, RC; Qadri, F; Ryan, ET; Calderwood, SB (30 June 2012). "Cholera.". Lancet 379 (9835): 2466–76. doi:10.1016/s0140-6736(12)60436-x. PMID 22748592.
- Bailey, Diane (2011). Cholera (1st ed.). New York: Rosen Pub. p. 7. ISBN 9781435894372.
- "Sources of Infection & Risk Factors". cdc.gov. November 7, 2014. Retrieved 17 March 2015.
- "Diagnosis and Detection". cdc.gov. February 10, 2015. Retrieved 17 March 2015.
- "Cholera - Vibrio cholerae infection Treatment". cdc.gov. November 7, 2014. Retrieved 17 March 2015.
- Lozano R, Naghavi M, Foreman K, Lim S, Shibuya K, Aboyans V, Abraham J, Adair T, Aggarwal R, Ahn SY; et al. (December 15, 2012). "Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010". Lancet 380 (9859): 2095–128. doi:10.1016/S0140-6736(12)61728-0. PMID 23245604.
- "Cholera - Vibrio cholerae infection". cdc.gov. October 27, 2014. Retrieved 17 March 2015.
- Timmreck, Thomas C. (2002). An introduction to epidemiology (3. ed.). Sudbury, Mass.: Jones and Bartlett Publishers. p. 77. ISBN 9780763700607.
- Sack DA, Sack RB, Nair GB, Siddique AK (January 2004). "Cholera". Lancet 363 (9404): 223–33. doi:10.1016/S0140-6736(03)15328-7. PMID 14738797.
- Azman AS, Rudolph KE, Cummings DA, Lessler J (November 2012). "The incubation period of cholera: A systematic review". Journal of Infection 66 (5): 432–438. doi:10.1016/j.jinf.2012.11.013. PMC 3677557. PMID 23201968.
- King AA, Ionides EL, Pascual M, Bouma MJ (August 2008). "Inapparent infections and cholera dynamics". Nature 454 (7206): 877–80. Bibcode:2008Natur.454..877K. doi:10.1038/nature07084. PMID 18704085.
- McElroy, Ann and Patricia K. Townsend. Medical Anthropology in Ecological Perspective. Boulder, CO: Westview, 2009, 375.
- Prevention and control of cholera outbreaks: WHO policy and recommendations, World Health Organization, Regional Office for the Eastern Mediterranean, undated but citing sources from ’07, ’04, ’03, ’04, and ’05.
- Bertranpetit J, Calafell F (1996). "Genetic and geographical variability in cystic fibrosis: evolutionary considerations". Ciba Found. Symp. 197: 97–114; discussion 114–8. PMID 8827370.
- Rita Colwell. Oceans, Climate, and Health: Cholera as a Model of Infectious Diseases in a Changing Environment. Rice University: James A Baker III Institute for Public Policy. Retrieved 2013-10-23.
- Ryan KJ, Ray CG (editors) (2004). Sherris Medical Microbiology (4th ed.). McGraw Hill. pp. 376–7. ISBN 0-8385-8529-9.
- Archivist (1997). "Cholera phage discovery". Arch Dis Child 76 (3): 274. doi:10.1136/adc.76.3.274.
- Almagro-Moreno, S; Pruss, K; Taylor, RK (May 2015). "Intestinal Colonization Dynamics of Vibrio cholerae.". PLOS Pathogens 11 (5): e1004787. doi:10.1371/journal.ppat.1004787. PMID 25996593.
- O'Neal CJ, Jobling MG, Holmes RK, Hol WG (2005). "Structural basis for the activation of cholera toxin by human ARF6-GTP". Science 309 (5737): 1093–6. Bibcode:2005Sci...309.1093O. doi:10.1126/science.1113398. PMID 16099990.
- DiRita VJ, Parsot C, Jander G, Mekalanos JJ (June 1991). "Regulatory cascade controls virulence in Vibrio cholerae". Proc. Natl. Acad. Sci. U.S.A. 88 (12): 5403–7. Bibcode:1991PNAS...88.5403D. doi:10.1073/pnas.88.12.5403. PMC 51881. PMID 2052618.
- Lan R, Reeves PR (January 2002). "Pandemic Spread of Cholera: Genetic Diversity and Relationships within the Seventh Pandemic Clone of Vibrio cholerae Determined by Amplified Fragment Length Polymorphism". Journal of Clinical Microbiology 40 (1): 172–181. doi:10.1128/JCM.40.1.172-181.2002. ISSN 0095-1137. PMC 120103. PMID 11773113.
- Sack DA, Sack RB, Chaignat CL (August 2006). "Getting serious about cholera". N. Engl. J. Med. 355 (7): 649–51. doi:10.1056/NEJMp068144. PMID 16914700.
- "Laboratory Methods for the Diagnosis of Epidemic Dysentery and Cholera" (PDF). Atlanta, GA: CDC. 1999. Retrieved 2010-02-01.
- "Cholera Fact Sheet", World Health Organization. who.int. Retrieved November 5, 2013.
- "Cholera Kills Boy. All Other Suspected Cases Now in Quarantine and Show No Alarming Symptoms." (PDF). New York Times. July 18, 1911. Retrieved 2008-07-28.
The sixth death from cholera since the arrival in this port from Naples of the steamship Moltke, thirteen days ago, occurred yesterday at Swineburne Island. The victim was Francesco Farando, 14 years old.
- "More Cholera in Port". Washington Post. October 10, 1910. Retrieved 2008-12-11.
A case of cholera developed today in the steerage of the Hamburg-American liner Moltke, which has been detained at quarantine as a possible cholera carrier since Monday last. Dr. A.H. Doty, health officer of the port, reported the case tonight with the additional information that another cholera patient from the Moltke is under treatment at Swinburne Island.
- "Cholera and food safety". WHO Regional Office for Africa. http://www.afro.who.int/index.php?option=com_docman&task=doc_download&gid=1712. Retrieved 18 December 2015.
- "Cholera: prevention and control". Health topics. WHO. 2008. Retrieved 2008-12-08.
- Sinclair D, Abba K, Zaman K, Qadri F, Graves PM (2011). Sinclair D, ed. "Oral vaccines for preventing cholera". Cochrane Database Syst Rev (3): CD008603. doi:10.1002/14651858.CD008603.pub2. PMID 21412922.
- "Is a vaccine available to prevent cholera?". CDC disease info: Cholera. 2010-10-22. Retrieved 2010-10-24.
- Graves PM, Deeks JJ, Demicheli V, Jefferson T (2010). Graves PM, ed. "Vaccines for preventing cholera: killed whole cell or other subunit vaccines (injected)". Cochrane Database Syst Rev (8): CD000974. doi:10.1002/14651858.CD000974.pub2. PMID 20687062.
- "Cholera vaccines". Health topics. WHO. 2008. Retrieved 2010-02-01.
- Ramamurthy T (2010). Epidemiological and Molecular Aspects on Cholera. Springer. p. 330. ISBN 9781603272650.
- Merrill RM (2010). Introduction to epidemiology. (5th ed.). Sudbury, Mass.: Jones and Bartlett Publishers. p. 43. ISBN 9780763766221.
- Starr C (2007). Biology: Today and Tomorrow with Physiology (2 ed.). Cengage Learning. p. 563. ISBN 9781111797010.
- THE TREATMENT OF DIARRHOEA, A manual for physicians and other senior health workers, World Health Organization, 2005. See page 10 (14 in PDF) and esp chapter 5; "MANAGEMENT OF SUSPECTED CHOLERA", pages 16-17 (20-21 in PDF).
- Community Health Worker Training Materials for Cholera Prevention and Control, CDC, slides at back are dated 11/17/2010. See esp pages 7-8.
- The Civil War That Killed Cholera, foreignpolicy.com.
- "Sugary drinks worsen stomach 'bug'". BBC News, Health, 22 April 2009, citing the National Institute for Health and Clinical Excellence. http://news.bbc.co.uk/2/hi/health/8010346.stm. Retrieved 19 December 2015.
- "Oral Rehydration Solutions: Made at Home". The Mother and Child Health and Education Trust. 2010. Retrieved 2010-10-29.
- "First steps for managing an outbreak of acute diarrhea" (PDF). World Health Organization Global Task Force on Cholera Control. Retrieved November 23, 2013.
- Cholera Treatment (Report). Centers for Disease Control and Prevention (CDC). November 28, 2011.
- "Cholera treatment". Molson Medical Informatics. 2007. Retrieved 2008-01-03.
- Krishna BV, Patil AB, Chandrasekhar MR (March 2006). "Fluoroquinolone-resistant Vibrio cholerae isolated during a cholera outbreak in India". Trans. R. Soc. Trop. Med. Hyg. 100 (3): 224–6. doi:10.1016/j.trstmh.2005.07.007. PMID 16246383.
- Mackay IM (editor) (2007). Real-Time PCR in microbiology: From diagnosis to characterization. Caister Academic Press. ISBN 978-1-904455-18-9.
- Ramamurthy T (2008). "Antibiotic resistance in Vibrio cholerae". Vibrio cholerae: Genomics and molecular biology. Caister Academic Press. ISBN 978-1-904455-33-2.
- Leibovici-Weissman, Y; Neuberger, A; Bitterman, R; Sinclair, D; Salam, MA; Paul, M (19 June 2014). "Antimicrobial drugs for treating cholera.". The Cochrane database of systematic reviews 6: CD008625. doi:10.1002/14651858.CD008625.pub2. PMID 24944120.
- Cholera-Zinc Treatment (Report). Centers for Disease Control and Prevention (CDC). November 28, 2011.
- Telmesani AM (May 2010). "Oral rehydration salts, zinc supplement and rota virus vaccine in the management of childhood acute diarrhea". Journal of family and community medicine 17 (2): 79–82. doi:10.4103/1319-1683.71988. PMC 3045093. PMID 21359029.
- Todar K. "Vibrio cholerae and Asiatic Cholera". Todar's Online Textbook of Bacteriology. Retrieved 2010-12-20.
- Presenter: Richard Knox (10 December 2010). "NPR News". Morning Edition. NPR.
- Reidl J, Klose KE (June 2002). "Vibrio cholerae and cholera: out of the water and into the host". FEMS Microbiol. Rev. 26 (2): 125–39. doi:10.1111/j.1574-6976.2002.tb00605.x. PMID 12069878.
- Blake PA (1993). "Epidemiology of cholera in the Americas". Gastroenterology clinics of North America 22 (3): 639–60. PMID 7691740.
- Rosenberg, Charles E. (1987). The cholera years: the United States in 1832, 1849 and 1866. Chicago: University of Chicago Press. ISBN 0-226-72677-0.
- "Cholera's seven pandemics". CBC News. October 22, 2010.
- McNeil J. Something New Under The Sun:An Environmental History of the Twentieth Century World (The Global Century Series).
- Aberth, John (2011). Plagues in World History. Lanham, MD: Rowman & Littlefield. p. 102. ISBN 978-0-7425-5705-5.
- Kelley Lee (2003) "Health impacts of globalization: towards global governance". Palgrave Macmillan. p.131. ISBN 0-333-80254-3
- Geoffrey A. Hosking (2001). "Russia and the Russians: a history". Harvard University Press. p.9. ISBN 0-674-00473-6
- Byrne JP (2008). Encyclopedia of Pestilence, Pandemics, and Plagues: A-M. ABC-CLIO. p. 99. ISBN 0-313-34102-8.
- J. N. Hays (2005). "Epidemics and pandemics: their impacts on human history". p.347. ISBN 1-85109-658-2
- Sehdev PS (November 2002). "The origin of quarantine". Clin. Infect. Dis. 35 (9): 1071–2. doi:10.1086/344062. PMID 12398064.
- The American Homoeopathic Review, vol. 6, no. 11–12, 1866, pp. 401–403. www.legatum.sk.
- "Cholera Infantum, Tomatoes Will Relieve". October 13, 2008.
- "Cholera", World Health Organization. who.int
- Pyle GF (1969). "The Diffusion of Cholera in the United States in the Nineteenth Century". Geographical Analysis 1: 59–75. doi:10.1111/j.1538-4632.1969.tb00605.x. PMID 11614509.
- Lacey SW (1995). "Cholera: calamitous past, ominous future". Clin. Infect. Dis. 20 (5): 1409–19. doi:10.1093/clinids/20.5.1409. PMID 7620035.
- Charles E. Rosenberg (2009). The Cholera Years the United States in 1832, 1849, and 1866. Chicago: University of Chicago Press. p. 74. ISBN 9780226726762.
- Filippo Pacini (1854) "Osservazioni microscopiche e deduzioni patologiche sul cholera asiatico" (Microscopic observations and pathological deductions on Asiatic cholera), Gazzetta Medica Italiana: Toscana, 2nd series, 4 (50): 397–401; 4 (51): 405–412.
- Reprinted (more legibly) as a pamphlet.
- Dr John Snow, The mode of communication of cholera, 2nd ed. (London, England: John Churchill, 1855).
- Aberth, John. Plagues in World History. Lanham, MD: Rowman & Littlefield, 2011, 101.
- "Albert Lasker Clinical Medical Research Award". Lasker Foundation. Retrieved January 7, 2014.
- Merrell DS, Butler SM, Qadri F, Dolganov NA, Alam A, Cohen MB, Calderwood SB, Schoolnik GK, Camilli A (June 2002). "Host-induced epidemic spread of the cholera bacterium". Nature 417 (6889): 642–5. Bibcode:2002Natur.417..642M. doi:10.1038/nature00778. PMC 2776822. PMID 12050664.
- "Cholera vaccines. A brief summary of the March 2010 position paper" (PDF). World Health Organization. Retrieved September 19, 2013.
- Walton DA, Ivers LC (2011). "Responding to cholera in post-earthquake Haiti". N. Engl. J. Med. 364 (1): 3–5. doi:10.1056/NEJMp1012997. PMID 21142690.
- Pauw J (2003). "The politics of underdevelopment: metered to death-how a water experiment caused riots and a cholera epidemic". Int J Health Serv 33 (4): 819–30. doi:10.2190/kf8j-5nqd-xcyu-u8q7. PMID 14758861.
- John TJ, Rajappan K, Arjunan KK (2004). "Communicable diseases monitored by disease surveillance in Kottayam district, Kerala state, India". Indian J. Med. Res. 120 (2): 86–93. PMID 15347857.
- Siddique AK, Zaman K, Baqui AH, Akram K, Mutsuddy P, Eusof A, Haider K, Islam S, Sack RB (June 1992). "Cholera epidemics in Bangladesh: 1985-1991". J Diarrhoeal Dis Res 10 (2): 79–86. PMID 1500643.
- DeRoeck D, Clemens JD, Nyamete A, Mahoney RT (2005). "Policymakers' views regarding the introduction of new-generation vaccines against typhoid fever, shigellosis and cholera in Asia". Vaccine 23 (21): 2762–2774. doi:10.1016/j.vaccine.2004.11.044. PMID 15780724.
- Pruyt, Eric (26 July 2009). "Cholera in Zimbabwe" (PDF). Delft University of Technology.
- Kapp C (February 2009). "Zimbabwe's humanitarian crisis worsens". Lancet 373 (9662): 447. doi:10.1016/S0140-6736(09)60151-3. PMID 19205080.
- Brown, Man and Music, 430–32; Holden, 371; Warrack, Tchaikovsky, 269–270.
- Neumayr A (1997). Music and medicine: Chopin, Smetana, Tchaikovsky, Mahler: Notes on their lives, works, and medical histories. Med-Ed Press. pp. 282–3 (summarizing various theories on what killed the composer Tchaikovsky, including his brother Modest's idea that Tchaikovsky drank cholera-infested water the day before he became ill).
- David Brown, Early Years, 46.
- Holden, 23.
- Brown, Man and Music, 431–35; Holden, 373–400.
- Asimov, Isaac (1982), Asimov's Biographical Encyclopedia of Science and Technology (2nd rev. ed.), Doubleday
- Susan Nagel, Marie Thérèse: Child of Terror, p. 349-350.
- Haynes SW (1997). James K. Polk and the Expansionist Impulse. New York: Longman. p. 191. ISBN 978-0-673-99001-3.
- Smith, Rupert, The Utility of Force, Penguin Books, 2006, page 57
- The Singapore Free Press and Mercantile Advertiser, 25 March 1893, Page 2
- Cholera—World Health Organization
- Cholera - Vibrio cholerae infection—Centers for Disease Control and Prevention
HTML: A Gentle Introduction
HyperText Markup Language (HTML) is a simple language for representing document format styles and links to other documents or media types, such as images or sound recordings. HTML can be used to create documents which contain styles such as underlined or bold-faced text. It can also be used to mix text, images, and sounds into a single document, the individual elements of which may be located on geographically distant systems around the world.
HTML is designed to create documents for the World Wide Web and it helps determine what is displayed when you are browsing documents with your favorite WWW browser. You can use it to create your home, or welcome page or to create a research document, article, or book. This document can then be viewed locally or made accessible for viewing by other Web surfers through the use of a WWW server, such as NCSA's or CERN's httpd daemon. This article introduces you to HTML basics so you may get started creating your own HTML documents.
You can use your favorite text editor to create an HTML document. The document will be composed of text, which will be displayed directly to the user, and markup tags, which are used to modify the appearance of the text or to incorporate images or sounds as part of the document. Tags are also used when referencing other documents or different locations within a document. Document references are called hypertext links or simply “links”.
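For example, here is a minimal sketch of a hypertext link written with the anchor tag, <A>; the address shown is only a placeholder, not a real document:

<A HREF="http://www.example.com/guide.html">Read the guide</A>

When a reader selects the highlighted text "Read the guide", the browser retrieves the document at the address given in the HREF attribute.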
A tag for indicating the start of a particular format is represented as a tag name enclosed in a pair of angle brackets. To indicate the termination of a format, the tag name is prefixed with a /. For instance, <I>Italics</I> would display the word “Italics” in italic format. Let's examine a simple HTML document.
<HEAD>
<TITLE>Sample Document</TITLE>
</HEAD>
<BODY>
<H1>A Sample HTML Document</H1>
Here is some <B>Bold text</B>, and here is some <I>Italic text</I>.
</BODY>
This makes use of a few basic tags. The text between the <HEAD> and </HEAD> tags is the document header. The header contains a <TITLE> tag, which indicates the start of the document title and is terminated by a </TITLE>. The title usually isn't displayed as part of the document text by most browsers, but instead is displayed in a special location. For instance, Mosaic displays the title in the title box at the top of the browser window.
After the header is the body of the document, which is contained between the <BODY> and </BODY> tags. Inside the body the <H1> represents the start of a first level document heading. There are six levels of headings. Each increase in level results in a decrease in the prominence with which a heading is displayed. For instance, you might want to use an H1 heading for displaying the document title in the document text, and then use H2 for subheadings.
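As a sketch of how heading levels might be combined (the headings here are invented for illustration):

<H1>A Report on Markup Languages</H1>
<H2>Introduction</H2>
Text introducing the report.
<H2>Details</H2>
<H3>A Finer Point</H3>
Text discussing a finer point.

Each browser chooses its own rendering, but an H2 heading should always appear less prominent than an H1, and an H3 less prominent than an H2.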
This is probably a good time to mention that tags are case-insensitive. Thus <TITLE> and <title> are the same tag; however, I will continue to capitalize document tags for clarity.
Physical format styles are used to indicate the specific physical appearance with which to display text. The following is a list of physical format tags:
<B> ... </B>
Displays text in bold face.
<I> ... </I>
Displays text in italics.
<U> ... </U>
Displays text underlined.
<TT> ... </TT>
Displays text using a typewriter font.
The problem with physical formats is that there is no guarantee that a particular browser will display the text as expected. A user may modify the fonts that a browser uses, or the browser may not even have the specified font style available. For instance, if a text mode browser is used to display a document, it is unlikely that italic text can be displayed at all. To avoid the ambiguity associated with the display of physical formats, you may use logical format tags. In fact, it is usually recommended that you use logical format tags, in preference to physical format tags, wherever you can.
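The article's list of logical format tags is not reproduced here, but as a brief sketch, two widely supported logical styles are <EM> for emphasis (which most browsers render as italics) and <STRONG> for strong emphasis (usually rendered as bold):

Here is some <EM>emphasized text</EM> and some <STRONG>strongly emphasized text</STRONG>.

A text-mode browser that cannot produce italics is still free to convey <EM> some other way, for example by underlining or reverse video, which is exactly the flexibility logical tags are meant to provide.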
In early August 2011, Congress and President Obama made a last-minute deal to avoid a default on the U.S. debt, but as the deadline to raise the debt ceiling approached, many people wondered what would happen if the government failed to reach an agreement. Some believed a default would send the global economy spiraling, while others said the consequences would be insignificant.
Credit rating agencies, such as Moody's and Standard & Poor's, warned that if the U.S. government didn't make a deal to raise the debt ceiling by August 2, 2011, they would downgrade the country's AAA rating. Treasury bonds and other asset classes would have felt the impact of a downgrade. Around $130 billion in municipal bonds would have been downgraded, and all other bonds would have come under review. The financial sector, which had previously assumed that the government could bail out major banks if they ran into trouble, would not have that safety net if the government were broke. A default would also have put a strain on the U.S. and its creditors. U.S. dollars, held largely in the form of Treasury bonds, account for a large percentage of global bank reserves. For example, China holds about $1 trillion in U.S. Treasury bonds. The value of China's holdings would have decreased if the U.S. defaulted.
The biggest issue with the default was the uncertainty. No one knew for sure just how much it would affect the already shaky U.S. economy or the global market, mainly because the U.S. hadn't been in the situation before. Some economists argue that the U.S. has defaulted twice on its debt: in 1790, when the federal government restructured bonds issued to fund the Revolutionary War; and in 1933, when Congress passed a bill making it illegal for creditors to demand payment in physical gold. However, even though both incidents were technically defaults, the creditors involved were paid. Therefore neither situation could shed much light on the possible U.S. default in August 2011.
Late in the evening of July 31, 2011, President Obama and Congressional leaders reached an agreement that would raise the debt ceiling by $2.4 trillion in two stages. They also agreed that $2.4 trillion in spending cuts would be made over the next decade and $900 billion in cuts would be made immediately. The House approved the plan on August 1st. The Senate approved it on the following day. A default was narrowly avoided. A bipartisan Congressional supercommittee was appointed and given until Thanksgiving to find $1.2 trillion in deficit reductions.
In November 2011, the Congressional supercommittee failed to agree on what programs to cut after more than 10 weeks of negotiations. Therefore, $1.2 trillion will automatically be cut from military spending, education, transportation, and Medicare. The committee's failure also set the stage for Congress to spend the next year battling over which programs would receive deeper cuts.
Throughout 2011, states also struggled with the stalled economy. Minnesota Democrats and Republicans failed to agree on a solution for the state's budget problems, and the government shut down over the summer. State employees were sent home without pay. Parks, historical sites, the Minnesota Zoo, and all major rest areas along highways were among the many state services closed. In November, Alabama's most populous county, Jefferson, filed the largest municipal bankruptcy in U.S. history. Jefferson County was over $4 billion in debt.
By December 2011, the federal unemployment rate decreased to 8.6%, its lowest level in two and a half years. While the debt crisis in Europe grew worse in the last days of 2011, the U.S. economy appeared to be on the mend.
However, even with a steadier economy, the U.S. Government found itself facing another default in the fall of 2013. On October 1, 2013, Congress failed to agree on a budget and pass a spending bill, causing the government to shut down. The failure to pass a bill was largely due to a standoff over the Affordable Care Act, also known as Obamacare. Already feeling pressure from the partial shutdown, Congress began tense negotiations in an effort to pass a budget by the debt ceiling deadline on October 17, 2013.
Some Americans felt the impact of the 2013 shutdown more than others. The partial shutdown meant that unemployment, social security and Medicare benefits would not be interrupted. The mail service would continue. Federal air traffic controllers and airport security screeners would still report to work. However, all national parks and Smithsonian museums closed. People seeking government backed mortgages and loans could experience delays. Active military personnel, about 1.4 million people, would stay on duty, but their paychecks would be delayed. Health and safety inspectors would stop workplace inspections except in emergency situations. Overall, the government shutdown forced about 800,000 federal workers off the job.
As the October 17 debt ceiling deadline approached, the shutdown continued while Congress scrambled to find an 11th-hour fiscal deal. Much like the situation in 2011, Congress came through at the last minute. On October 16, 2013, the night before the debt ceiling deadline, both the House and Senate approved a bill to fund the government until January 15, 2014, and raise the debt limit through February 7, 2014. The last-minute bill avoided a default and ended the 16-day government shutdown. It also ended the Republican standoff with President Obama over the Affordable Care Act.
With a new federal budget needed by early 2014, the stage was set for another Congressional standoff and possible default in just a few weeks. Hoping to avoid that, Obama spoke shortly after the Senate passed the latest bill. He urged Congress to move ahead to the next budget negotiation, "We've got to get out of the habit of governing by crisis. We could get all these things done even this year, if everybody comes together in a spirit of, how are we going to move this country forward and put the last three weeks behind us?"
Josh wants to show his pen pal where he lives in relation to his school. Josh lives three blocks from his school. In fact, he lives three blocks south of his school. In order to map where Josh lives in relation to his school, Josh has decided to graph the location of his school on a coordinate grid.
Do you know how to do this?
This Concept is about graphing ordered pairs in four quadrants. You will learn how to do this in this Concept.
Way back in an earlier Concept, you learned how to graph points on a coordinate grid. This coordinate grid only had one quadrant or section to it. This was necessary at the time because you didn’t know about integers yet. Here is a picture of the coordinate grid with only one quadrant.
Now let’s think back to that Concept and review some of the vocabulary associated with coordinate grids and graphing points.
Now if we are going to plot a point on the coordinate grid pictured above, we will have an x-coordinate and a y-coordinate. We go across the x-axis to the x value and then up to the y value, and that is where we plot the point.
Plot (3, 5) on the coordinate grid, then label it point A.
Now we have point A at (3, 5) graphed on the coordinate grid.
But this isn’t the only coordinate grid! Now that you know about integers, we can see all four quadrants of the coordinate grid. While in the past we only graphed points in one quadrant, there are actually FOUR quadrants to the coordinate grid. Let’s take a look.
Here you can see all four quadrants of the coordinate grid. If you look at each axis, you will see that there are positive and negative values on each axis. The x-axis has positive values to the right of the origin, and negative values to the left of the origin. The y-axis has positive values above the origin and negative values below the origin. We can plot points in all four quadrants.
How can we graph points in all four quadrants?
We can work on this in the same way that we did when we had only one quadrant. We use ordered pairs. There will be an x value and a y value in the ordered pair. The x value can be positive or negative, and the y value can be positive or negative. We start at the origin, move to the x value and then to the y value. Then we can graph the point.
Graph the point (-4, 3) and name it point B.
Here we started at the origin, worked our way to the left to negative four on the x-axis, then worked our way up to positive three on the y-axis. That is where we graphed point B.
Practice identifying each ordered pair on the coordinate grid.
Example A, B, C, D
Solution: A = (1,1), B = (-3,-1), C = (0, 4), D = (2, -3)
Here is the original problem once again.
Josh wants to show his pen pal where he lives in relation to his school. Josh lives three blocks from his school. In fact, he lives three blocks south of his school. In order to map where Josh lives in relation to his school, Josh has decided to graph the location of his school on a coordinate grid.
Do you know how to do this?
To accomplish this goal, Josh drew a coordinate grid like this one.
He wants to graph his school, which is three blocks north of his home.
To do this, Josh put his home at the origin which has the coordinates (0,0).
Since Josh lives three blocks south of his school, the school is three blocks north of his home, so we can put it at (0, 3).
These are the coordinates of Josh's school.
Here are the vocabulary words in this Concept.
- Quadrants
- the four sections of a coordinate grid
- Origin
- the place where the x-axis and y-axis meet at (0, 0)
- Ordered Pair
- the x and y values used to locate points on a coordinate grid
- x-axis
- the horizontal axis on the coordinate grid
- y-axis
- the vertical axis on the coordinate grid
- Coordinates
- the x and y values of an ordered pair
Here is one for you to try on your own.
Identify the coordinates of the following point. Use a coordinate grid to help you.
Begin at the origin. Move five units to the right of the origin and three units down. Where are you?
If we begin at the origin, we start at the coordinates (0, 0).
We move 5 units to the right on the x-axis; that is +5.
We move 3 units down on the y-axis; that is -3.
Our answer is (5, -3).
Directions: Identify the coordinates of each of the points plotted on the coordinate grid.
Directions: Answer the following questions.
11. What is the center point called?
12. What are it's coordinates?
13. If you move to the right of the origin, are the values positive or negative?
14. What is the horizontal line called?
15. What is the vertical line called?
Who were the Vikings?
A Viking is one of the Norse (Scandinavian) explorers, warriors, merchants, and pirates who raided and colonized wide areas of Europe from the late eighth to the early eleventh century. These Norsemen used their famed longships to travel as far east as Constantinople and the Volga River in Russia, and as far west as Iceland, Greenland, and Newfoundland. This period of Viking expansion is known as the Viking Age, and forms a major part of the medieval history of Scandinavia, the British Isles and Europe in general.
The period from the earliest recorded raids in the 790s until the Norman Conquest of England in 1066 is commonly known as the Viking Age of Scandinavian history. The Normans, however, were descended from Danish Vikings who were given feudal overlordship of areas in northern France — the Duchy of Normandy — in the 10th century. In that respect, descendants of the Vikings continued to have an influence in northern Europe. Likewise, King Harold Godwinson, the last Anglo-Saxon king of England, who was killed during the Norman invasion in 1066, was descended from Danish Vikings. Many of the medieval kings of Norway and Denmark were married to English and Scottish royalty, and Viking forces were often a factor in dynastic disputes prior to 1066.
Generally speaking, the Norwegians expanded to the north and west to places such as Ireland, Iceland and Greenland; the Danes to England and France, settling in the Danelaw (northern England) and Normandy; and the Swedes to the east. These nations, although distinct, were similar in culture and language. The names of Scandinavian kings are known only for the later part of the Viking Age, and only after the end of the Viking Age did the separate kingdoms acquire a distinct identity as nations, which went hand in hand with their Christianization. Thus the end of the Viking Age for the Scandinavians also marks the start of their relatively brief Middle Ages.
In 2000, PBS in the United States presented a two-hour feature program called "The Vikings".
Biology 2013, 2(2), 693-701; doi:10.3390/biology2020693
Abstract: Cyanobacteria and lichens living under sandstone surfaces in the McMurdo Dry Valleys require snow for moisture. Snow accumulated beyond a thin layer, however, is counterproductive, interfering with rock insolation, snow melting, and photosynthetic access to light. With this in mind, the facts that rock slope and direction control colonization, and that climate change results in regional extinctions, can be explained. Vertical cliffs, which lack snow cover and are perpetually dry, are devoid of organisms. Boulder tops and edges can trap snow, but gravity and wind prevent excessive buildup. There, the organisms flourish. In places where snow-thinning cannot occur and snow drifts collect, rocks may contain living or dead communities. In light of these observations, the possibility of finding extraterrestrial endolithic communities on Mars cannot be eliminated.
The Ross Desert, an unofficial geographic name referring to high-altitude (>1000 m) areas of the McMurdo Dry Valleys, is one of the coldest environments on Earth. Here, the air temperature does not rise much above 0 °C in the peak of summer. The year-round low temperatures create a secondary challenge for life: low water activity, or high aridity. While snow—the only form of precipitation in the region—falls regularly during the summer months, most of the snow is either blown away or sublimates without melting. Together, these two extremes—low temperatures and high aridity—create a desert environment where life is restricted to a few protected niches. Pioneering work by Imre Friedmann and his colleagues showed that the interior of sandstone is one such niche, occupied by cryptoendolithic cyanobacteria and lichens [2,3]. During the summer, the rocks are warmed by solar radiation or insolation, intermittently reaching temperatures high enough to melt snow and support biological activity [1,4,5]. In addition, the sandstones are translucent, especially when wet, with the outer centimeter of the rock, where the organisms reside, receiving 0.1%–1% of incident sunlight. The organisms also actively improve the optical properties of the surrounding sandstone by leaching iron from it [3,7].
The Ross Desert cryptoendoliths do not reside under all available sandstone surfaces, and they don't survive under all rock surfaces or at all locations. On Mount Fleming, for example, the community is mostly dead and fossilized. Early researchers attributed the absence of life in these locations to the absence of warm temperatures. North-facing slopes, which receive direct solar radiation and are, therefore, warm, are nearly always colonized. In contrast, south-facing slopes, which receive less insolation, are generally devoid of colonization. Taking this logic further, it was suggested that minor changes in temperature during periods of glaciation and global cooling can cause the endolithic community in an entire region to go extinct.
In this article, I present an alternative hypothesis, which emphasizes the volume of snow that a rock surface actually receives, or the effective snow condition. In an extreme cold climate where snow is the sole moisture source, photosynthetic microorganisms living within rocks are faced with unique ecological challenges. For instance, snow, unlike rain, cannot wet vertical surfaces. Hence, cliffs are perpetually dry. At the other extreme, a rock can be covered by too much snow. Under a thick snow cover, a rock may no longer receive sufficient insolation to reach temperatures high enough to melt snow. In addition, the light level within the rock may no longer be adequate to support photosynthesis. An ideal effective snow condition occurs on rocks that can trap some snow, but where gravity or frequent gusty winds can prevent excessive buildup. As shown below, all biological variations on Battleship Promontory, which were previously attributed to temperature, can be explained by variations in effective snow condition.
In light of these new observations, the generally-held notion that the surface of Mars is too cold to support extant life [7,8,9,10] should be revisited. Given the recent evidence that suitable rock types, frost formation, and conditions for stable liquid water all occur on Mars, in equatorial lowlands, the possibility of finding living endolithic microorganisms there cannot be eliminated.
2. Results and Discussion
2.1. Battleship Promontory: Correlation Between Biology and Snow
On Battleship Promontory (76°55′S, 161°58′E, elevation 1294 m), in the Convoy Range, sandstone rocks vary widely in size and shape, from outcrops tens of meters across, to boulders a few meters high, and to small stones forming a part of the rubble field (Figure 1). An opportunity to observe the effective snow condition presented itself during a field trip in late January, 2005. Following a significant snowfall, the snow covering the rocks was drastically re-arranged by wind.
Direct contact with snow is not always necessary for a rock to be colonized, and the presence of moisture is not the sole criterion for colonization. For instance, the feet of boulders and stones in loose rubble fields—kept moist by contact with damp soil—are uniformly colonized. Endolithic organisms can also exist within the lower surface of a thin overhang, apparently sustained by downward movement of moisture penetrating the upper surface. In the Dry Valleys, where winds frequently gust up to 15 meters per second, mostly from the southeast, some sandstone surfaces are heavily abraded and undergo grain-by-grain disintegration. Under these conditions, slow-growing endolithic organisms are unable to establish a foothold.
These special situations aside, contact with snow is essential for colonization. Hence, vertical cliffs, which cannot trap snow, are devoid of organisms. This is true for both north- and south-facing cliffs (Figure 1). These "abiotic" surfaces are covered by a relatively uniform dark red coating. This coating is the consequence, not the cause, of the rock's abiotic condition. Where such surfaces have access to moisture, for example, if they lie next to a colonized corner, the coating is destroyed by biological activity and recedes (Figure 1).
Moderately-sloped surfaces at the tops of boulders have the ideal effective snow condition. They can trap some snow, but excess snow either falls off or is blown away by strong winds. As a result, snow covers on these surfaces are thin, especially around the edges (Figure 2). North- and south-facing slopes are equally well colonized, suggesting that snow, not temperature, controls where the organisms can or cannot exist. The presence of microorganisms under these surfaces was confirmed both in the field (Figure 3) and by examining returned samples using scanning electron microscopy (Figure 4). These relatively dry surfaces are colonized primarily by the lichen-dominated community, while the permanently moist rocks on the ground generally harbor cyanobacteria.
Perhaps the strongest evidence that snow, not temperature, controls colonization on Battleship Promontory comes from flat, horizontal surfaces. Despite uniform insolation, these surfaces are not always uniformly colonized. Where the colonization is not uniform, it is correlated with the distribution of snow. It seems that the organisms prefer less, not more snow. An example of this observation is shown in Figure 5. This sandstone slab, situated in the lee of a boulder, was partially covered by 8–10 cm of snow (Figure 5a). The photographs in Figure 5b and 5c show a bird’s-eye view of the slab before and after the snow was deliberately removed for observation. The snow-free outer edge is actively colonized, but the snow-covered area shows little evidence of biological activity.
The colonized sandstone rocks on Battleship Promontory can be divided into two categories. First, there are surfaces elevated above the ground. Due to gravity- and wind-assisted snow removal, the effective snow condition of these rocks stays relatively constant and optimal regardless of snowfall volume. These communities appear to be all viable. Second, there are surfaces at ground level and surfaces in a topographic low (e.g., gullies), where snow removal cannot occur and drift snow accumulates. On these surfaces, the effective snow condition can vary considerably and change through time. A surface that is favorable in a climate with low annual precipitation may become unfavorable in a climate with high annual precipitation, and vice versa. In an extreme environment, where the organisms grow slowly, extinction occurs relatively quickly, but re-colonization would be slow. As a result, repeated episodes of colonization, death, and re-colonization may occur in these rocks.
2.2. Cause of Death on Mount Fleming and Horseshoe Mountain: Climate Cooling or Fluctuations in Precipitation?
Mount Fleming and Horseshoe Mountain, in the Asgard Range, are two sites where sandstone outcrops contain mostly dead and entirely dead microbial communities, respectively. Relative to Battleship Promontory, these sites are located farther south, slightly more inland, and at a higher elevation (2200 m). Accordingly, they have a colder climate. The mean January air temperature on Battleship Promontory is −16.6 °C. In contrast, the values for Mount Fleming and Horseshoe Mountain are −18.7 °C and −22.4 °C, respectively. Based on these data, Friedmann and his colleagues concluded that the cold limit separating hostile from life-supporting environments runs roughly through the area of Mount Fleming. This conclusion was based on the assumption that, at those locations, the communities went extinct because the temperatures no longer rose sufficiently to melt whatever snow there was. In light of the effective snow condition, it is possible that the Mount Fleming and Horseshoe Mountain communities went extinct from changes in annual precipitation, not from low temperature. The landscapes on Mount Fleming and Horseshoe Mountain are relatively flat, with little opportunity to trap snow. During periods of relatively high precipitation, it is possible that exposed surfaces were covered by a thin film of snow that permitted the rocks to warm up to a degree sufficient to produce meltwater. During drier periods with less snow, however, whatever snow there was could have been more effectively removed by the wind, thereby reducing the overall period of metabolic activity and increasing the probability of the death of the community. In this case, the isolated occurrence of colonies on Mount Fleming could be a sign of recovery of the ecosystem during what is now a wetter period.
2.3. Extraterrestrial Endolithic Microorganisms on Mars?
The surface of Mars is generally considered uninhabitable because of its low atmospheric pressure, somewhat less than 10 mbars. This pressure is below the triple point of water and so does not allow for the presence of stable liquid water. Numerical calculations by Lobitz and colleagues indicate, however, that this may not be the case across the entire planet. Specifically, in low-lying regions between the equator and 40°N, temperature and pressure conditions for stable liquid water may occur during summer months. In Utopia Planitia, favorable conditions may last for up to one third of the year. Frost formation in this region is well-documented by images returned by the Viking 2 lander (Figure 6). Furthermore, recent missions indicate that soil sulfate and gypsum, which are suitable for colonization by endolithic organisms, are widespread on Mars. Until we definitively establish the cold limit of life on Earth, the possibility that rocks in Utopia Planitia contain live microorganisms cannot be eliminated.
3. Experimental Section
Field surveys of biological activity relied on macroscopic biosignatures visible on the rock surface. Where possible, observations were documented by photography. The following protocol was used to prepare samples for electron microscopy. Specimens were rehydrated in saline phosphate buffer and then fixed in 0.5% formaldehyde and 1% glutaraldehyde for 15 minutes. Fixed specimens were dehydrated in an ethanol series: 15%, 30%, 50%, 75%, 95%, 100%, each for 15 minutes. After two additional changes in anhydrous ethanol, the specimens were placed in hexamethyldisilazane (HMDS) twice, each time for 30 minutes. After the second wash, the HMDS was decanted, and the specimens were air-dried. Specimens were carbon-coated and viewed using a scanning electron microscope (JSM-5610).
4. Conclusions
On Battleship Promontory, colonization barriers of microbial communities under sandstone surfaces are imposed by the uneven distribution of snow, not temperature. The presence of dead communities under some rock surfaces may be attributed to fluctuations in annual precipitation. On Mount Fleming and Horseshoe Mountain, the communities may have died during a period of climatic cooling or during a period when changes in annual precipitation caused the effective snow condition to become unfavorable. On Mars, extant endolithic communities may exist in equatorial lowlands.
Acknowledgments
Thanks to P. Conrad and R. Carlson for leading the field expedition, to C. McKay for discussions, to J. Nienow and R. Kreidberg for editing, to L. Wable for graphic assistance, and to three anonymous reviewers for comments that improved the manuscript. This work was in part supported by a grant from the NASA Astrobiology Program (NNX08AO45G).
References
- Friedmann, E.I.; McKay, C.P.; Nienow, J.A. The cryptoendolithic microbial environment in the Ross Desert of Antarctica: Satellite-transmitted continuous nanoclimate data, 1984–1986. Polar Biol. 1987, 7, 273–287.
- Friedmann, E.I.; Ocampo, R. Endolithic blue-green algae in the dry valleys: Primary producers in the Antarctic desert ecosystem. Science 1976, 193, 1247–1249.
- Friedmann, E.I. Endolithic microorganisms in the Antarctic cold desert. Science 1982, 215, 1045–1053.
- Friedmann, E.I. Melting snow in the dry valleys is a source of water for endolithic microorganisms. Antarctic J. 1978, 13, 162–163.
- McKay, C.P.; Friedmann, E.I. The cryptoendolithic microbial environment in the Antarctic cold desert: Temperature variations in nature. Polar Biol. 1985, 4, 19–25.
- Nienow, J.A.; McKay, C.P.; Friedmann, E.I. The cryptoendolithic microbial environment in the Ross Desert of Antarctica: Light in the photosynthetically active region. Microb. Ecol. 1988, 16, 271–289.
- Friedmann, E.I.; Weed, R. Microbial trace-fossil formation, biogenous, and abiotic weathering in the Antarctic cold desert. Science 1987, 236, 703–705.
- Friedmann, E.I.; Druk, A.Y.; McKay, C.P. Limits of life and microbial extinction in the Antarctic desert. Antarctic J. 1994, 29, 176–179.
- Friedmann, E.I. The Antarctic cold desert and the search for traces of life on Mars. Adv. Space Res. 1986, 6, 265–268.
- McKay, C.P.; Friedmann, E.I.; Wharton, R.A.; Davies, W.L. History of water on Mars: A biological perspective. Adv. Space Res. 1992, 12, 231–238.
- Sun, H.J.; Friedmann, E.I. Growth on geological time scales in the Antarctic cryptoendolithic microbial community. Geomicrobiol. J. 1999, 16, 193–202.
- Lobitz, B.; Wood, B.L.; Averner, M.M.; McKay, C.P. Use of spacecraft data to derive regions on Mars where liquid water would be stable. Proc. Natl. Acad. Sci. USA 2001, 98, 2132–2137.
- Dong, H.; Rech, J.A.; Jiang, H.; Sun, H.; Buck, B.J. Endolithic cyanobacteria in soil gypsum: Occurrences in Atacama (Chile), Mojave (USA), and Al-Jafr Basin (Jordan) Deserts. J. Geophys. Res. 2007, 112, G02030.
- Gendrin, A.; Mangold, N.; Bibring, J.P.; Langevin, Y.; Gondet, B.; Poulet, F.; Bonello, G.; Quantin, C.; Mustard, J.; Arvidson, R.; LeMouélic, S. Sulfates in Martian layered terrains: The OMEGA/Mars Express view. Science 2005, 307, 1587–1591.
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
Easy drama exercises
- Pass around an ordinary object such as a pencil or pen; have each child imagine it is something else and show us without words what it has become.
- Have the class stand in a circle. One person crosses the circle expressing an emotion, such as joy or anger, and gives the emotion to another person. That person takes the emotion into the circle, changes it to something else, and passes it to the next person. Always choose someone who hasn't participated yet.
- At a bus station, church social hour, or other large-group situation, hand each child a card with a description of a person and what that person is thinking or feeling. When the children enter the group, they become that person. No one else knows what identity they have been given. This activity could be used with a specific situation from the curriculum.
- Perform choral readings. The leader writes out a poem or reading with parts for all the children. Repeat certain words for emphasis, echo others, or have two children or all children read the same phrase. Read the piece through once or twice to practice.
- Any story from the curriculum can be changed into a small play for the children to act out. Keep the story simple, and have parts for everyone.
- Read a story, then have the children become their favorite part of the story. They could be the chair a person sat in, or a character from the story, or the proclamation that was read. Repeat the story asking the children to stop the story when their part comes, and have them jump into the story.
- When you read a story, have the children figure out the parts they liked the best or where they felt the same as the character. Have them tell the story their way.
A delegate is someone who communicates the ideas of, or acts on behalf of, an organization at a meeting or conference between organizations of the same level.
- A member of a House of Delegates, either at a national or constituent state level (as in several US states)
Delegate is the title of a person elected to the United States House of Representatives to serve the interests of an organized United States territory, at present only overseas or the District of Columbia, but historically in most cases in a portion of North America as precursor to one or more of the present states of the union. Delegates have powers similar to that of Representatives, including the right to vote in committee, but have no right to take part in the floor votes in which the full house actually decides whether the proposal is carried. See: Delegate (United States Congress).
A similar mandate is held in a few cases under the style Resident commissioner.
- Delegate is also the title given to individuals elected to the lower houses of the bicameral legislative bodies of the states of Maryland, Virginia and West Virginia (see House of Delegates).
- Members of other parliamentary assemblies, such as the Continental Congress or the New York State Constitutional Convention.
- Members of a body charged with writing or revising a foundational or other basic governmental document (such as a constitutional convention) are usually referred to as "delegates".
Pledged delegates are elected or chosen at the state or local level, with the understanding that they will support a particular candidate at the convention. Pledged delegates are, however, not actually bound to vote for that candidate; thus, candidates are allowed to periodically review the list of delegates and eliminate any of those they feel would not be supportive. Currently there are 3,253 pledged delegates.
Of the 4,047 total Democratic delegates, 794 are superdelegates, which are usually Democratic members of Congress, governors, former Presidents, and other party leaders. They are not required to indicate preference for a candidate.
The Democratic Party uses proportional representation to determine how many delegates each candidate is awarded in each state. For example, a candidate who wins 40% of a state's vote in the primary election will win 40% of that state's delegates. However, a candidate must win at least 15% of the primary vote in order to receive any delegates. There is no process to win superdelegates, since they can vote for whomever they please. A candidate needs to win a simple majority of total delegates to earn the Democratic nomination.
The Republican Party utilizes a similar system with slightly different terminology, employing pledged and unpledged delegates. Of the total 2,380 Republican delegates (2,286 in 2012), 1,719 are pledged delegates, who, as with the Democratic Party, are elected at the state or local level. To become the Republican Party nominee, the candidate must win a simple majority of 1,191 of the 2,380 total delegates at the Republican National Convention.
A majority of the unpledged delegates are elected much like the pledged delegates, and are likely to be committed to a specific candidate. Many of the other unpledged delegates automatically claim the delegate status either by virtue of their position as a party chair or national party committee person. This group is known as unpledged RNC member delegates.
The process by which delegates are awarded to a candidate varies from state to state. Many states use a winner-take-all system, where the popular vote determines the winning candidate for that state. However, beginning in 2012, many states instead use proportional representation. While the Republican National Committee does not require a 15% minimum threshold, individual state parties may impose such a threshold.
The unpledged RNC member delegates are free to vote for any candidate and are not bound by the electoral votes of their state. The majority of the unpledged delegates (those who are elected or chosen) are technically free to vote for any candidate. However, they are likely to be committed to one specifically.
- Apostolic delegate, one appointed by the Pope, notably as the diplomatic equivalent of an envoy extraordinary, or sometimes as papal governor
- Pontifical Delegate, one appointed to represent the Pope in various functions
Alternate Activity 3: Ethics GPS
Activity time: 10 minutes
Preparation for Activity
- Decide whether to do this activity in a full group or in smaller groups.
Description of Activity
Ask participants to imagine what an ethics GPS would be like and then to demonstrate how it would work.
Begin by asking whether participants are familiar with GPS, the Global Positioning System. (Be assured that many youth will have tried GPS in school and summer programs, if not at home or in family cars.) If participants are not familiar with it, explain that GPS uses satellites to track things on Earth: to show where they are, what speed they are traveling, and what direction they are moving. If you have GPS in your car, you tell it where you want to go by keying in an address or a telephone number or, in some cases, by speaking aloud to it. The system then tells you how to get to your destination. A voice gives instructions as you drive, saying things like this: "At the next intersection, turn left."
Continue by asking what an ethics GPS system would be like. Could it help people make the right choices and stay on a virtuous path?
Let the group propose its answers as a full group, if you like. Or, if the group is large, divide it into smaller groups and let each come up with answers to compare at the end of the activity.
Use a few questions like these to get things started: What sort of destination would people want to reach? Who would decide how people should move, and what they should do? What would the ethical decisions be based on?
Ask for volunteers from each small group or from the full group to act out the way their ideas would work. One youth might describe an ethical destination, say, reaching a decision about a specific problem, such as whether to steal bread to feed a hungry child. Another youth could give GPS directions in a computer-like voice. A script might begin like this:
DECISION-MAKER: How do I reach a decision?
GPS: Ask your parent for advice.
DECISION-MAKER: My parent is not home. (Or did not have an answer.) Where do I go next?
Here are some possibilities to offer if discussion is slow to begin: For a destination, some people might select heaven; others might select a happy life or a specific work goal. People who could decide how people should move might include judges, religious leaders, or teachers. The ethical decisions could be based on the law or the Bible or some other set of rules. | http://www.uua.org/re/tapestry/children/grace/session14/115595.shtml |
4.1875 | Colombia History
During colonial times present-day Colombia was part of the viceroyalty of Peru. However, due to its distance from Lima and the geographical isolation of Santa Fe (Bogotá), it remained relatively independent. In 1739, it became the Viceroyalty of New Granada with additional surrounding territory including Panama and Venezuela. Its only export was gold; otherwise it was a subsistence economy.
After Napoleon deposed the Spanish king in 1808, Colombia along with Venezuela established the independent country of the United Provinces of New Granada. This only lasted till 1816, when Spain once again took control, but under the military leadership of Simón Bolívar it was re-liberated in 1819 along with Panama and Ecuador, and renamed Gran Colombia. Francisco Santander, the very liberal vice-president, had direct control over Gran Colombia till 1828, when the mostly absent Bolívar named himself dictator after clashing with Santander over reforms and rebellions in Venezuela. By 1830, Gran Colombia fell apart, and Colombia along with Panama formed the Republic of New Granada. The Santander/Bolívar division remains important, as it germinated the now two-party system of Liberals and Conservatives.
The Republic of New Granada, which lasted from 1830-1858, began under the presidency of Santander, who watered down many of his earlier reforms but maintained a highly centralized form of government. The War of the Supremes of 1839 to 1841 was an attempt by the very dis-unified provinces to change the tightly controlled Republic to a federation. From 1853--with a constitutional reform--regional governments gained increasing independence, until the continued press for federalism led to the creation of the Granadine Confederation in 1858. This lack of national unity--exacerbated by their geographical isolation--continually played a large role in the internal violence that has plagued Colombia throughout its existence. In fact by 1860, the country was again in a state of civil war, with the Liberals triumphing in 1862 and then forming the United States of Colombia in 1863, where each state basically ruled itself and the central government had almost no power. Radical liberal reforms (including allowing civil marriages, divorce, and non-religious educational institutions) begun during the Republic escalated, causing huge altercations with the Catholic church.
Conservatives with the support of the Catholic church gained control in 1887, renamed the country the Republic of Colombia, and started rescinding the liberal reforms. This complicated family organization as previous divorces and remarriages were no longer legally recognized. Liberals, who were denied any participation in government, rebelled in 1899 and started the Thousand Days’ War, a very bloody war which ended with little benefit to anyone in 1902. Conservatives, albeit with greater power sharing with the Liberals, remained in power till 1930.
The power change in 1930 to a Liberal president resulted in outbreaks of fighting in many remote areas which foreshadowed later violence to come. With the assassination of the popular Liberal presidential candidate Jorge Eliécer Gaitán in 1948, rioting in Bogotá spread to outlying areas in what came to be known as La Violencia. The brutal and barbaric violence led to the formation of many guerrilla bands, including Communist factions, which remained even after the end of La Violencia in the early 1960s. A mild but unpopular dictatorship (formed by a military coup in an attempt to help control the violence) was replaced in 1958 by the National Front, an equal power-sharing coalition between the Liberal and Conservative Parties that remained in place till 1978. While this did serve to end the violence, this exclusion and alienation of other parties led to the acceptance that violence was the only way to change the system. Rapid urbanization during this time also occurred as rural people moved to escape the fighting.
Many radical, violent guerrilla organizations formed in Colombia during the 1970s and 1980s as a result. Also, drug cartels started developing enormous power in Cali and Medellín. Attempts to mediate peace failed, and urban terrorism, political violence, corruption, and kidnapping regularly occurred. In 1991 a new constitution was ratified with hopes that it would help. But the 1990s proved to be the most violent since La Violencia. Efforts of eradication and mediation met with little success, as the underlying causes of economic inequity and poverty remained substantial. Mass emigration, partially due to the violence and partially to economic reasons, peaked in 2000, in what became known as the Colombian diaspora. During the 2000s, some headway was made, but criminal groups still have a strong presence in roughly one-third of Colombia.
Despite all the conflict, Colombia remains unique among Latin American countries in that it has had one of the most stable governments of all. While coups and dictatorships arose, they were rare, short-lived and moderate. Also, while economic growth was never extraordinary, it always managed to miss the extreme ups and downs of its neighbors, and remained fairly positive. Colombia has always had, and continues to have, one of the strongest Catholic church presences, and Bogotá earned the nickname of “The Athens of South America” due to its academics.
All of the internal conflicts in Colombia affected the people and the records. Churches and records have been destroyed, and population displacement occurred. Preservation of documents in a national archives only began in 1991, as government attention and resources were and are absorbed by the conflicts.
- English wiki entry on Colombia's history
- Spanish wiki entry on Colombia's history
- Colombia - History
- "Historia de Colombia para la Enseñanza Secundaria," a Google eBook, published in 1920
- "The Republic of Colombia," a Google eBook, published in 1906
- "Colombia. Comprising its Geography, History, and Topography," a Google eBook, published in 1833
- "Historia de la República de Colombia," a Google eBook, published in 1827
| https://familysearch.org/learn/wiki/en/Colombia_History |
4.1875 | 5 Written questions
5 Matching questions
- linguistic relativity hypothesis
- working backwards
- a The order of words in a language.
- b Eliminates false starts and can only be used when the end goal is clearly specified.
- c Meaningful units of language that make up words.
- d Cognitive strategies used as shortcuts to solve complex mental tasks; they do not guarantee a correct solution.
- e Language determines thought.
5 Multiple choice questions
- Inability to perceive a new use for an object associated with a different purpose.
- Problem-solving procedures or formulas that guarantee a correct outcome if correctly applied.
- Underusing a word.
- A knowledge cluster or general framework that provides expectations about topics, events, objects, people, and situations in one's life.
- Tendency, after learning about an event, to believe that one could have predicted the event in advance.
5 True/False questions
language → The order of words in a language.
natural concepts → Concepts that represent objects and events.
concepts → Mental representations of categories of items or ideas, based on experience.
script → A cluster of knowledge about sequences of events and actions expected to occur in particular settings.
overextension → Underusing a word. | https://quizlet.com/9458393/test |
4.1875 | Reading schematics can be difficult for beginners since there are many symbols. This guide is intended to help people with any background in electronics (including none) become able to read and understand an electronic circuit schematic diagram. My advice is to start out with small simple schematics, eventually progressing to more complicated schematics as you become more comfortable with the symbology.
To read a schematic, you require no understanding of how a circuit actually works, although it might help with troubleshooting. However, you must be able to recognize what component each symbol represents and how to connect them.
In a schematic, all components should be clearly labeled with a part number or size. For resistors, the size in Ohms should be labeled, while capacitors should have the size in Farads labeled. If a component is not marked, it may have been missed by the drawer of the schematic or it may not matter what size resistor is put in place.
Wires in schematics are represented by lines. Connections between wires are represented by dots on the point where the lines intersect. If two wires cross, but one has a curve over the intersection, it means that the two wires aren't connected and only cross to make drawing the schematic easier.
Switches are usually classified into two types: momentary and toggle switches. A toggle switch is a line with part of it removed and pointing outwards. A momentary switch is two broken lines with a third parallel line to the side of it, representing a push button.
Battery cells are represented by two parallel lines, with one shorter than the other. In some schematics you will see only one battery cell, while others will put more than one in a row. The number of cells is usually arbitrary since the voltage is clearly marked. It should also be noted that the longer line represents the positive terminal of the battery.
Resistors are either represented by 3-4 triangular waves or a rectangular block. A potentiometer or variable resistor is the same, only with an arrow pointing perpendicular to the standard resistor.
Unpolarized capacitors are represented by two lines of equal length perpendicular to the wires that enter them. Polarized capacitors are either the same as an unpolarized capacitor with a positive polarity indicator or one of the lines may be replaced by a curved line, which represents the negative polarity.
Inductors are represented by numerous semicircles in a row, representing turns of wire. If there are two lines to the side of the inductor, it indicates that it is wrapped around a ferrous material, such as an iron bar. Transformers are two or more inductors next to the same parallel lines. With transformers, the number of loops can either be labeled or represented by the number of "loops" in each inductor component.
Transistors are represented by a line perpendicular to the wire, with two additional lines coming out of the opposing side of the line. Often, one of the lines has an arrow to indicate that it is the collector or emitter. In many cases, the transistor is also encircled.
Diodes are represented by a line perpendicular to the wire with a small arrow indicating the polarity. Light emitting diodes (LEDs) are the same as diodes, only with two or more arrows coming off, indicating light. LED symbols are often encircled to further differentiate them from standard diodes.
DC voltage supplies are represented by circles with positive and negative markings to indicate polarity. DC current sources are represented by circles with an arrow representing the direction of current flow. AC power supplies are represented by a circle containing a sine, square, or sawtooth wave.
Earth grounds are represented by a series of three lines perpendicular to the wire with decreasing size.
An integrated circuit is usually represented by a triangle with markings indicating which wires connect to which pins. In some cases, the entire integrated circuit will be drawn in as a rectangle with pins labeled. | http://www.freeinfosociety.com/article.php?id=5 |
4.375 | The Bill of Rights
First 10 Amendments of the US Constitution
Introduced by James Madison and First US Congress in 1789
Limits the power of the federal government of the US, protecting all citizens, residents and visitors on US territory. Protects:
Freedom of speech, religion
The right to keep and bear arms
Freedom of assembly, petition
Prohibits unreasonable search and seizure, cruel and unusual punishment, and compelled self-incrimination
The Constitution – Generally
1. Separation of Powers: separation at the national level that creates checks and balances which are designed to prevent any one branch from becoming too powerful. 2. Federalism: Simultaneous federal/national and state/local governments; 2 levels of sovereignty operating at the same time over the people (viable national government that can behave effectively for all of the people, yet the benefits of diversity and decentralization).
In looking at any question regarding government action, consider the following: 1. What part of government gets to take what actions?
2. Is the government branch taking the action constitutionally permitted to do so?
Functions of the Constitution:
Describes the responsibilities of the centralized government
Enumerates powers to be exercised
Protects the rights of the individual
(1) Separation of Powers—creates a national government; separates power among 3 branches—the division of power was designed to create a system of checks and balances (the Constitution as BLUEPRINT).
(2) Federalism—divides power between the federal and state governments: the Supremacy Clause sets up a hierarchical relationship b/w the federal government and the states—state and local laws are void if they conflict with federal law. 10th Amendment: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” Federalism limits the ability of states to impose burdens on each other (ex: states can’t regulate/tax commerce in a manner that places an undue burden on interstate commerce).
(3) Protects individual liberties
The Constitution’s protections of individual liberties apply ONLY to the government (private conduct generally doesn’t have to comply with the Constitution—except the 13th Amendment, which prohibits slavery, applies directly to private behavior). The due process clause of the 14th Amendment made the Bill of Rights apply to the states.
Methods for Changing the Constitution:
Amendment: requires 2/3 vote of Congress to propose the amendment, becomes effective when ratified by 3/4 of the states. (all 27 amendments of the Constitution were done this way) 3 major types: amendments that overrule specific Supreme Court decisions, amendments to correct problems in the original Constitution, and amendments to reflect changes in social attitudes. Other type: amendments to expand/change electoral process.
Constitutional Convention: 2/3 of the states could call for Congress to convene a constitutional convention and any amendments proposed would then have to get ¾ of the states to agree in order for it to be ratified (this method has never been used!)
Article I: Legislative Branch
§ 1: grants Congress legislative powers
§ 8: lists the powers of Congress
Clause 1: Spending Clause
Clause 3: Commerce Clause
Clause 18: Necessary and Proper Clause
§ 9: limits the power of Congress
§ 10: limits the power of the States (no State shall enter into or pass any law impairing the obligations of contracts)
Article II: Executive Branch and the President
§ 1: grants President executive power
§ 2: lists the President’s powers
Clause 1: identifies President as the Commander in Chief
Clause 2: power to make treaties with the consent of the Senate
Article III: Judicial Branch
§ 1: judicial power vests in the Supreme Court and in lower courts... | http://www.studymode.com/course-notes/Constitutional-Law-Outline-45307162.html |
4 | New Mexico Emigration and Immigration
The earliest non-Indian settlers of New Mexico were the 130 Hispanic families who came into the upper Rio Grande Valley in 1598. At the time of the Pueblo revolt of 1680, the New Mexico Spanish population was about 2,500. By 1817, just before Mexican independence, the Spanish population of New Mexico had reached 27,000.
After the United States took control in 1848, immigrants from Mexico settled in the north central part of the state. In the 1900s there has been a heavy Hispanic emigration to other states, especially California.
The influx of Anglo-Americans first began about 1850, when the Santa Fe Trail was used by many on their way to the California gold fields. The eastern third of New Mexico was settled after the Civil War by Protestants from Texas. The southwestern corner attracted miners from other states after the coming of the railroads in the 1880s. Colorado ranchers and Mormon colonists (after 1876) settled the San Juan Valley in the northwest corner of the state.
There has been no port of entry common to settlers of New Mexico. For information on passenger lists, see the United States Research Outline. The first laws restricting immigration across the Mexican border were enacted in 1903. Records of Mexican border crossings from about 1903 to the mid-1900s are located at the National Archives.
Records of a few ethnic groups such as Italians and Hispanic-Americans are listed in the Family History Library Catalog under NEW MEXICO - MINORITIES. Many records of American Indians are listed under the same heading. Also see Indians of New Mexico
Mexican Border Crossing Records
Numerous Mexicans came to New Mexico in the late 19th and early 20th century. Records of 20th century Mexican border crossings are available at the National Archives and Family History Library. These include:
- Columbus, New Mexico, alphabetical manifests 1917-1954
- see also Texas: El Paso, Fabens, Fort Hancock, Ysleta | https://familysearch.org/learn/wiki/en/index.php?title=New_Mexico_Emigration_and_Immigration&oldid=79509 |
4.21875 | Multiplication Teacher Resources
Find Multiplication educational ideas and activities
Showing 1 - 20 of 10,075 resources
Patterns in the Multiplication Table
Explore patterns in the multiplication table in order to deepen your third graders' understanding of this essential skill. Implement this activity as a whole-class lesson plan, allowing young scholars to work in pairs or small groups to...
3rd - 4th Math CCSS: Designed
I Have...Who Has...Multiplication Game
Get the whole class involved in practicing their multiplication facts with this fun collaborative activity. With each child given a card containing both a product and an unrelated multiplication sentence, the activity begins as the child...
3rd - 6th Math CCSS: Adaptable
Numbers in a Multiplication Table
Identifying patterns is a crucial skill for all mathematicians, young and old. Explore the multiplication table with your class, using patterns and symmetry to teach about square numbers, prime numbers, and the commutative and identity...
3rd - 5th Math CCSS: Designed
Catch the Monkeys Multiplication Game
Young mathematicians master their multiplication facts with the help of this fun math game. As they roll a die and move around the included game board, children must identify the missing factor for each multiplication sentence they land...
3rd - 4th Math CCSS: Adaptable
Number & Operations: Multi-Digit Multiplication
A set of 14 lessons on multiplication would make a great learning experience for your fourth grade learners. After completing a pre-assessment, kids work through lessons that focus on multiples of 10, double-digit multiplication, and...
4th Math CCSS: Adaptable
Understand Multiplication Problems: Matching Equations to Real-World Examples
Strengthen the problem solving skills of young mathematicians by teaching them to identify multiplication word problems with the fourth video of this series. After introducing two guiding questions to ask when solving story problems, the...
4 mins 2nd - 4th Math CCSS: Designed | http://www.lessonplanet.com/lesson-plans/multiplication |
4.1875 | An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave. Oscillators convert direct current (DC) from a power supply to an alternating current signal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast by radio and television transmitters, clock signals that regulate computers and quartz clocks, and the sounds produced by electronic beepers and video games.
Oscillators are often characterized by the frequency of their output signal:
- A low-frequency oscillator (LFO) is an electronic oscillator that generates a frequency below ≈20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
- An audio oscillator produces frequencies in the audio range, about 16 Hz to 20 kHz.
- An RF oscillator produces signals in the radio frequency (RF) range of about 100 kHz to 100 GHz.
Oscillators designed to produce a high-power AC output from a DC supply are usually called inverters.
Harmonic oscillator
The most common form of linear oscillator is an electronic amplifier such as a transistor or op amp connected in a feedback loop with its output fed back into its input through a frequency-selective electronic filter to provide positive feedback. When the power supply to the amplifier is first switched on, electronic noise in the circuit provides a non-zero signal to get oscillations started. The noise travels around the loop and is amplified and filtered until very quickly it converges on a sine wave at a single frequency.
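This loop condition can be stated compactly. The following is the standard textbook form of the Barkhausen criterion (derived by Barkhausen in 1921, as noted in the history below), with A the amplifier gain and β the transfer function of the feedback filter; the symbols are conventional rather than taken from this article:

```latex
% Barkhausen criterion: with amplifier gain A and feedback-filter
% transfer function \beta(j\omega), steady oscillation occurs at the
% frequency \omega_0 where the loop gain has unit magnitude and zero
% net phase shift:
\[
  \left| A\,\beta(j\omega_0) \right| = 1,
  \qquad
  \angle\, A\,\beta(j\omega_0) = 2\pi n, \quad n = 0, 1, 2, \dots
\]
% For oscillations to build up from noise, the small-signal loop gain
% must initially exceed unity; amplifier nonlinearity then limits the
% amplitude at the steady state.
```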
- In an RC oscillator circuit, the filter is a network of resistors and capacitors. RC oscillators are mostly used to generate lower frequencies, for example in the audio range. Common types of RC oscillator circuits are the phase shift oscillator and the Wien bridge oscillator.
- In an LC oscillator circuit, the filter is a tuned circuit (often called a tank circuit; the tuned circuit is a resonator) consisting of an inductor (L) and capacitor (C) connected together. Charge flows back and forth between the capacitor's plates through the inductor, so the tuned circuit can store electrical energy oscillating at its resonant frequency. There are small losses in the tank circuit, but the amplifier compensates for those losses and supplies the power for the output signal. LC oscillators are often used at radio frequencies, when a tunable frequency source is necessary, such as in signal generators, tunable radio transmitters and the local oscillators in radio receivers. Typical LC oscillator circuits are the Hartley, Colpitts and Clapp circuits; a numeric sketch of the tank's resonant frequency follows this list.
- In a crystal oscillator circuit the filter is a piezoelectric crystal (commonly a quartz crystal). The crystal mechanically vibrates as a resonator, and its frequency of vibration determines the oscillation frequency. Crystals have very high Q-factor and also better temperature stability than tuned circuits, so crystal oscillators have much better frequency stability than LC or RC oscillators. Crystal oscillators are the most common type of linear oscillator, used to stabilize the frequency of most radio transmitters, and to generate the clock signal in computers and quartz clocks. Crystal oscillators often use the same circuits as LC oscillators, with the crystal replacing the tuned circuit; the Pierce oscillator circuit is also commonly used. Quartz crystals are generally limited to frequencies of 30 MHz or below. Other types of resonator, dielectric resonators and surface acoustic wave (SAW) devices, are used to control higher frequency oscillators, up into the microwave range. For example, SAW oscillators are used to generate the radio signal in cell phones.
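As a minimal numeric sketch of the LC tank relationship above: an ideal tank resonates at f₀ = 1/(2π√(LC)). The component values in this Python snippet are arbitrary illustrative choices, not taken from any particular circuit.

```python
import math

def lc_resonant_frequency(L, C):
    """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) of an ideal LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: 2.0 uH with 220 pF resonates in the shortwave range.
L, C = 2.0e-6, 220e-12
print(f"f0 = {lc_resonant_frequency(L, C) / 1e6:.3f} MHz")  # ~7.59 MHz
```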
Negative resistance oscillator
In addition to the feedback oscillators described above, which use two-port amplifying active elements such as transistors and op amps, linear oscillators can also be built using one-port (two terminal) devices with negative resistance, such as magnetron tubes, tunnel diodes, lambda diodes and Gunn diodes. Negative resistance oscillators are usually used at high frequencies in the microwave range and above, since at these frequencies feedback oscillators perform poorly due to excessive phase shift in the feedback path.
In negative resistance oscillators, a resonant circuit, such as an LC circuit, crystal, or cavity resonator, is connected across a device with negative differential resistance, and a DC bias voltage is applied to supply energy. A resonant circuit by itself is "almost" an oscillator; it can store energy in the form of electronic oscillations if excited, but because it has electrical resistance and other losses the oscillations are damped and decay to zero. The negative resistance of the active device cancels the (positive) internal loss resistance in the resonator, in effect creating a resonator with no damping, which generates spontaneous continuous oscillations at its resonant frequency.
The negative resistance oscillator model is not limited to one-port devices like diodes; feedback oscillator circuits with two-port amplifying devices such as transistors and tubes also have negative resistance. At high frequencies, transistors and FETs do not need a feedback loop, but with certain loads applied to one port can become unstable at the other port and show negative resistance due to internal feedback, causing them to oscillate. So high frequency oscillators in general are designed using negative resistance techniques.
Some of the many harmonic oscillator circuits are listed below:
| Active device | Maximum frequency |
| --- | --- |
| Triode vacuum tube | ~1 GHz |
| Bipolar transistor (BJT) | ~20 GHz |
| Heterojunction Bipolar Transistor (HBT) | ~50 GHz |
| Metal Semiconductor Field Effect Transistor (MESFET) | ~100 GHz |
| Gunn diode, fundamental mode | ~100 GHz |
| Magnetron tube | ~100 GHz |
| High Electron Mobility Transistor (HEMT) | ~200 GHz |
| Klystron tube | ~200 GHz |
| Gunn diode, harmonic mode | ~200 GHz |
| IMPATT diode | ~300 GHz |
| Gyrotron tube | ~300 GHz |
- Armstrong oscillator
- Clapp oscillator
- Colpitts oscillator
- Cross-coupled oscillator
- Dynatron oscillator
- Hartley oscillator
- Meissner oscillator
- Opto-electronic oscillator
- Pierce oscillator
- Phase-shift oscillator
- Robinson oscillator
- Tri-tet oscillator
- Vackář oscillator
- Wien bridge oscillator
Relaxation oscillator
A nonlinear or relaxation oscillator produces a non-sinusoidal output, such as a square, sawtooth or triangle wave. It consists of an energy-storing element (a capacitor or, more rarely, an inductor) and a nonlinear switching device (a latch, Schmitt trigger, or negative resistance element) connected in a feedback loop. The switching device periodically charges and discharges the energy stored in the storage element, thus causing abrupt changes in the output waveform.
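As a rough sketch of the charge/discharge cycle just described, the snippet below models a comparator-style relaxation oscillator in which an RC network is switched between two Schmitt-trigger thresholds. The supply voltage, thresholds, and component values are illustrative assumptions, not taken from any particular circuit.

```python
import math

# Sketch of a comparator-based RC relaxation oscillator: the capacitor
# charges toward the supply until the upper threshold trips the switch,
# then discharges toward ground until the lower threshold trips it back.
R, C = 10e3, 100e-9          # 10 kOhm, 100 nF (illustrative)
V_supply = 5.0
v_lo, v_hi = 1.0, 4.0        # Schmitt-trigger thresholds (illustrative)

# Exponential RC charging/discharging gives closed-form segment times:
t_charge = R * C * math.log((V_supply - v_lo) / (V_supply - v_hi))
t_discharge = R * C * math.log(v_hi / v_lo)
period = t_charge + t_discharge
print(f"period = {period*1e3:.3f} ms, frequency = {1/period:.1f} Hz")
```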
Square-wave relaxation oscillators are used to provide the clock signal for sequential logic circuits such as timers and counters, although crystal oscillators are often preferred for their greater stability. Triangle wave or sawtooth oscillators are used in the timebase circuits that generate the horizontal deflection signals for cathode ray tubes in analogue oscilloscopes and television sets. They are also used in voltage controlled oscillators (VCOs), inverters and switching power supplies, dual slope analog to digital converters (ADCs), and in function generators to generate square and triangle waves for testing equipment. In general, relaxation oscillators are used at lower frequencies and have poorer frequency stability than linear oscillators.
Ring oscillators are built of a ring of active delay stages. Generally the ring has an odd number of inverting stages, so that there is no single stable state for the internal ring voltages. Instead, a single transition propagates endlessly around the ring.
Some of the more common relaxation oscillator circuits are listed below:
- Multivibrator
- Ring oscillator
Voltage-controlled oscillator (VCO)
An oscillator can be designed so that the oscillation frequency can be varied over some range by an input voltage or current. These voltage controlled oscillators are widely used in phase-locked loops, in which the oscillator's frequency can be locked to the frequency of another oscillator. These are ubiquitous in modern communications circuits, used in filters, modulators, demodulators, and forming the basis of frequency synthesizer circuits which are used to tune radios and televisions.
Radio frequency VCOs are usually made by adding a varactor diode to the tuned circuit or resonator in an oscillator circuit. Changing the DC voltage across the varactor changes its capacitance, which changes the resonant frequency of the tuned circuit. Voltage controlled relaxation oscillators can be constructed by charging and discharging the energy storage capacitor with a voltage controlled current source. Increasing the input voltage increases the rate of charging the capacitor, decreasing the time between switching events.
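A minimal sketch of varactor tuning follows, combining the LC resonance formula with a textbook abrupt-junction capacitance model C(V) = C₀/√(1 + V/V_bi); the model choice and all component values here are illustrative assumptions.

```python
import math

def varactor_capacitance(v_tune, c0=100e-12, v_bi=0.7):
    """Textbook abrupt-junction model: C = C0 / sqrt(1 + V/Vbi).
    c0 and v_bi are illustrative values, not from a specific part."""
    return c0 / math.sqrt(1.0 + v_tune / v_bi)

L = 1.0e-6  # tank inductance, illustrative
for v in (0.0, 1.0, 2.0, 4.0):
    c = varactor_capacitance(v)
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * c))
    print(f"Vtune = {v:.1f} V -> C = {c*1e12:5.1f} pF, f0 = {f0/1e6:6.2f} MHz")
# Raising the tuning voltage shrinks the capacitance and raises f0.
```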
One of the first electronic oscillators was an oscillating arc built by Elihu Thomson in 1892. Thomson's oscillator placed an LC tuned circuit in parallel with the arc, used metal electrodes, and included a magnetic blowout. Independently in the same year, George Francis Fitzgerald realized that if the damping resistance in a resonant circuit could be made zero or negative, it would produce oscillations, and tried unsuccessfully to build a negative resistance oscillator with a dynamo, what would now be called a parametric oscillator. The arc oscillator was rediscovered and popularized by William Duddell in 1900. Electric arcs were used to provide illumination in the 19th century, but the arc current was unstable and they often produced hissing, humming or howling sounds. Duddell, a student at London Technical College, investigated this effect. He attached an LC circuit to the electrodes of an arc lamp, and the negative resistance of the arc excited audio frequency oscillations in the tuned circuit at its resonant frequency. Some of the energy was radiated as sound waves by the arc, producing a musical tone. To demonstrate his oscillator before the London Institute of Electrical Engineers, Duddell wired a series of tuned circuits to the arc and played a tune, "God Save The Queen". Duddell wasn't able to generate frequencies above the audio range with his "singing arc", but in 1902 Danish physicists Valdemar Poulsen and P. O. Pederson were able to increase the frequency produced into the radio range, inventing the Poulsen arc radio transmitter, the first continuous wave radio transmitter, which was used through the 1920s.
The vacuum tube feedback oscillator was invented around 1912, when it was discovered that feedback ("regeneration") in the recently invented audion vacuum tube could produce oscillations. At least six researchers independently made this discovery and can be said to have some role in the invention. In the summer of 1912, Edwin Armstrong observed oscillations in audion radio receiver circuits and went on to use positive feedback in his invention of the regenerative receiver. German Alexander Meissner independently discovered positive feedback and invented oscillators in March 1913. Irving Langmuir at General Electric observed feedback in 1913. Fritz Lowenstein may have preceded the others with a crude oscillator in late 1911. In Britain, H. J. Round patented amplifying and oscillating circuits in 1913. In August 1912, Lee De Forest, the inventor of the audion, had also observed oscillations in his amplifiers, but he didn't understand its significance and tried to eliminate it until he read Armstrong's patents in 1914, which he promptly challenged. Armstrong and De Forest fought a protracted legal battle over the rights to the "regenerative" oscillator circuit which has been called "the most complicated patent litigation in the history of radio". De Forest ultimately won before the Supreme Court in 1934 on technical grounds, but most sources regard Armstrong's claim as the stronger one.
The first and most widely used relaxation oscillator circuit, the astable multivibrator, was invented in 1917 by French engineers Henri Abraham and Eugene Bloch. They called their cross-coupled, dual vacuum tube circuit a multivibrateur because the square-wave signal it produced was rich in harmonics, compared to the sinusoidal signal of other vacuum tube oscillators.
Vacuum tube feedback oscillators became the basis of radio transmission by 1920. However the triode vacuum tube oscillator performed poorly above 300 MHz because of interelectrode capacitance. To reach higher frequencies, new "transit time" (velocity modulation) vacuum tubes were developed, in which electrons traveled in "bunches" through the tube. The first of these was the Barkhausen-Kurz oscillator (1920), the first tube to produce power in the UHF range. The most important and widely used were the klystron (R. and S. Varian, 1937) and the cavity magnetron (J. Randall and H. Boot, 1940).
Mathematical conditions for feedback oscillations, now called the Barkhausen criterion, were derived by Heinrich Georg Barkhausen in 1921. The first analysis of a nonlinear electronic oscillator model, the Van der Pol oscillator, was done by Balthasar van der Pol in 1927. He showed that the stability of the oscillations (limit cycles) in actual oscillators was due to the nonlinearity of the amplifying device. He originated the term "relaxation oscillation" and was first to distinguish between linear and relaxation oscillators. Further advances in mathematical analysis of oscillation were made by Hendrik Wade Bode and Harry Nyquist in the 1930s. In 1969 K. Kurokawa derived necessary and sufficient conditions for oscillation in negative resistance circuits, which form the basis of modern microwave oscillator design.
- Snelgrove, Martin (2011). "Oscillator". McGraw-Hill Encyclopedia of Science and Technology, 10th Ed., Science Access online service. McGraw-Hill. Retrieved March 1, 2012.
- Chattopadhyay, D. (2006). Electronics (fundamentals And Applications). New Age International. pp. 224–225. ISBN 81-224-1780-9.
- Garg, Rakesh Kumar; Ashish Dixit; Pavan Yadav (2008). Basic Electronics. Firewall Media. p. 280. ISBN 8131803023.
- Kung, Fabian Wai Lee (2009). "Lesson 9: Oscillator Design" (PDF). RF/Microwave Circuit Design. Prof. Kung's website, Multimedia University. Retrieved October 17, 2012. Sec. 3, Negative Resistance Oscillators, pp. 9–10, 14.
- Räisänen, Antti V.; Arto Lehto (2003). Radio Engineering for Wireless Communication and Sensor Applications. USA: Artech House. pp. 180–182. ISBN 1580535429.
- Ellinger, Frank (2008). Radio Frequency Integrated Circuits and Technologies, 2nd Ed. USA: Springer. pp. 391–394. ISBN 3540693246.
- Maas, Stephen A. (2003). Nonlinear Microwave and RF Circuits, 2nd Ed. Artech House. pp. 542–544. ISBN 1580534848.
- Morse 1925, p. 23
- US 500630, Thomson, Elihu, "Method of and Means for Producing Alternating Currents", published 18 July 1892, issued 4 July 1893
- G. Fitzgerald, On the Driving of Electromagnetic Vibrations by Electromagnetic and Electrostatic Engines, read at the January 22, 1892 meeting of the Physical Society of London, in Larmor, Joseph, ed. (1902). The Scientific Writings of the late George Francis Fitzgerald. London: Longmans, Green and Co. pp. 277–281.
- Hong, Sungook (2001). Wireless: From Marconi's Black-Box to the Audion. MIT Press. ISBN 0262082985., pp. 161–165
- Morse 1925, pp. 80–81
- GB 190021629, Duddell, William du Bois, "Improvements in and connected with Means for the Conversion of Electrical Energy, Derived from a Source of Direct Current, into Varying or Alternating Currents", published 29 Nov 1900, issued 23 Nov 1901
- Morse 1925, p. 31
- GB 190315599, Poulsen, "Improvements relating to the Production of Alternating Electric Currents", issued 14 July 1904
- US 789449, Poulsen, Valdemar, "Method of Producing Alternating Currents with a High Number of Vibrations", issued 9 May 1905
- Hempstead, Colin; William E. Worthington (2005). Encyclopedia of 20th-Century Technology 2. Taylor & Francis. p. 648. ISBN 1579584640.
- Hong 2001, p. 156
- Fleming, John Ambrose (1919). The Thermionic Valve and its Developments in Radiotelegraphy and Telephony. London: The Wireless Press. pp. 148–155.
- Hong, Sungook (2003). "A history of the regeneration circuit: From invention to patent litigation" (PDF). IEEE. Retrieved August 29, 2012., pp. 9–10
- Armstrong, Edwin H. (September 1915). "Some recent developments in the Audion receiver" (PDF). Proc. of the IRE (New York: Institute of Radio Engineers) 3 (9): 215–247. doi:10.1109/jrproc.1915.216677. Retrieved August 29, 2012.
- Hong 2003, p. 13
- Hong 2003, p. 5
- Hong 2003, pp. 6–7
- Hijiya, James A. (1992). Lee De Forest and the Fatherhood of Radio. Lehigh University Press. pp. 89–90. ISBN 0934223238.
- Hong 2003, p. 14
- Nahin, Paul J. (2001). The Science of Radio: With Matlab and Electronics Workbench Demonstration, 2nd Ed. Springer. p. 280. ISBN 0387951504.
- Hong 2001, pp. 181–189
- Hong 2003, p. 2
- Abraham, H.; E. Bloch (1919). "Measurement of period of high frequency oscillations". Comptes Rendus (French Academy of Sciences) 168: 1105.
- Glazebrook, Richard (1922). A Dictionary of Applied Physics, Vol. 2: Electricity. London: Macmillan and Co. Ltd. pp. 633–634.
- Calvert, James B. (2002). "The Eccles-Jordan Circuit and Multivibrators". Dr. J. B. Calvert website, Univ. of Denver. Retrieved May 15, 2013.
- Van der Pol, Balthazar (1927). "On relaxation-oscillations". The London, Edinburgh and Dublin Philosophical Magazine 2 (7): 978–992. doi:10.1080/14786442608564127.
- Nyquist, H. (January 1932). "Regeneration Theory" (PDF). Bell System Tech. J. (USA: American Tel. & Tel.) 11 (1): 126–147. doi:10.1002/j.1538-7305.1932.tb02344.x. Retrieved December 5, 2012, from the Alcatel-Lucent website.
- Kurokawa, K. (July 1969). "Some Basic Characteristics of Broadband Negative Resistance Oscillator Circuits" (PDF). Bell System Tech. J. (USA: American Tel. & Tel.) 48 (6): 1937–1955. doi:10.1002/j.1538-7305.1969.tb01158.x. Retrieved December 8, 2012. Eq. 10 is necessary condition for oscillation, eq. 12 is sufficient condition.
- Morse, A. H. (1925), Radio: Beam and Broadcast: Its story and patents, London: Ernest Benn. History of radio in 1925. Oscillator claims 1912; De Forest and Armstrong court case cf p. 45. Telephone hummer/oscillator by A. S. Hibbard in 1890 (carbon microphone has power gain); Larsen "used the same principle in the production of alternating current from a direct current source"; accidental development of vacuum tube oscillator; all at p. 86. Von Arco and Meissner first to recognize application to transmitter; Round for first transmitter; nobody patented triode transmitter at p. 87.
- Ulrich Rohde, Ajay Poddar, and Georg Bock, The Design of Modern Microwave Oscillators for Wireless Applications: Theory and Optimization, (543-pages) John Wiley & Sons, 2005, ISBN 0-471-72342-8.
- E. Rubiola, Phase Noise and Frequency Stability in Oscillators Cambridge University Press, 2008. ISBN 978-0-521-88677-2.
| https://en.wikipedia.org/wiki/Electronic_oscillator |
4 | The terms phenyl and phenol, along with benzene and benzyl, can confuse beginning organic chemistry students. Figure 1 shows what the four terms mean.
Figure 1: The top two structures in Figure 1 are molecules. The -ene suffix of benzene might indicate that it is similar to an alkene. The -ol suffix of phenol indicates that it has an -OH group.
The lower two structures in Figure 1 show groups. The line extending off without anything connected is the line that shows this is a group, which should be attached to something. For example, one might have phenyl chloride (C6H5Cl, also called chlorobenzene) or one might have benzyl chloride (C6H5CH2Cl). (The structures of these two compounds are shown below in Figure 2.) The phenyl group is based simply on benzene, with one H removed. The benzyl group is based on methylbenzene (toluene), with one H removed from the methyl group.
Figure 2 shows the molecules benzyl chloride and phenyl chloride; these are based on the groups discussed above.
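One compact way to make the distinction concrete is SMILES line notation, which encodes chemical structures as text. The snippet below lists standard SMILES strings for these structures; since phenyl and benzyl are groups rather than molecules, they are shown attached to chlorine as in Figure 2.

```python
# SMILES line notation for the structures discussed above. Benzene and
# phenol are complete molecules; the phenyl and benzyl groups are shown
# attached to chlorine, as in Figure 2.
structures = {
    "benzene": "c1ccccc1",                          # aromatic six-carbon ring
    "phenol": "c1ccccc1O",                          # ring with an -OH group
    "phenyl chloride (chlorobenzene)": "c1ccccc1Cl",  # phenyl group + Cl
    "benzyl chloride": "c1ccccc1CCl",               # ring-CH2-Cl
}
for name, smiles in structures.items():
    print(f"{name:34s} {smiles}")
```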
If you have the misfortune to come across the word benzol, be forewarned that this is an old German word for benzene. Similarly, toluol is a German word for toluene. Benzin is a German word for gasoline; it is related to the uncommon term benzine, used for some types of gasoline. | http://chemwiki.ucdavis.edu/Organic_Chemistry/Hydrocarbons/Arenes/Properties_of_Arenes/The_Phenyl_Group |
4.125 | by Chris Woodford. Last updated: January 23, 2016.
You're screaming through the sky, safely tucked up in the cockpit of a jet fighter, when there's a sudden loud bang and the engine judders to a halt. Well that's just great, isn't it? Here you are zooming along at maybe 2000 km/h (1200mph), several kilometers/miles above the ground and your plane has chosen this exact moment to break down! What do you do? Eject as soon as you possibly can, wait for the plane to fly clear, and then hit your parachute. With luck, you glide safely to the ground and live to fly another day. When it comes to saving lives, parachutes are among the simplest and most effective of inventions. How exactly do they work? Let's take a closer look!
Photo: Ordinary parachutes are dome-shaped and, with their dangling suspension lines, look a bit like jellyfish as they fall. Note the vent holes that allow air to escape: they help to prevent the parachute from rocking about as it falls and provide very basic steering (though nothing like as sophisticated as on ram-air parachutes). Photo by Chris Desmond courtesy of US Navy.
How does a parachute work in theory?
Photo: Sophisticated ram-air parachutes like this have a number of cells that inflate (fill with air) so they form a curved airfoil wing. They are much more steerable and controllable than dome-shaped parachutes. Photo by Shannon K. Cassidy courtesy of US Navy.
Throw a ball up in the air and, sooner or later, it always falls back to the ground. That's because Earth pulls everything toward it with a force called gravity. You've probably learned in school that the strength of Earth's gravity is roughly the same all over the world (it does vary a little bit, but not that much) and that if you drop a heavy stone and a light feather from the top of a skyscraper, gravity pulls them toward the ground at exactly the same rate.
If there were no air, the feather and the stone would hit the ground at the same time. In practice, the stone reaches the ground much faster, not because it weighs more but because the feather fans out and catches in the air as it falls. Air resistance (also called drag) slows it down.
Photo: Parachutes are made from strong lightweight nylon and have to be packed very carefully if they're to open correctly when they're deployed. Photo by Gary Ward courtesy of US Navy.
What causes air resistance?
Just because the air's invisible, doesn't mean it's not there. Earth's atmosphere is packed full of gas molecules, so if you want to move through air—by walking, in a car, in a plane, or dangling from a parachute—you have to push them out of the way. We only really notice this when we're moving at speed.
Air resistance is a bit like the way water pushes against your body when you're in a swimming pool—except that air is invisible! If you jump off a diving board or do a belly flop, the awkward shape of your body will create a lot of resistance and bring you rapidly to a halt when you crash into the water. But if you make a sharp pointed shape with your arms and dive in gracefully, your body will part the water cleanly and you'll continue to move quickly as you enter it. When you jump or belly flop, your body slows down quickly because the water can't get out of the way fast enough. When you dive, you part the water smoothly in front of you so your body can glide through it quickly. With parachutes, it's the slowing-down effect that we want.
If you fall from a plane without a parachute, your relatively compact body zooms through the air like a stone; open your parachute and you create more air resistance, drifting to the ground more slowly and safely—much more like a feather. Simply speaking, then, a parachute works by increasing your air resistance as you fall.
When a force pulls on something, it makes that object move more quickly, causing it to gain speed. In other words, it causes the object to accelerate. Like any other force, gravity makes falling objects accelerate—but only up to a point.
If you jump off a skyscraper, your body ought to speed up by 10 meters per second (32ft per second) every single second you're falling. We call that an acceleration of 10 meters per second per second (or 10 meters per second squared, for short, and write it like this: 10m/s/s or 10m/s2). If you were high enough off the ground, then after about a minute and a half (let's say 100 seconds), you'd theoretically be falling at about 1000 meters per second (3600km/h or 2200 mph), which is about as fast as the fastest jet fighters have ever flown!
In practice, that simply doesn't happen. After about five seconds, you reach a speed where the force of air resistance (pushing you upward) increases so much that it balances the force of gravity (pulling you downward). At that point, there is no net acceleration and you keep on falling at a steady speed called your terminal velocity. Unfortunately, the terminal velocity for a falling person (with arms stretched out in the classic freefall position) is about 55 meters per second (200km/h or 125 mph), which is still plenty fast enough to kill you—especially if you're falling from a plane!
Photo: Left: Freefall in theory: In this training exercise, the skydiver is practicing freefall by floating over a huge horizontally mounted air fan. The force of the air pushing upward is exactly equal to the diver's weight pulling him downward so he floats in mid-air. Photo by Gary L. Johnson courtesy of US Navy. Right: Freefall in practice: In reality, it's not the air that moves past you—you move through the air—but the physics is still the same: once you reach terminal velocity, the force of the air on your body pushing you upward exactly equals the force of gravity pulling you down. Photo by Ashley Myers courtesy of US Navy.
How much does a parachute slow you down?
Feathers fall more slowly than stones because their terminal velocity is lower. So another way of understanding how a parachute works is to realize that it dramatically lowers your terminal velocity by increasing your air resistance as you fall. It does that by opening out behind you and creating a large surface area of material with a huge amount of drag. Parachutes are designed to reduce your terminal velocity by about 90 percent so you hit the ground at a relatively low speed of maybe 5–6 meters per second (roughly 20 km/h or 12 mph)—ideally, so you can land on your feet and walk away unharmed.
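For readers who like to check the numbers, here is a minimal model of a fall with quadratic air drag, where drag grows with the square of speed until it balances gravity. The terminal velocities plugged in are the article's own ballpark figures (55 m/s in freefall, roughly 5.5 m/s under canopy), so the output is illustrative rather than exact.

```python
import math

# Quadratic-drag model of a falling body: m*dv/dt = m*g - k*v^2, whose
# closed-form solution is v(t) = v_t * tanh(g*t / v_t), where the
# terminal velocity is v_t = sqrt(m*g/k). The v_t figures below are the
# article's ballpark numbers, so everything here is illustrative.
g = 9.81  # m/s^2

def fall_speed(t, v_t):
    """Speed after t seconds of falling, given terminal velocity v_t."""
    return v_t * math.tanh(g * t / v_t)

for label, v_t in (("freefall", 55.0), ("open parachute", 5.5)):
    speeds = ", ".join(f"{fall_speed(t, v_t):5.1f}" for t in (1, 5, 15))
    print(f"{label:15s} v at t = 1, 5, 15 s: {speeds} m/s (terminal {v_t})")
```

Running it shows the open parachute reaching its low terminal velocity almost immediately, while the freefalling body keeps gaining speed for many seconds.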
How does a parachute work in practice?
Skydivers make parachuting look easy, but it's all a bit more tricky in practice! What you're trying to achieve is to get a large piece of super-strong material opening out above and behind you in a perfectly uniform way when you've just jumped from a plane screaming along maybe ten times faster than a race car! How can you possibly pull something safely behind you under those conditions?
Parachutes are actually three chutes in one, packed into a single backpack called the container. There's a main parachute, a reserve parachute (in case the main one fails), and a tiny little chute at the bottom of the container, called the pilot chute, that helps the main chute to open. Once you're clear of the plane, you trigger the pilot chute (either by pulling on a ripcord or simply by throwing the pilot chute into the air). It rapidly opens up behind you, creating enough force to tug the main chute from the container. The main chute has to be carefully packed so the ropes that connect it to your harness (known as suspension lines) open correctly and straighten out behind you. The main chute is designed to open in a delayed way so your body isn't braked and jerked too suddenly and sharply. That's safer and more comfortable for you and it also reduces the risk of the parachute ripping or tearing.
The force on a parachute is considerable, so it has to be made from really strong materials. Originally, parachutes were made from canvas or silk, but inexpensive, lightweight, synthetic materials such as nylon and Kevlar® (a chemical relative of nylon) are now generally used instead.
Parachutes were invented about a century ago, but they continue to evolve, as inventors devise ever-better ways to improve their safety and handling. Here's a more advanced 'chute, designed for the US Army in 2001 (and patented in 2003). It contains the same basic features as other chutes: a canopy (10, blue), a skirt underneath (12), and suspension lines (14) in four groups called risers (16), attached to a bridle (22), which supports the harness (26) and parachutist (P). But it also has two improved safety features to reduce the risk of the parachutist landing too fast and too hard. At the top, the parachute has a bridle with an extra loop of rope on either side and an electrical cutting mechanism to release it (pink, top, labeled 28). In the middle, it has what's known as a pneumatic muscle (bright green, 24). There's an altitude measuring device (gray, top, 34, 36, 44), which projects radar beams to the ground to measure your height and speed and figure out when the safety mechanisms need to be deployed.
How does it work? That's shown in the artwork on the right. If the wind blows you too fast horizontally, the appropriate electrical mechanism releases one of the extra side ropes, causing the parachute to tilt to the opposite side, so reducing your speed. When you near the ground, if you're going too fast, the pneumatic muscle shortens, pulling you much closer toward the canopy, and so reducing your speed.Artworks: Courtesy of US Patent and Trademark Office, from US Patent 6,575,408: Soft landing assembly for a parachute by Richard J. Benney and Glen J. Brown, assigned to the United States Of America (US Army), June 10, 2003. | http://www.explainthatstuff.com/how-parachutes-work.html |
4.03125 | Sometimes it's hard to know what to do first with a mathematical equation. The order of operations, sometimes called PEMDAS, is how we know what operation to do first so that we always get the right answer. When adding, subtracting, multiplying, or dividing numbers, if we didn't use the order of operations we would get different answers for the same equation.
A lot of times students come to Math class ready to do problems like this. What I see is students sit down and start tracking through: 8+8 is 16, then they go ahead and divide by 4, and then take away 6. So the student just keeps going, says okay, 16 divided by 4 is 4, take away 6, and gets the answer -2. Does that look right to you? See if you can think about that and decide whether you think that's a correct answer. I'm going to tell you that it is actually Mathematically incorrect. Every step the student did was right on its own (8+8 is 16, and 2 times itself is 4), but there is something really tricky about Math, and it's called the "Order of Operations". Before I tell you what the Order of Operations is, let's look at this one. This is the exact same problem, but what if a student came at it like this: they said okay, "8+8, I'll deal with that later. I'm going to start by doing, I don't know, 2 times itself is 4, and then take away 6." Maybe the student started out like this: first they did 2 times itself is 4, then they did 4 take away 6 and said "okay, that is going to be -2." So this person is doing all the right Math, right? Then they did 8+8 is 16, and 16 divided by -2 gave them their answer, -8. Is that okay? You guys, Math is like a total drag because of this: we wrote the same problem, and you're doing everything Mathematically correct, but you get 2 different answers; they're totally different. So the thing I want to talk about today is what is called the Order of Operations. Most people remember it using the acronym "PEMDAS", which stands for Please Excuse My Dear Aunt Sally; that's another way you could remember it. But each letter stands for a Mathematical thing: P stands for parenthesis, E is for exponents, M and D are multiply and divide, and A and S are add and subtract.
Before we go on, I'm going to show one other trick to this PEMDAS business, and that's these little arrows that help me, because when you come to the multiplying-and-dividing or the adding-and-subtracting step, you need to be really careful to move from left to right. You do parentheses first, then you do the exponents, then you do multiplying and dividing from left to right, then you do adding and subtracting from left to right. So why don't we visit the problem we started with, only now let's do it the correct way. First we want to do parentheses; there are none, so that's okay. The next thing we need to do is exponents, because after P comes E, and E stands for exponents: 2 squared, or 2 times itself, is equal to 4, so now we have 8 plus 8 divided by 4, take away 6. So we've done parentheses, we've done exponents; now I need to multiply and divide from left to right. So look for a multiply or divide symbol, and there it is right there: the first 8 stays the same, 8 divided by 4 is 2, and then take away 6.
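(A quick way to check all three readings of the problem is to type them into Python, which applies the same order of operations; exponents are written with **. This is just an illustration.)

```python
# Python follows the same order of operations, so it reproduces the
# worked example: exponent first, then division, then the +/- pass.
print(8 + 8 / 2**2 - 6)        # 4.0  (the correct answer)

# The two "wrong order" readings from the start of the lesson:
print((8 + 8) / 2**2 - 6)      # -2.0 (added first, then divided)
print((8 + 8) / (2**2 - 6))    # -8.0 (did 4 - 6 before dividing)
```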
My final step is to do the adding and subtracting from left to right. So I'll have 8 plus 2, which is 10, take away 6, which is 4; this is the correct answer. Both of those people got it wrong; they did the right Math, just in the wrong order. It's really tricky, and it's something that you're probably going to make mistakes on in your future, so if you can really focus on learning to do the Order of Operations correctly now, it'll help you so much in your Math future. Before you start doing your homework or looking at examples, I want you to think about one more thing, and that's how there are different kinds of parentheses. Sometimes you see a parenthesis like this, sometimes you see them like that, sometimes you see them like these little swirly things, sometimes you get absolute value marks. These are all different kinds of parentheses or groupings; you need to be careful that whatever kind of parenthesis grouping you have, you do that piece first before you move on to the exponents. So if you don't remember anything else from this video, I hope you guys remember PEMDAS, Please Excuse My Dear Aunt Sally, or: parentheses, exponents, multiply and divide from left to right, then add and subtract from left to right. | https://www.brightstorm.com/math/algebra/pre-algebra/order-of-operations/ |
4.03125 | In music, polyphony is a texture consisting of two or more simultaneous lines of independent melody, as opposed to a musical texture with just one voice, which is called monophony, and in contrast to a musical texture with one dominant melodic voice accompanied by chords, which is called homophony.
Within the context of the Western musical tradition, the term is usually used to refer to music of the late Middle Ages and Renaissance. Baroque forms such as fugue, which might be called polyphonic, are usually described instead as contrapuntal. Also, as opposed to the species terminology of counterpoint, polyphony was generally either "pitch-against-pitch" / "point-against-point" or "sustained-pitch" in one part with melismas of varying lengths in another. In all cases the conception was probably what Margaret Bent (1999) calls "dyadic counterpoint", with each part being written generally against one other part, with all parts modified if needed in the end. This point-against-point conception is opposed to "successive composition", where voices were written in an order with each new voice fitting into the whole so far constructed, which was previously assumed.
The term polyphony is also sometimes used more broadly, to describe any musical texture that is not monophonic. Such a perspective considers homophony as a sub-type of polyphony.
Traditional (non-professional) polyphony has a wide, if uneven, distribution among the peoples of the world. Most polyphonic regions of the world are in sub-Saharan Africa, Europe and Oceania. It is believed that the origins of polyphony in traditional music vastly predate the emergence of polyphony in European professional music. Currently there are two contradictory approaches to the problem of the origins of vocal polyphony: the Cultural Model, and the Evolutionary Model. According to the Cultural Model, the origins of polyphony are connected to the development of human musical culture; polyphony came as the natural development of the primordial monophonic singing; therefore polyphonic traditions are bound to gradually replace monophonic traditions. According to the Evolutionary Model, the origins of polyphonic singing are much deeper, and are connected to the earlier stages of human evolution; polyphony was an important part of a defence system of the hominids, and traditions of polyphony are gradually disappearing all over the world.:198-210
Although the exact origins of polyphony in the Western church traditions are unknown, the treatises Musica enchiriadis and Scolica enchiriadis, both dating from c. 900, are usually considered the oldest extant written examples of polyphony. These treatises provided examples of two-voice note-against-note embellishments of chants using parallel octaves, fifths, and fourths. Rather than being fixed works, they indicated ways of improvising polyphony during performance. The Winchester Troper, from c. 1000, is the oldest extant example of notated polyphony for chant performance, although the notation does not indicate precise pitch levels or durations.
European polyphony rose out of melismatic organum, the earliest harmonization of the chant. Twelfth-century composers, such as Léonin and Pérotin developed the organum that was introduced centuries earlier, and also added a third and fourth voice to the now homophonic chant. In the thirteenth century, the chant-based tenor was becoming altered, fragmented, and hidden beneath secular tunes, obscuring the sacred texts as composers continued to play with this new invention called polyphony. The lyrics of love poems might be sung above sacred texts in the form of a trope, or the sacred text might be placed within a familiar secular melody. The oldest surviving piece of six-part music is the English rota Sumer is icumen in (c. 1240).
These musical innovations appeared in a greater context of societal change. After the first millennium, European monks decided to start translating the works of Greek philosophers into the vernacular. Western Europeans were aware of Plato, Socrates, and Hippocrates during the Middle Ages. However they had largely lost touch with the content of their surviving works because the use of Greek as a living language was restricted to the lands of the Eastern Roman Empire (Byzantium). Once these ancient works started being translated thus becoming accessible, the philosophies had a great impact on the mind of Western Europe. This sparked a number of innovations in medicine, science, art, and music.
Western Europe and Roman Catholicism
European polyphony rose prior to, and during the period of the Western Schism. Avignon, the seat of the antipopes, was a vigorous center of secular music-making, much of which influenced sacred polyphony.
It was not merely polyphony that offended medieval ears, but the notion of secular music merging with the sacred and making its way into the papal court. It gave church music more of a jocular performance quality, removing the solemnity of worship they were accustomed to. The use of and attitude toward polyphony varied widely in the Avignon court from the beginning to the end of its religious importance in the fourteenth century. Harmony was not only considered frivolous, impious, and lascivious, but an obstruction to the audibility of the words. Instruments, as well as certain modes, were actually forbidden in the church because of their association with secular music and pagan rites. Dissonant clashes of notes gave an unsettling feeling that was labeled as evil, fueling the argument against polyphony as the devil’s music. After banishing polyphony from the Liturgy in 1322, Pope John XXII warned in his 1324 bull Docta Sanctorum Patrum against the unbecoming elements of this musical innovation. Pope Clement VI, however, indulged in it.
More recently, the Second Vatican Council (1962–1965) stated: "Gregorian chant, other things being equal, should be given pride of place in liturgical services. But other kinds of sacred music, especially polyphony, are by no means excluded.... Religious singing by the people is to be skillfully fostered, so that in devotions and sacred exercises, as also during liturgical services, the voices of the faithful may ring out”.
Notable works and artists
- Johann Sebastian Bach, List of famous compositions
- Tomas Luis de Victoria
- William Byrd, Mass for Five Voices
- Thomas Tallis
- Orlandus Lassus, Missa super Bella'Amfitrit'altera
- Guillaume de Machaut, Messe de Nostre Dame
- Jacob Obrecht
- Palestrina, Missa Papae Marcelli
- Josquin des Prez, Missa Pange Lingua
- Gregorio Allegri, Miserere
Protestant Britain and America
English Protestant west gallery music included polyphonic multi-melodic harmony, including fuguing tunes, by the mid-18th century. This tradition passed with emigrants to North America, where it was proliferated in tunebooks, including shape-note books like The Southern Harmony and The Sacred Harp. While this style of singing has largely disappeared from British and American sacred music, it survived in the rural Southern United States, until it again began to grow a following throughout the United States and even in places such as Ireland, the United Kingdom, Poland, Australia and New Zealand, among others.
- Byzantine chant
- Ojkanje singing, in Croatia
- Ganga singing, in Croatia, Montenegro and Bosnia and Herzegovina
- Epirote singing, in northern Greece and southern Albania (see below)
- Iso-polyphony in southern Albania (see below)
- Gusle singing, in Serbia, Montenegro, Bosnia and Herzegovina, Croatia and Albania
- Lazarice singing, in Serbia
- Woman choirs of Shopi and Pirin, in Bulgaria
Balkan drone music is described as polyphonic due to Balkan musicians using a literal translation of the Greek polyphōnos ('many voices'). In terms of Western classical music, it is not strictly polyphonic, due to the drone parts having no melodic role, and can better be described as multipart.
The polyphonic singing tradition of Epirus is a form of traditional folk polyphony practiced among Aromanians, Albanians, Greeks, and Macedonian Slavs in southern Albania and northwestern Greece. This type of folk vocal tradition is also found in the Republic of Macedonia and Bulgaria. Albanian polyphonic singing can be divided into two major stylistic groups as performed by the Tosks and Labs of southern Albania. The drone is performed in two ways: among the Tosks, it is always continuous and sung on the syllable ‘e’, using staggered breathing; while among the Labs, the drone is sometimes sung as a rhythmic tone, performed to the text of the song. Two-, three- and four-voice polyphony can be distinguished.
The phenomenon of Albanian folk iso-polyphony (Albanian iso-polyphony) has been proclaimed by UNESCO a "Masterpiece of the Oral and Intangible Heritage of Humanity". The term iso refers to the drone, which accompanies the iso-polyphonic singing and is related to the ison of Byzantine church music, where the drone group accompanies the song.
Polyphony in the Republic of Georgia is arguably the oldest polyphony in the Christian world. Georgian polyphony is traditionally sung in three parts with strong dissonances, parallel fifths, and a unique tuning system based on perfect fifths. Georgian Polyphonic Singing has been proclaimed by UNESCO an Intangible Cultural Heritage of Humanity. See Music of Georgia (country)#Traditional vocal polyphony. Polyphony plays a crucial role in Abkhazian traditional music. Polyphony is present in all genres where the social environment provides more than one singer to support the melodic line. Readers of Jordania's book may remember the recollection of I. Zemtsovsky, when a dozing Abkhazian started singing a drone to support a singer unknown to him. Abkhazian two- and three-part polyphony is based on a drone (sometimes a double drone). Two-part drone songs are considered by Abkhazian and Georgian scholars the most important indigenous style of Abkhazian polyphony. Two-part drone songs dominate in Gudauta district, the core region of ethnic Abkhazians. Millennia of cultural, social and economic interactions between Abkhazians and Georgians on this territory resulted in reciprocal influences, and in particular, the creation of a new, so-called “Georgian style” of three-part singing in Abkhazia, unknown among Adighis. This style is based on two leading melodic lines (performed by soloists - akhkizkhuo) singing together with the drone or ostinato base (argizra). The indigenous Abkhazian style of three-part polyphony uses double drones (in fourths, fifths, or octaves) and one leading melodic line at a time. Abkhazians use a very specific cadence: a tetrachordal downward movement, ending on the interval of a fourth.:55
Chechens and Ingushes
Both Chechen and Ingush traditional music could be very much defined by their tradition of vocal polyphony. As in other North Caucasian musical cultures, Chechen and Ingush polyphony is based on a drone. Unlike most of the other North Caucasian polyphonic traditions (where two-part polyphony is the leading type), Chechen and Ingush polyphony is mostly three-part. The middle part, the carrier of the main melody of songs, is accompanied by a double drone, holding the interval of a fifth “around” the main melody. Intervals and chords used in Chechen and Ingush polyphony are often dissonances (sevenths, seconds, fourths). This is quite usual in all North Caucasian traditions of polyphony as well, but in Chechen and Ingush traditional songs sharper dissonances are used. In particular, a specific cadence, where the final chord is a dissonant three-part chord consisting of a fourth and a second on top (c-f-g), is quite unique to the North Caucasus. Only on the other side of the Caucasus mountains, in western Georgia, are there a few songs that finish on the same dissonant chord (c-f-g).:60-61
- Hendrik van der Werf (1997). "Early Western polyphony", Companion to Medieval & Renaissance Music. Oxford University Press. ISBN 0-19-816540-4.
- Margaret Bent (1999). "The Grammar of Early Music: Preconditions for Analysis", Tonal Structures of Early Music. New York: Garland Publishing. ISBN 0-8153-2388-3.
- DeVoto, Mark (2015). "Polyphony". Encyclopædia Britannica Online. Retrieved 2015-12-01.
- Jordania, Joseph (2011). Why do People Sing? Music in Human Evolution. Logos. pp. 60–70. ISBN 978-9941-401-86-2.
- Bruno Nettl. Polyphony in North American Indian music. Musical Quarterly, 1961, 47:354-362
- Joseph Jordania (2006). Who Asked the First Question? The Origins of Human Choral Singing, Intelligence, Language and Speech (PDF). Tbilisi: Logos. ISBN 99940-31-81-3.
- Riemann, Hugo. History of music theory, books I and II: polyphonic theory to the sixteenth century, Book 1. Da Capo Press. June 1974.
- Albright, Daniel (2004). Modernism and Music: An Anthology of Sources. University of Chicago Press. ISBN 0-226-01267-0.
- Riemann, Hugo. History of music theory, books I and II: polyphonic theory to the sixteenth century, Book 2. Da Capo Press. June 1974.
- Pope John XXII (1879). "Translated from the original Latin of the bull Docta sanctorum patrum as given in Corpus iuris canonici, ed. a. 1582" (PDF). pp. 1256–1257.
- Vatican II, Constitution on the Liturgy, 112-118
- Selected Discography on Multipart Singing in Serbia & Montenegro
- Music-cultures in contact: convergences and collisions
- Koço, Eno (27 February 2015). A Journey of the Vocal Iso(n). Cambridge Scholars Publishing. p. xx. ISBN 978-1-4438-7578-3. A free, unpublished version of this passage is available on Google Books.
- Bart Plantenga. Yodel-ay-ee-oooo. Routledge, 2004. ISBN 978-0-415-93990-4, p. 87 Albania: "Singers in Pogoni region perform a style of polyphony that is also practised by locals in Vlach and Slav communities [in Albania].
- Engendering Song: Singing and Subjectivity at Prespa by Jane C. Sugarman,1997,ISBN 0-226-77972-6,page 356,"Neither of the polyphonic textures characteristic of south Albanian singing is unique to Albanians. The style is shared with Greeks in the Northwestern district of Epirus (see Fakiou and Romanos 1984) while the Tosk style is common among Aromanian communities from the Kolonje region of Albania the so called Farsherotii (see Lortat-Jacob and Bouet 1983) and among Slavs of the Kastoria region of Northern Greece (see N.Kaufamann 1959 ). Macedonians in the lower villages of the Prespa district also formerly sang this style "
- European voices: Multipart singing in the Balkans and the ..., Volume 1 By Ardian Ahmedaja, Gerlinde Haid page 241
- "Albanian Folk Iso-polyphony". UNESCO. Retrieved 31 December 2010.
- "Georgian Polyphonic Singing". UNESCO.
- Thirteenth-Century Polyphony
- Tuning and Intonation in Fifteenth and Sixteenth Century Polyphony
- World Routes in Albania - Iso-Polyphony in Southern Albania on BBC Radio 3
- World Routes in Georgia - Ancient polyphony from the Caucasus region on BBC Radio 3
- Aka Pygmy Polyphony African Pygmy music, with photos and soundscapes | https://en.wikipedia.org/wiki/Polyphonic |
4.09375 | There are several calculations that a country can make when trying to measure its economic progress. The gross domestic product (GDP) has become the foremost measure of economic activity for most countries. It is the measure of a nation's goods and services that it produces over a period of time. The goods and services that are measured are those that the country actually produces within its borders. GDP is expressed in the country's own currency. Since the measurement of GDP is so widely used and reported, it is important to know how to calculate the growth rate of nominal GDP.
1. Calculate the GDP for the latest period of the comparison. A growth rate requires that 2 separate time periods be compared. There are 4 areas of spending that need to be added together to make up GDP.
- Add together that year's consumer spending, or consumption. This is the sum that consumers spend on durable goods, nondurable goods, and services. This would include personal items such as food, clothing, rent, and services like health care. Imported goods are not included.
- Add together all investments. This is all of the money spent on capital equipment, increases in inventory, and structures. Items that would fall into this category include the purchase of business machinery, new homes, and the construction of new factories. Stocks and bonds are not included here since they do not add to any actual output.
- Add together all government spending. This is the accumulation of spending by all levels of government on goods and services. Examples include military purchases, teacher pensions and government salaries. Deducted from this amount are government transfer payments such as welfare or unemployment payments.
- Determine the net exports. The sum of all imports is calculated and then subtracted from the sum of all exports. The difference between the amount of foreign goods consumed domestically and the amount of domestic goods consumed in foreign countries makes up the net exports. When exports exceed imports, it adds to the amount of GDP.
2. Calculate the GDP for the prior period using the same 4 areas of spending for that period.
3. Compare the prior GDP(1) to the latest GDP(2). The growth rate (GR) equals GDP(2) divided by GDP(1), minus 1. In other words, GR = [GDP(2) / GDP(1)] - 1.
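As a quick sketch of steps 1-3 in code, here is a small Python example; every spending figure is a made-up number for illustration, not data for any real country.

```python
# Hypothetical figures (in billions) to illustrate the steps above.

def gdp(consumption, investment, government, exports, imports):
    """Expenditure approach: C + I + G + (X - M)."""
    return consumption + investment + government + (exports - imports)

gdp_prior  = gdp(consumption=11_000, investment=3_000,
                 government=3_500, exports=2_200, imports=2_700)   # 17,000
gdp_latest = gdp(consumption=11_500, investment=3_200,
                 government=3_600, exports=2_300, imports=2_800)   # 17,800

growth_rate = gdp_latest / gdp_prior - 1    # GR = [GDP(2)/GDP(1)] - 1
print(f"{growth_rate:.1%}")                 # 4.7%
```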
- Another measure of economic growth often referred to is the gross national product (GNP). The GNP is the sum of all production of a nation's permanent residents regardless if the production occurred within that country's borders.
- Many economists use real GDP instead of nominal GDP when determining the growth rate of an economy. Real GDP adds one more step to the summation of nominal GDP by factoring out inflation or deflation.
- The GDP of a nation only collects total output but does not distinguish between beneficial transactions or destructive transactions. Economic activity can be stimulated due to the impact of natural disasters or the effect of crime or contentious litigation. Money spent on medical costs due to pollution, the cleanup of environmental destruction or the construction of a new prison is added to the GDP and considered as adding to the economic growth of a country. The GDP also ignores income distribution amongst workers within a country. While the GDP might reflect significant growth, it is possible that only the top wage earners may see any benefit.
Thanks to all authors for creating a page that has been read 71,918 times. | http://www.wikihow.com/Calculate-the-Growth-Rate-of-Nominal-GDP |
4.0625 | 1. Domain Name System (DNS), a locator service in Windows, is an industry-standard protocol that locates computers on an IP-based network. IP networks, such as the Internet and Windows networks, rely on number-based addresses to process data. Users, however, can more easily remember name addresses, so it is necessary to translate user-friendly names (www.microsoft.com) into addresses that the network can recognize (22.214.171.124). Before DNS, a Hosts file was used -- a manually created file residing on a host computer that associates host names with IP addresses -- and it is still used today, in fact.
Note: For instance, Host name addresses such as www.yahoo.com are addresses you see and may use every day and are what we recognize as intellectual information. IP addresses are numbers such as 126.96.36.199 that mean the same thing and which the computer actually uses to find the sites. Even though a user may use either the "Host" or "IP address" as a site address in Internet Explorer, the computer must first look up and translate the "Host name" to an "IP address" before a connection is made.
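As a small illustration of that translation step, here is a minimal Python sketch; the exact address printed depends on your resolver, and the reverse lookup only succeeds where a PTR record exists.

```python
import socket

ip = socket.gethostbyname("www.microsoft.com")   # name -> IP, via your resolver
print(ip)

try:
    host, _aliases, _addrs = socket.gethostbyaddr(ip)   # IP -> name (PTR lookup)
    print(host)
except socket.herror:
    print("no reverse (PTR) record for", ip)
```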
2. DNS Servers map IP addresses to computer names and computer names to IP addresses. By doing so, they provide the mechanism to locate network resources. The DNS WMI Provider allows applications to interact with DNS Servers through the unified management framework of Windows Management Instrumentation (WMI). A DNS Server is a computer that completes the process of name resolution in DNS and contains zone files that enable them to resolve names to IP addresses and IP addresses to names. When queried, a DNS Server will respond in one of three ways:
- The server returns the requested name-resolution or IP-resolution data.
- The server returns a pointer to another DNS Server that can service the request.
- The server indicates that it does not have the requested data.
3. A DNS zone is a set of files or records (more precisely, a database of resource record entries) that corresponds to part of the DNS hierarchical name space. DNS zones are used to delineate which DNS Servers are responsible (authoritative) for resolving name-resolution queries for a given section of the DNS hierarchy. DNS zones differ from the domain structure in the following fashion: zones can be composed of one or more DNS domains. One zone in the gadgets.widgets.microsoft.com domain tree might be authoritative for the gadgets and widgets domains.
4. DNS WMI Provider Overview
a. A provider is an architectural element of Windows Management Instrumentation (WMI). WMI defines a unified architecture for describing, accessing, and instrumenting objects. Part of this architecture is a large database of WMI classes used to carry out remote management tasks on specific objects.
b. WMI providers act as intermediaries between WMI and one or more managed objects. When WMI receives a request from a management application for data that is not available from the CIM repository or for notifications of events that WMI does not support, it forwards the request to a provider. Providers supply data and event notifications for managed objects that are specific to their particular domain. A provider extends the WMI schema of classes to allow WMI to work with new types of objects. The DNS WMI Provider defines classes for querying and configuring a DNS Server, along with its associated DNS zones and DNS records.
c. The DNS WMI provider exposes a number of DNS objects to clients, including DNS Server, DNS domain, and DNS RR objects. Through those objects, clients are able to perform DNS management activities.
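As a rough sketch of what interacting with the DNS WMI Provider can look like, the snippet below lists zones through the root\MicrosoftDNS namespace. It assumes a Windows machine running the DNS Server service and the third-party Python wmi package, so treat it as an illustration rather than a supported procedure.

```python
import wmi   # third-party package: pip install wmi

dns = wmi.WMI(namespace=r"root\MicrosoftDNS")   # the DNS WMI provider's namespace
for zone in dns.MicrosoftDNS_Zone():            # one instance per hosted zone
    print(zone.Name, zone.ZoneType)
```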
a. "HOW TO: Configure Windows XP TCP/IP to Use DNS (Q305553)."
b. "Logging WMI Activity."
c. "Reinstalling WMI."
d. "Secrets of Windows Management Instrumentation."
6. The article [Q175722] describes the following errors you may receive when starting Internet Explorer, suggests troubleshooting procedures, and discusses the reasons for the anomaly below:
? The page cannot be displayed
? The page you are looking for is currently unavailable. The Web site might be experiencing technical difficulties, or you may need to adjust your browser settings.
? Cannot find server or DNS Error
a. Multiple copies of the Wsock32.dll file are installed on your computer.
b. An incorrect version of the Wsock32.dll file is installed on your computer.
c. If you try to view a file (file://) you do not have permissions to view.
d. Intermittent connection problems, low system resources, and dropped connections while attempting to load the Web page.
e. You are using America Online as your Internet service provider, and there is not a Dial-Up Adapter installed, but there is an AOL Adapter.
f. Unable to resolve the DNS name, or the DNS server returned an error.
g. Corrupted cookies can also cause this issue with Internet Explorer 5.
h. The Internet Explorer connection settings for the dial-up connection are configured to use a proxy server.
a. Download and follow the instructions for "IEFix" - a general purpose fix for Internet Explorer (Win 98/ME/2000/XP):
Note: Else, some of the core Internet Explorer "?.dll files" may not be correctly registered or need registering. First, verify the exact path of where the Iexplore.exe file is found and used as noted for the "Primary. . ." example. Second, click Start, Run, type exactly "Primary Hard Drive Letter:\Program Files\Internet Explorer\iexplore.exe" /rereg, and then either click OK or press Enter.
? Registers Urlmon.dll, Mshtml.dll, Actxprxy.dll, Oleaut32.dll, Shell32.dll, Shdocvw.dll, [Q281679].
? Refreshes Internet Explorer using IE.INF method. Note:
"Unable to Install Internet Explorer 6 on Windows XP (Q304872)"
"How to Reinstall or Repair Internet Explorer and Outlook Express in Windows XP (Q318378)"
? Initiates "SFC /Scannow" (Win2K&XP), [Q310747].
Caveat: Using IEFix myself, the utility does not suggest or require a reboot, but I do suggest that you do. In addition, if an extra icon for IE is located on the Desktop afterwards, you may delete it.
b. "WinSock XP Fix" (Image) offers a last resort if your Internet connectivity has been corrupted due to invalid or removed registry entries. It can often cure the problem of lost connections after the removal of Adware components or improper uninstall of firewall applications or other tools that modify the XP network and Winsock settings. If you encounter connection problems after removing network related software, Adware or after registry clean-up; and all other ways fail, then "download" and give WinSock XP Fix a try, a 1,412kb file. It can create a registry backup of your current settings, so it is fairly safe to use.
c. "LSP-Fix" is a free utility that may be downloaded to repair certain problems associated with Internet software when you can no longer access Web sites due to bugs in the LSP software or deletion of software. LSP-Fix repairs the Winsock LSP chain by removing the entries left behind when LSP software is removed by hand (or when errors in the software itself break the LSP chain), and removing any gaps in the chain. | http://www.cnet.com/forums/discussions/can-t-find-server-or-dns-fault-131550/ |
4.125 | One hundred years after the RMS Titanic foundered in icy waters 375 miles south of Newfoundland, the dangers of vessels striking an iceberg continue.
Shipboard radar, satellite photos, global positioning systems (GPS) and aircraft patrols have made the North Atlantic safer now than it was during the early 1900s.
However, despite improvements in detection methods and more accurate ship positions, as well as trending warmer seas melting the icebergs faster, ships continue to have close encounters with these frozen, floating objects.
According to the BBC, between 1980 and 2005 there have been 57 incidents with vessels involving icebergs.
The greatest danger from icebergs today is from much smaller objects than portrayed here. It is believed that the RMS Titanic struck a small- to medium-sized iceberg. Graphic by Al Blasko, AccuWeather.com
On Nov. 23, 2007, the MS Explorer struck submerged ice, believed to be part of an iceberg, and sank in the Southern Ocean.
While the number of icebergs tends to vary greatly from year to year there are, on average, 15,000 icebergs born annually. Interestingly, in some years there can be up to 40,000 icebergs calved. In the Northern Hemisphere, between 1 and 2 percent of all the icebergs reach southward to 48 degrees North.
The most significant problem facing shipping and detection measures has to do with the size of the icebergs.
In the Northern Hemisphere, most of the icebergs are calved from West Greenland glaciers. Calving occurs when pieces of the ice break off and float into the sea, or when a large iceberg breaks up into a smaller one. From Greenland, the surviving icebergs eventually drift southward via the Labrador Current into the northwestern Atlantic Ocean.
Image appears courtesy of the United States Coast Guard.
In the northwestern Atlantic, as the cold Labrador current interacts with the warm Gulf Stream, eddies form. These swirls of water, combined with surface winds can transport the icebergs farther south (and east) on occasion.
In the Southern Hemisphere, the calving occurs around Antarctica from ice shelves and glaciers. Similarly wind and currents transport the icebergs away from the South Pole continent.
According to Dr. Peter Wadhams, "There are more icebergs now than there were in 1912."
Wadhams is Professor of Ocean Physics, Department of Applied Mathematics and Theoretical Physics, at the University of Cambridge, United Kingdom.
"During the past 10 years, the downhill flow rate of the Greenland glaciers has doubled in speed and is contributing to a larger number of icebergs being calved," Wadhams said.
Based on a study done by the Geological Survey of Denmark and Greenland, which tracked calving of Greenland icebergs as far back as 1890, the calving rate in recent years was matched only during a period during the 1930s.
Wadhams stated that warmer seas were accelerating the melting process, but at the same time are calving smaller bergs out of the larger ones.
The smaller icebergs are known as growlers (less than 3.3 feet high by less than 16 feet long) and bergy bits (3.3 to 16 feet high by 16 to 49 feet long).
"The growlers and bergy bits are difficult to detect by radar and satellite, yet are still capable of damaging or sinking a ship. Since there are more icebergs and they are melting faster, we can expect a bigger population of growlers and berg bits, so more danger to shipping," Wadhams explained.
There continues to be more evidence that ocean temperatures have been rising and for a longer period of time than once thought.
Ocean water warming was believed to have initiated about 50 years ago, but is now believed to have begun over 100 years ago according to a study done by the Scripps Institution of Oceanography at UC San Diego.
According to a National Aeronautics and Space Administration (NASA) funded 2006 study that spanned 20 years, the combined loss of mountain glaciers and ice caps averaged 402 gigatonnes per year.
The retreat of Arctic sea ice has opened new, shorter fuel-saving routes for shipping during the warmer months of the year. Sea ice is different than icebergs and forms as sea surface water freezes.
In addition, a number of vessels, including super freighters are ripping along routes through the Southern Ocean, avoiding the log jam of vessels in the Panama and Suez canals.
However, since there are more ships venturing into polar waters these days, the risk of collision from vessels striking the larger number of growlers or bergy bits out there also increases.
Not only are there dangers to ships, but also petroleum platforms in northern and southern latitudes.
So while research, technology and patrols over the past 100 years have made the sea less perilous in terms of striking the big icebergs, significant risks continue for coming in contact with the smaller, yet potentially destructive growlers and bergy bits.
In the months that followed after the Titanic sank near Newfoundland in 1912, the United States and 12 other nations formed the International Ice Patrol to warn ships of icebergs in the North Atlantic. This was joined by aircraft patrols in the 1930s, radar after World War II and improved satellite resolution and patrols during the latter half of the 20th century. The U.S. National Ice Center currently uses satellites to track large icebergs near Antarctica.
This image made from a webcam on board the MS Nordnorge shows the the Liberian-flagged M/S Explorer listing in Antarctic waters, Friday, Nov. 23, 2007. More than 150 passengers and crew took to lifeboats in Antarctic waters. (AP Photo)
The RMS Titantic foundered, bow first, on April 15, 1912. Of the 2,224 passengers, 1,514 drowned or succumbed to hypothermia in freezing waters. The wreck lies in 12,415 feet of water.
Observations on board Titanic indicated a 10 degree (F) drop in sea surface temperatures (from the lower 40s to the lower 30s) in two hours during the early evening of the 14th. This supports the idea that the Titanic passed from relatively warm Gulf Stream waters to the colder influence of the Labrador Current.
The water temperature in Titanic's vicinity at the time of the collision late in the evening of the 14th was said to be in the upper 20s. Sea water freezes at a lower temperature (approximately 28.4 degrees F) than freshwater.
The extremely frigid waters expedited hypothermia among the mass of people adrift. Most died within minutes after plunging into the sea.
A blast of arctic air will be accompanied by flurries and even a localized wall of snow in some communities in the Northeast and parts of the Midwest at the start of the Valentine's Day weekend.
Spring of 2016 could rank in the top 10 warmest on record for Canada.
The coldest air of the winter will plunge southward across much of the eastern United States and will feature single-digit and sub-zero temperatures in the Northeast during Valentine's Day weekend.
A multi-vehicle accident involving cars and tractor-trailers occurred amid snowy weather and caused the shutdown of Interstate 90 in Lake County, Ohio on Wednesday afternoon.
Conditions will be favorable for lake-effect snow through the end of the week, threatening low visibility and dangerous travel conditions.
As winter weather approaches, concern for pet safety grows. Make sure you know these useful tips.
Washington, D.C. (1899)
-15 F., all time record low (3rd day in a row at least -7 F.
Tallahassee, FL (1899)
(11th-14th) During an arctic outbreak temps fell to -2 F., the lowest ever registered in the sunshine state.
Philadelphia, PA (1899)
(11th-14th) 18.9" of snow; fourth biggest snowstorm on record. Unofficially, 44" between Philadelphia and Atlantic City. Blizzard conditions and high winds and bitter cold. | http://www.accuweather.com/en/weather-news/icebergs-still-a-threat-100-ye/63626 |
4.40625 | Great Vowel Shift
Perhaps the biggest single change in English pronunciation happened during the transition from Middle English to Modern English. Linguists call this the Great Vowel Shift. The shift began c. 1300 and continued through c. 1700, with the majority of the change occurring in the 15th and 16th centuries. So the language of Chaucer is largely pre-shift and the language of Shakespeare is largely post-shift, although the changes were underway before Chaucer was born and continued on after Shakespeare had died.
During the Great Vowel Shift, English speakers changed the way they pronounced long vowels. Before the shift, English vowels were pronounced in much the same way that they are spoken in modern continental European languages. After the shift, they had achieved their modern phonological values.
For example, a Middle English speaker would pronounce the long e in sheep as we pronounce the word shape today. Fine was pronounced fien (as in fiend). Sea was pronounced like the modern say.
The difference can be seen in similar words that enter English before and after the shift. For example, the long i sound in polite, a word which makes its appearance c. 1450 from the Latin politus, meaning polished, burnished, cultivated underwent the shift. The similarly constructed police, which was borrowed a hundred years later from the Medieval Latin politia, missed the shift and is pronounced in the traditional, continental manner.
Okay, so what exactly changed in the shift? Basically it was a shift in the position of the tongue when the long vowels were pronounced. The shift didn’t affect short vowels, which we pronounce today much like Chaucer did. But with long vowels, the tongue moved up in the mouth and either further forward or back. If you pronounce the letters A and E several times in succession, you can feel your tongue moving forward and back in your mouth. A is a fronted vowel; it is pronounced with the tongue forward. E is a backed vowel, pronounced with the tongue back. After the shift, fronted vowels were pronounced with the tongue higher and further forward than previously and backed vowels were pronounced higher and further back. And the long i and long u sounds, which prior to the shift were already pronounced with the tongue high, became diphthongs—vowel sounds with two distinct elements.
Also, in some words the long vowels e and o shifted to become short vowels, especially in compounds. Hence, we have the difference in pronunciation between bone, from the Old English ban, and bonfire, which is literally a bone fire or funeral pyre. Scotland retained the original pronunciation longer, spelling it bane-fire until c. 1800. Similarly the long e in sheep shortened in the word shepherd.
Now the shift wasn’t completely consistent. Some words resisted the change. Sea shifted, but great and break didn’t. As a result ea has two distinct sounds in English.
The shift wasn’t consistent over different regions either. The shift occurred later and was weaker as you moved northward into the North of England and into Scotland. So Scottish pronunciation is closer to Chaucer’s than is the modern London accent.
Copyright 1997-2016, by David Wilton | http://www.wordorigins.org/index.php/site/comments/great_vowel_shift/ |
4.03125 | 3 Answers | Add Yours
A Punnett square is a tool used in Mendelian inheritance to show the possible genotypes that are formed when a male and female gamate unite. It can also be used to predict the most likely phenotype of how the trait will be expressed.
A Punnett square is made by drawing a square divided into four smaller squares (2x2). The male's allele is written across the top. The female's allele is written down the side. In each of the four smaller squares is written the combination of one male allele and one female allele.
I have attached a link to a picture of a Punnett square.
A Punnett square is something used to find out how traits are passed down. Punnett squares were the result of the work by Mendel on pea plants and he realized many different things about how traits pass down. An example: Say you want to cross a plant that is tall to one that is short. The short trait is recessive in pea plants and the tall trait is dominant. (Remember that the dominant trait will show in the plant over the recessive, it only needs one allele from a parent, while the recessive can only show when it's totally recessive.) Since the short trait is recessive, we know that the short plant must have two short alleles from its "parents". The tall plant could have two tall, or one tall and one short allele, since the tall trait is dominant. I'll show both examples.
First, let's look at a short plant, which would have two short alleles, this is also called homozygous recessive. We'll use lower case t to mean the recessive trait. And we'll cross it with a tall plant that is homozygous dominant, or it has two tall alleles. We'll use upper case T to mean the dominant trait. So we make a table separating the two alleles from both plants, one across the top and one down the left side. We fill in the table by using the allele at the top of each column in each spot below it and using the allele at the left of the row in each spot to the right.
t Tt Tt
     T    T
t    Tt   Tt
t    Tt   Tt
#2 Now let's cross the short plant (homozygous recessive tt) with the tall plant that has one tall and one short allele (heterozygous Tt).
t Tt tt
     T    t
t    Tt   tt
t    Tt   tt
Just like all probability, these numbers aren't exact in real life, but the more you experiment with, the closer it gets to those numbers.
Hopefully this gives you a good start.
A Punnett square is used to discover what the organism will turn out to be when two organisms are "crossed". This method started with the monk Gregor Mendel and his pea pod crossing and is still used as a method today.
If you're wondering how to use one, it's pretty simple. Let's use XX as the dominant trait and xx as the recessive trait. First you need to know that dominant is greater than recessive.
1) You draw a square and divided into four.
2) You put one "X" on the top left hand side and one "X" on the left side.
3) You put one "x" on top right hand sideand one "x" on the left side.
4) In the first box, XX will go in.That is a dominant trait.
5) In the second box (the box next to the first) , Xx will go in. That is a dominant trait.
6) In the third box, Xx will go in. That is a dominant trait.
7) In the last box, xx will go in. That is a recessive trait.
8) If you wanted to put that in percentage, It would be 75 percent dominant and 25 percent recessive.
Hope this helps!!! :D
We’ve answered 301,356 questions. We can answer yours, too.Ask a question | http://www.enotes.com/homework-help/what-punnett-square-how-used-270421 |
4.15625 | the ratio of an impressed charge on a conductor to the corresponding change in potential.
the ratio of the charge on either conductor of a capacitor to the potential difference between the conductors.
the property of being able to collect a charge of electricity. Symbol: C
Capacitance is the ability of a body to store an electrical charge. A material with a
large capacitance holds more electric charge at a given voltage, than one with ...
Introduction to the capacitance of a two place capacitor.
A basic overview of capacitors and capacitance. By David Santo Pietro.
Capacitance is typified by a parallel plate arrangement and is defined in terms of
charge storage: where. Q = magnitude of charge stored on each plate.
The capacitance of flat, parallel metallic plates of area A and separation d is ... F,
is the SI unit for capacitance, and from the definition of capacitance is seen to ...
Electronics Tutorial about Capacitance and Charge on a Capacitors Plates and
how the Charge affects the Capacitance of a Capacitor.
Capacitance is the ability of a component or circuit to collect and store energy in
the form of an electrical charge.
Nov 13, 2015 ... This section of the Electricity and Magnetism Primer provides a thorough
discussion of electrical capacitance. It contains several Interactive ...
Capacitance and Dielectrics. 5.1 Introduction. A capacitor is a device which
stores electric charge. Capacitors vary in shape and size, but the basic | http://www.ask.com/web?q=capacitance&qsrc=8 |
4.03125 | by Chris Woodford. Last updated: December 27, 2015.
Does it feel boiling hot to you today or is it just me? And how we can tell? If I say today's hotter than yesterday and you disagree, how can we settle the argument? One easy way is to measure the temperature with a thermometer on both days and compare the readings. Thermometers are simple scientific instruments based on the idea that metals change their behavior in a very precise way as they get hotter (gain more heat energy). Let's take a closer look at how these handy gadgets work.
Photo: Now that's what I call cold! This dial (pointer) thermometer shows the temperature inside my food freezer:around −30°C (inner scale) or −25°F (outer scale). It's exactly the same temperature, but measured in two slightly different ways.
The simplest thermometers really are simple! They're just very thin glass tubes filled with a small amount of mercury—a rather special metal that's a liquid at ordinary, everyday temperatures. When mercury gets hotter, it expands (increases in size) by an amount that's directly related to the temperature. So if the temperature increases by 20 degrees, the mercury expands and moves up the scale by twice as much as if the temperature increase is only 10 degrees. All we have to do is mark a scale on the glass and we can easily figure out the temperature.
How do we figure out the scale? Making a Celsius (centigrade) thermometer is easy, because it's based on the temperatures of ice and boiling water. These are called the two fixed points. We know ice has a temperature close to 0°C while water boils at 100°C. If we dip our thermometer in some ice, we can observe where the mercury level comes to and mark the lowest point on our scale, which will be roughly 0°C. Similarly, if we dip the thermometer in boiling water, we can wait for the mercury to rise up and then make a mark equivalent to 100°C. All we have to do then is divide the scale between these two fixed points into 100 equal steps ("centi-grade" means 100 divisions) and, hey presto, we have a working thermometer!
Photo: A mercury thermometer marked with a Fahrenheit scale. It's named for German physicist Daniel Fahrenheit (1686–1736), who made the first mercury thermometer in the early 18th century. The Celsius scale is named for the Swedish scientist who devised it, Anders Celsius (1701–1744). Photo by David McLeod courtesy of Defense Imagery.
Not all thermometers work this way, however. The one shown in our top photo has a metal pointer that moves up and down a circular scale. Open up one of these thermometers and you'll see the pointer is mounted on coiled piece of metal called a bimetallic strip that's designed to expand and bend as it gets hotter (see our article on thermostats to find out how it works). The hotter the temperature, the more the bimetallic strip expands, and the more it pushes the pointer up the scale.
Artwork: How a dial thermometer works: This is the mechanism that powers a typical dial thermometer, illustrated in a patent by Charles W. Putnam from 1905. At the top, we have the usual pointer and dial arrangement. The bottom artwork shows what's happening round the back. A bimetallic strip (yellow) is tightly coiled and attached both to the frame of the thermometer and the pointer. It's made up of two different metals bonded together, which expand by different amounts as they heat up. As the temperature changes, the bimetallic strip curves more or less tightly (contracts or expands) and the pointer, attached to it, moves up or down the scale. Artwork from US Patent 798,211: Thermometer courtesy of US Patent and Trademark Office.
Photo: Here's the coiled bimetallic strip from an actual dial thermometer (the freezer thermometer in our top photo). It's easy to see how it works: if you turn the pointer with your hand toward colder temperatures, the coiled strip tightens up; turn the pointer the other way and the strip loosens.
One problem with mercury and dial thermometers is that they take a while to react to temperature changes. Electronic thermometers don't have that problem: you simply touch the thermometer probe onto the object whose temperature you want to measure and the digital display gives you an instant temperature reading.
Electronic thermometers work in an entirely different way to mechanical ones that use lines of mercury or spinning pointers. They're based on the idea that the resistance of a piece of metal (the ease with which electricity flows through it) changes as the temperature changes. As metals get hotter, atoms vibrate more inside them, it's harder for electricity to flow, and the resistance increases. Similarly, as metals cool down, the electrons move more freely and the resistance goes down. (At temperatures close to absolute zero, the lowest theoretically possible temperature of −273.15°C or −459.67°F, resistance disappears entirely in a phenomenon called superconductivity.)
An electronic thermometer works by putting a voltage across its metal probe and measuring how much current flows through it. If you put the probe in boiling water, the water's heat makes electricity flow through the probe less easily so the resistance goes up by a precisely measurable amount. A microchip inside the thermometer measures the resistance and converts it into a measurement of temperature.
The main advantage of thermometers like this is that they can give an instant reading in any temperature scale you like—Celsius, Fahrenheit, or whatever it happens to be. But one of their disadvantages is that they measure the temperature from moment to moment, so the numbers they show can fluctuate quite dramatically, sometimes making it difficult to take an accurate reading.
Photo: A compact electronic medical thermometer. You put the metal probe (left) in your mouth, or somewhere else on your body, and read the temperature off the LCD display.
Measuring extreme temperatures
If you want to measure something that's too hot or cold for a conventional thermometer to handle, you'll need a thermocouple: a cunning device that measures temperature by measuring electricity. And if you can't get close enough to use even a thermocouple, you could try using a pyrometer, a kind of thermometer that deduces the temperature of an object from the electromagnetic radiation it gives off. | http://www.explainthatstuff.com/thermometers.html |
4.03125 | |This article does not cite any sources. (December 2007)|
Hydrology and geology
A high gradient indicates a steep slope and rapid flow of water (i.e. more ability to erode); whereas a low gradient indicates a more nearly level stream bed and sluggishly moving water, that may be able to carry only small amounts of very fine sediment. High gradient streams tend to have steep, narrow V-shaped valleys, and are referred to as young streams. Low gradient streams have wider and less rugged valleys, with a tendency for the stream to meander.
A stream that flows upon a uniformly erodible substrate will tend to have a steep gradient near its source, and a low gradient nearing zero as it reaches its base level. Of course, a uniform substrate would be rare in nature; hard layers of rock along the way may establish a temporary base level, followed by a high gradient, or even a waterfall, as softer materials are encountered below the hard layer.
On topographic maps, stream gradient can be easily approximated if the scale of the map and the contour intervals are known. Contour lines form a V-shape on the map, pointing upstream. By counting the number of lines that cross a certain segment of a stream, multiplying this by the contour interval, and dividing that quantity by the length of the stream segment, one obtains an approximation to the stream gradient.
Because stream gradient is customarily given in feet per 1000 feet, one should then measure the amount a stream segment rises and the length of the stream segment in feet, then multiply feet per foot gradient by 1000. For example, if one measures a scale mile along the stream length, and counts three contour lines crossed on a map with ten-foot contours, the gradient is approximately 5.7 feet per 1000 feet, a fairly steep gradient.
|This article about geography terminology is a stub. You can help Wikipedia by expanding it.| | https://en.wikipedia.org/wiki/Stream_gradient |
4.03125 | Soils that formed on the Earth's surface thousands of years ago and that are now deeply buried features of vanished landscapes have been found to be rich in carbon, adding a new dimension to our planet's carbon cycle.
The finding, reported today (May 25, 2014) in the journal Nature Geoscience, is significant as it suggests that deep soils can contain long-buried stocks of organic carbon which could, through erosion, agriculture, deforestation, mining and other human activities, contribute to global climate change.
An eroding bluff on the US Great Plains reveals a buried, carbon-rich layer of fossil soil. A team of researchers led by UW-Madison Assistant Professor of geography Erika Marin-Spiotta has found that buried fossil soils contain significant amounts of carbon and could contribute to climate change as the carbon is released through human activities such as mining, agriculutre and deforestation.
Credit: Jospeh Mason
"There is a lot of carbon at depths where nobody is measuring," says Erika Marin-Spiotta, a University of Wisconsin-Madison assistant professor of geography and the lead author of the new study. "It was assumed that there was little carbon in deeper soils. Most studies are done in only the top 30 centimeters. Our study is showing that we are potentially grossly underestimating carbon in soils."
The soil studied by Marin-Spiotta and her colleagues, known as the Brady soil, formed between 15,000 and 13,500 years ago in what is now Nebraska, Kansas and other parts of the Great Plains. It lies up to six-and-a- half meters below the present-day surface and was buried by a vast accumulation of windborne dust known as loess beginning about 10,000 years ago, when the glaciers that covered much of North America began to retreat.
The region where the Brady soil formed was not glaciated, but underwent radical change as the Northern Hemisphere's retreating glaciers sparked an abrupt shift in climate, including changes in vegetation and a regime of wildfire that contributed to carbon sequestration as the soil was rapidly buried by accumulating loess.
"Most of the carbon (in the Brady soil) was fire derived or black carbon," notes Marin-Spiotta, whose team employed an array of new analytical methods, including spectroscopic and isotopic analyses, to parse the soil and its chemistry. "It looks like there was an incredible amount of fire."
The team led by Marin-Spiotta also found organic matter from ancient plants that, thanks to the thick blanket of loess, had not fully decomposed. Rapid burial helped isolate the soil from biological processes that would ordinarily break down carbon in the soil.
Such buried soils, according to UW-Madison geography Professor and study co-author Joseph Mason, are not unique to the Great Plains and occur worldwide.
The work suggests that fossil organic carbon in buried soils is widespread and, as humans increasingly disturb landscapes through a variety of activities, a potential contributor to climate change as carbon that had been locked away for thousands of years in arid and semiarid environments is reintroduced to the environment.
The element carbon comes in many forms and cycles through the environment – land, sea and atmosphere – just as water in various forms cycles through the ground, oceans and the air. Scientists have long known about the carbon storage capacity of soils, the potential for carbon sequestration, and that carbon in soil can be released to the atmosphere through microbial decomposition.
The deeply buried soil studied by Marin-Spiotta, Mason and their colleagues, a one-meter-thick ribbon of dark soil far below the modern surface, is a time capsule of a past environment, the researchers explain. It provides a snapshot of an environment undergoing significant change due to a shifting climate. The retreat of the glaciers signaled a warming world, and likely contributed to a changing environment by setting the stage for an increased regime of wildfire.
"The world was getting warmer during the time the Brady soil formed," says Mason. "Warm-season prairie grasses were increasing and their expansion on the landscape was almost certainly related to rising temperatures."
The retreat of the glaciers also set in motion an era when loess began to cover large swaths of the ancient landscape. Essentially dust, loess deposits can be thick — more than 50 meters deep in parts of the Midwestern United States and areas of China. It blankets large areas, covering hundreds of square kilometers in meters of sediment.
The study conducted by Marin-Spiotta, Mason, former UW-Madison Nelson Institute graduate student Nina Chaopricha, and their colleagues was supported by the National Science Foundation and the Wisconsin Alumni Research Foundation.
—Terry Devitt, 608-262-8282, [email protected]
NOTE: A high-resolution photo to accompany this release can be downloaded at http://www.news.wisc.edu/newsphotos/soils-2014.html
Erika Marin-Spiotta | Eurek Alert!
Fossils Turn Out to Be a Rich Source of Information
09.02.2016 | Rheinische Friedrich-Wilhelms-Universität Bonn
The shield is crumbling
09.02.2016 | Friedrich-Alexander-Universität Erlangen-Nürnberg
Exceeding critical temperature limits in the Southern Ocean may cause the collapse of ice sheets and a sharp rise in sea levels
A future warming of the Southern Ocean caused by rising greenhouse gas concentrations in the atmosphere may severely disrupt the stability of the West...
Indications of light-induced lossless electricity transmission in fullerenes contribute to the search for superconducting materials for practical applications.
Superconductors have long been confined to niche applications, due to the fact that the highest temperature at which even the best of these materials becomes...
Researchers at King’s College London and the Wellcome Trust Sanger Institute in the United Kingdom have for the first time demonstrated a direct link between the Wbp2 gene and progressive hearing loss. The scientists report that the loss of Wbp2 expression leads to progressive high-frequency hearing loss in mouse as well as in two clinical cases of children with deafness with no other obvious features. The results are published in EMBO Molecular Medicine.
The scientists have shown that hearing impairment is linked to hormonal signalling rather than to hair cell degeneration. Wbp2 is known as a transcriptional...
Pollens, the bane of allergy sufferers, could represent a boon for battery makers: Recent research has suggested their potential use as anodes in lithium-ion batteries.
"Our findings have demonstrated that renewable pollens could produce carbon architectures for anode applications in energy storage devices," said Vilas Pol, an...
Automobiles increase the mobility of their users. However, their maneuverability is pushed to the limit by cramped inner city conditions. Those who need to...
09.02.2016 | Event News
02.02.2016 | Event News
26.01.2016 | Event News
09.02.2016 | Event News
09.02.2016 | Materials Sciences
| http://www.innovations-report.com/html/reports/earth-sciences/buried-fossil-soils-found-to-be-awash-in-carbon.html |
4.15625 | The universe is 100 million years older than thought, according to the best-ever map of the oldest light in space.
The adjustment brings the universe's age to 13.82 billion years, and means the universe is expanding slightly more slowly than scientists thought.
These discoveries come from a new all-sky map of ancient cosmic light by Europe's Planck mission, which has measured what's called the cosmic microwave background in greater detail than ever before.
"Astronomers worldwide have been on the edge of their seats waiting for this map," said Joan Centrella, Planck program scientist at NASA Headquarters in Washington, in a statement. NASA contributed technology for the Planck spacecraft, which is managed by the European Space Agency. "These measurements are profoundly important to many areas of science, as well as future space missions."
The cosmic microwave background (CMB) is light dating from just 380,000 years after the Big Bang. Before that time, the universe was so hot and dense that light couldn't travel through space without getting mired in a thick plasma of protons and electrons. When the universe finally cooled and expanded enough for atoms to form, light could travel freely for the first time, and this light has been flying through the universe ever since. [Photos: Planck Sees Big Bang Relics]
Astronomers first discovered the CMB by accident in 1964, and have been studying it ever since because of the precious clues about the universe's beginnings embedded in it.
For example, though the CMB is spread throughout space, it isn't entirely uniform. It displays small variations in temperature at different spots that scientists think correspond with regions of the early universe that were slightly more or less dense with energy. These fluctuations are thought to have been the seeds that eventually caused matter to clump in the denser spots and over time evolve into galaxies and stars and planets.
The new map shows these variations in more detail than ever before, and could help scientists distinguish between different theories of how the universe began. In general, Planck's measurements are consistent with a theory called the Standard Model, which posits that the variations in the CMB were caused by tiny random quantum fluctuations. However, the new map shows tantalizing hints that physics beyond the Standard Model may be needed to fully explain the CMB.
In particular, the CMB variations don't match the Standard Model's predictions at large scales, though they do on small scales. Other odd discoveries, such as a cold spot that is much larger than expected in one area of the sky, add to this picture.
"The extraordinary quality of Planck's portrait of the infant universe allows us to peel back its layers to the very foundations, revealing that our blueprint of the cosmos is far from complete," said Jean-Jacques Dordain, director general of the European Space Agency.
The disagreements with the Standard Model are actually good news to physicists, who know they need more than this theory alone to explain the whole of the universe anyway. For instance, the Standard Model does not include any explanation for dark matter or dark energy, the two largest constituents of the universe that so far remain mysterious.
"Our ultimate goal would be to construct a new model that predicts the anomalies and links them together," said George Efstathiou of England's University of Cambridge. "But these are early days; so far, we don’t know whether this is possible and what type of new physics might be needed. And that’s exciting."
Planck launched in 2009, and the new map is the culmination of the spacecraft's first 15.5 months of observations. | http://www.space.com/20328-universe-older-planck-spacecraft-map.html |
4.3125 |
Sumerian is a long-extinct language documented throughout the ancient Middle East, in particular in the south of modern Iraq. It is arguably the first language for which we have written evidence, the rival candidate being ancient Egyptian. This evidence is spread over more than 3,000 years, the first sources dating to the late fourth millennium BCE and the last to the first century AD. When Sumerian ceased to be spoken is difficult to determine; according to some estimates this took place during the early second millennium BCE. Afterwards it was an élite language, used only in royal, ritual and scholarly contexts, the first two of which often overlapped.
Sumerian is a language isolate, that is no languages related to it have so far been convincingly identified, although many of its grammatical features are attested in other living languages outside of the Indo-European family to which English belongs.
The other language for which we have extensive written evidence in the ancient Middle East is Akkadian. Being a Semitic language with modern counterparts, Akkadian is much better understood than Sumerian. Part of the evidence for both languages consists of lexical lists, some indicating how cuneiform signs were pronounced and others giving Akkadian equivalents to Sumerian words. Consequently our reconstruction of the sounds and meanings of different cuneiform signs has been much influenced by our understanding of Akkadian. The word Sumerian is itself an anglicisation of an Akkadian term, Šumeru, the language's users referring to it instead as Emegir, possibly meaning 'native tongue'.
Inevitably uncertainties remain in our reconstruction of Sumerian. A further problem in describing the language is that it varied across both space and time. The following account aims to be no more than an introduction to some of Sumerian's basic grammatical features, following the conventional hierarchy of its phonemes (smallest sound units), morphemes (smallest grammatical units), words, phrases (groups of related words without a finite verb) and clauses (groups of related words with a finite verb).
Sumerian is thought to have had eight vowel sounds: short and long a, e, i, and u. Vowel length is, however, not indicated in transliteration, that is the sign-by-sign representation of the cuneiform. (Throughout this introduction, italics are used to refer to Sumerian sounds, and bold to refer to signs.)
Fifteen consonants are usually recognised in transliterating the language: b, d, g, ĝ (as in sing), ḫ (as in loch), k, l, m, n, p, r, s, š (as in ship), t and z.
Two adjacent vowels typically contract. For various reasons, including instances in which such contraction does not occur, Sumerian has been argued to have further, weak, consonants, in particular h, y and ’ (a glottal stop). These are again not specified in transliteration.
In addition, an early stage of the language is thought to have had a further consonant whose identity remains uncertain. In the period of the corpus this consonant appears to have merged in some contexts with d and in others with r, or sometimes simply to have been dropped.
The other consonants found in the corpus (and in the sign list) occur in non-Sumerian names. All of these alphabetic representations need to be regarded as approximations.
The term morpheme is used to refer to the smallest grammatical units in a language. Most linguists distinguish between bases and affixes, an English example being runs in which run is a base and s is an affix. As this example indicates a base morpheme can correspond to the higher grammatical category of the word.
Many linguists also recognise a class of morphemes, referred to as clitics, which are indeterminate on a continuum between affixes and words, English examples being 'm in I'm and 's in the man's dog. The term bound morpheme is then used to include both affixes and clitics. One criterion for identifying a clitic is that it is indifferent to the class of the word to which it attaches, as in, for example, the man's dog , the man who was running's dog and the man who was shouting loudly's dog. A further characteristic of 's is that it is always phrase-final, that is it always occurs at the end of a noun phrase.
Unsurprisingly the grammatical function of an affix in one language can be performed by a clitic or word in another, some of the functions of the English preposition to, for example, being performed by a dative affix in Latin. In Sumerian the functional equivalents to prepositions (and postpositions in some other languages) are referred to as case markers. Like English 's they are always phrase-final and are indifferent to the class of the word to which they are attached. They are thus also classifiable as clitics, or more precisely as enclitics, indicating that they lean backwards onto their host, the word to which they are phonologically bound.
The plural of certain nouns in Sumerian is indicated by a morpheme, called a plural marker, which also behaves in a similar way. The principal other enclitics in the language are the determiners, morphemes which modify a noun and typically also have a corresponding pronominal form (an English example being that, which is a determiner in that book but a pronoun in give me that). In addition, as in English, the verb me 'to be' occurs both as a word and in enclitic forms.
In the transliteration conventions followed by the corpus most enclitics are preceded by a hyphen linking them to their host. The exceptions are the demonstrative and indefinite determiners, although bi, which functions both as a demonstrative ('this') and as a possessive ('its, their'), is preceded by a hyphen in both functions.
Because so many bound morphemes are enclitics, rather than affixes as in a language like Latin, morphological change in most Sumerian word classes is limited. Many word classes are morphologically invariant, and for nouns and adjectives variation is restricted to base-reduplication. Verbs are the striking exception: these can occur in highly complex affixed forms which also feature base-reduplication.
The minor word classes in Sumerian are numbers, conjunctions, interjections, adverbs, adjectives and circumpositions (functional equivalents to such complex English prepositions as behind and in front of), as well as related sets of pronouns (personal, demonstrative, indefinite, interrogative and reflexive) and determiners (possessive, demonstrative and an indefinite). Unlike English, Sumerian has no definite or indefinite article. The primary word classes are nouns and verbs.
Sumerian nouns can be subcategorised into two classes on the basis of gender, the distinction being between human nouns (referring to people and deities) and non-human nouns (referring to animals and inanimates). This is a semantically based distinction to which there are some socially conditioned exceptions, saĝ 'slave', for example, sometimes being construed as a non-human noun. In addition, animals and inanimates can be personified in literary compositions and thus construed as human nouns.
This gender distinction is only morphologically apparent in most parts of the language's third person pronominal system (first and second person reference necessarily being solely human). It is also syntactically apparent in restrictions on how the case markers and the plural marker are used.
Only a noun phrase whose head (grammatically dominant word) is a human noun can contain a plural marker, non-human nouns consequently being indeterminate in terms of number. However, this plural marker appears to have an individualising force and if reference is to a group of people or deities it is omitted, that is the noun is construed as if it were non-human. This is particularly the case for nouns with a group meaning, such as erin2 'troop'. Similarly non-human pronominal morphemes can be used to refer to groups of people or deities. (As far as we can judge some Sumerian signs were pronounced in the same way. The subscript numeral in the earlier example is a modern convention to associate a sound sequence with a particular sign. So, for example, du, a form of the verb 'to go', was written with the sign referred to as DU, while du3 'to erect' was written with the sign referred to as KAK.)
While the plural marker is restricted to human nouns, base-reduplication occurs in both classes of noun, appearing to express a form of totality (all).
Along with verbs, nouns are the principal open word class, that is the class of word most likely to form new members. New nouns are primarily formed by compounding, such as dub-sar 'scribe' (literally '(someone) tablet writing').
A particular characteristic of verbs is their ability to distinguish tense and/or aspect, that is to locate an action in time or to express its quality in some way. They can be subcategorised in terms of their semantics, syntactic requirements and regularity. In addition, they have different finite and non-finite forms.
Reconstructing the tense and/or aspect system of Sumerian verbs is difficult. Most scholars agree that the primary distinction is between a completed and an uncompleted action, but differ as to whether this reflects a distinction of tense (past versus non-past) or of aspect (completive versus incompletive). Many scholars have therefore adopted instead two terms used by Akkadian grammarians, ḫamṭu ('quick') and marû ('fat') respectively. For the sake of convenience, in this introduction the primary distinction is assumed to be aspectual.
In intransitive finite verbal forms, that is those without a direct object, completive aspect is unmarked while incompletive aspect is indicated by the suffix ed immediately after the base. In languages like Latin, a person-number-gender (PNG) suffix is used to express in pronominal form the subject of the verb (as in am+o 'I love', am+as 'you love' etc.). The same applies to Sumerian intransitive verbs, the PNG suffix following ed in the case of incompletive forms.
Sumerian extends this principle to also marking the direct object in transitive verbal forms. In such verbs it is the distribution and form of these subject and direct object morphemes (consisting of prefixes to the base, suffixes after the base and some circumfixes on either side of the base) that serve to express the aspectual distinction.
The difference between finite and non-finite verbal forms is partly morphological, the latter having far fewer morphemes than the former. Among the morphemes excluded from Sumerian non-finite forms are PNG affixes, the aspectual distinction being expressed instead with an aspect suffix. Non-finite forms are more nuanced and have stronger temporal connotations than finite forms, distinguishing between completive (marked with a and having past reference), habitual (zero-marked (Ø) and having present reference), and incompletive (marked with ed(a) and having non-past reference). The only other affix non-finite forms can have is a prefix nu expressing negation. The non-finite forms function as verbal adjectives (participles) and nouns (gerunds), and in non-finite relative and adverbial clauses (for example, of purpose and time).
In terms of semantics Sumerian verbs can be divided into two classes, stative verbs that refer to persisting states or situations, such as pel 'to be defiled', and dynamic verbs which refer to an action or process, such as ra 'to beat'. In the same way as English stative verbs are excluded from progressive aspect (I am knowing Sumerian, for example, being an unacceptable clause), Sumerian stative verbs are excluded from incompletive aspect. Consequently only context indicates whether they have past or non-past reference. In non-finite verbal forms the distribution of stative verbs appears to be syntactic, intransitive verbs occurring in completive aspect and transitive ones in habitual aspect. All verbs can have reduplicated bases: in stative instances this expresses intensity, and in dynamic instances iterativity.
However, this semantic distinction is less of one of classes and more one of usages, some verbs occurring in both classes. For example, pel can also be used dynamically to mean 'to make something be defiled', that is 'to defile' something, in which meaning it can occur in incompletive aspect.
In terms of syntactic requirements Sumerian verbs can be divided into four principal classes: intransitive verbs which require no object, such as uš2 'to die'; extended intransitive verbs which require a non-direct object, such as kur9 'to enter' into a place; transitive verbs which require only a direct object, such as du3 'to erect' something; and, finally, extended transitive verbs which require both a direct and a non-direct object, such as ĝar 'to place' something on something.
Again, however, this is more a distinction in usages rather than classes, some verbs occurring both intransitively and transitively. For example, uš2 can be used transitively with the meaning 'to kill' ('to make someone die').
This ability of a verb to determine its syntactic environment is sometimes referred to as its valency, and the non-direct object as a complement, as distinct from an adjunct, a phrase that can be easily deleted from a clause (in London being a complement in he lives in London but an adjunct in he bought a house in London).
A further syntactic class has only one member, me 'to be'. While the English verb to be has both a copular (linking) and a locational function, me has only a copular function, location being expressed instead by the verb ĝal2. Sumerian me conjugates like an intransitive stative verb and consequently is never found in incompletive forms. It differs, however, in that it always requires what is referred to as a predicative complement, such as a noun or an adjective, which refers back to the subject of the verb (as in he is the king and he is handsome). The copular verb occurs both as a word and as an enclitic attached to its predicative complement. It also has various abbreviated forms, such as the third-person negative nu, which are arguably also enclitics.
In terms of regularity Sumerian verbs can be divided into four classes: regular verbs (the majority) which have the same base regardless of aspect and/or number (such as ra 'to beat'); suppletive verbs which have a different base depending on aspect and/or number (such as singular tuš 'to sit' but plural durun); reduplicating verbs which have a (partly) reduplicated base in incompletive aspect (such as completive and habitual ĝar 'to place' but incompletive ĝa2-ĝa2); and a small class of extending verbs which have a base extended with a consonant in incompletive aspect (such as completive and habitual te 'to approach' but incompletive teĝ3).
Expansion of the class of verbs is primarily by multiword constructions in which a nominal element and a verb combine in a semantic unit. The nominal element is typically a noun, in particular a body-part, functioning as the verb's direct object. These multiword constructions can be divided into two types. In one the verb is dug4 'to say' or ak 'to do' and the semantic load is carried by the nominal element, an example being šu dug4 'to tend' (literally 'to express the hand', although the Sumerian word order is the reverse of the English). A wider range of verbs occurs in the other subclass and the semantic load is more evenly spread, an example being ĝiš tag 'to sacrifice' something (literally 'to touch wood' to something, again in the reverse order). The high incidence of multiword constructions in Sumerian means that it has many more non-direct objects than a language such as English.
At the level of the noun phrase Sumerian is left-headed, that is the noun which is the head of the phrase occurs at its beginning. In outline the sequence is noun, modifier(s), determiner, plural marker and then case marker. However, a few adjectives, kug 'shining' and gal 'big', sometimes precede the noun. The plural marker only occurs in a phrase with a human noun as its head. And the indefinite and most demonstrative determiners do not occur with modifiers.
Like English 's the case markers are always phrase-final. And in the same way that 's is sometimes called a genitive, they are referred to with the same types of label as are used for case affixes in other languages. The case markers can be divided into three groups. All except one typically indicate the syntactic role which a phrase plays in relation to a verb in a clause, consequently being described as adverbal. Those adverbal case-markers that are functionally equivalent to English prepositions can be termed non-core and those that mark the subject and any direct object of the verb as core. The final case marker, the genitive, is adnominal only, that is it functions only to indicate a relationship between noun phrases.
The non-core adverbal case markers include the ablative (ta 'from'), allative (še 'to(wards)'), comitative (da 'with'), dative (ra, 'to/for'; restricted to phrases with a human noun as the head), directive (e 'in(to) contact with'; restricted to phrases with a non-human noun as the head), and locative (a 'in(to)'; again restricted to phrases with a non-human noun as the head). These case markers occur at the end of phrases that can be complements or adjuncts, depending on the valency of the verb in the clause. A similar but more nuanced set of morphemes is incorporated in finite verbal forms.
Two further non-core adverbal case markers are only used to express adjuncts and have no equivalent morpheme in finite verbs, the adverbiative (eš 'in the manner of') and the similative (gin 'like').
The two final adverbal case markers have a more grammatical, core function. Most languages have a strategy for distinguishing the subject of a transitive verb from its direct object. In English this is done mainly by word order, although a case system still operates in pronouns (he hates him). This distinction made, different languages mark the subject of an intransitive verb in different ways. In English both subjects are marked in the same way (he runs). This is referred to as nominative-accusative alignment. However, in Sumerian noun phrases the subject of an intransitive verb is marked in the same way as the direct object of a transitive verb. This is referred to as ergative-absolutive alignment, the subject of a Sumerian transitive verb being marked by an ergative case marker e (morphologically the same as the directive from which it possibly derives), and the intransitive subject and transitive direct object being zero-marked with what is termed the absolutive case marker.
However, ergative-absolutive alignment applies only in noun phrases which have a noun as their head. Noun phrases with a personal pronoun as their head can be regarded as nominative-accusative in alignment because they have the same zero case marking regardless of the transitivity of the verb whose subject they are. Sumerian is thus one of many languages which have a syntactic split determined by the class of the word functioning as the head of the noun phrase.
The genitive case marker, ak 'of', is adnominal only and consequently also has no equivalent morpheme in finite verbal forms. Typically a genitive noun phrase occurs embedded within another noun phrase ending with an adverbal case marker:
As this example indicates there is not always a one-one correspondence in the writing system between bound morpheme and sign, the genitive ak here being written across two signs, and the locative a as part of one sign. (The abbreviations used in the morphemic analysis of the Sumerian are explained at the end of this introduction.)
However, in the same way as an English direct object can be shifted to the beginning of a clause and its original position marked by a pronoun (snakes, I hate them), so too can a Sumerian genitive noun phrase be front-shifted, a possessive determiner then marking its original position:
In this example a is the reduced form of the genitive that occurs when it is not followed by a vowel.
Because the genitive is the only solely adnominal case marker its semantic field covers much more than possession, also being used, for example, to express a location:
This example indicates another characteristic of the writing system, the g which precedes the genitive arguably having no phonological significance but simply being a graphic resumption of the preceding consonant.
At the level of the clause Sumerian is right-headed, that is the verb which is the head of the clause occurs at its end, the typical sequence being subject, object and then verb. However, because a finite verbal form includes PNG affixes expressing in pronominal form the core functions of the subject and any direct object of a verb, a Sumerian clause can consist of only a finite verb.
In addition to the core PNG affixes, a finite verbal form can include further prefixes expressing a complement or an adjunct of a verb in pronominal form. These morphemes consist of a set of non-core PNG prefixes and a set of 'case' prefixes that are related to the non-core case markers. Just as a case marker follows a noun in a noun phrase, so the 'case' prefixes are postpositional to the PNG prefix. More than one 'case' prefix can occur in a finite verbal form, but only the first can be preceded by a PNG prefix. There are, however, some restrictions on the occurrence of the non-core PNG prefix:
In this example a further prefix, ba (whose functions are described in more detail below), excludes the presence of a non-human PNG prefix before the ablative ra (its form after a vowel, being ta after a consonant).
A clause of this type can be expanded to include noun phrases:
In an expanded clause the core PNG affixes are always retained in the verb. The non-core PNG prefix and 'case' prefixes are, however, sometimes omitted.
The further prefixes possible in a finite verbal form include middle ba, ventive m(u) and a vowel-initial prefix. In outline a finite verb can consequently have the following structure: vowel, ventive, middle, non-core PNG, 'case(s)', core PNG, (reduplicated) base, aspect, core PNG.
The middle prefix ba is a single morpheme which is ambiguous with a sequence of two morphemes from which it arguably derives, that is the non-core non-human PNG prefix b ('it, them') and the dative prefix a ('to'). If the argument is correct, it helps to explain the absence of the non-core non-human PNG prefix in the previous examples, the etymology of the middle prefix excluding repetition of one of the prefixes from which it derives. Middle ba can, however, be followed by a non-core human PNG prefix.
The middle prefix is restricted to completive aspect. Its range of functions, some of which remain unclear, is broader than the conventional term middle suggests.
With stative verbs it has an inchoative function, that is it expresses the coming into existence of a state: ba-an-tuku 'he married her' (literally 'he came to have her'). In this and the following examples both the third person transitive direct object and the third person intransitive subject are zero-marked in the verb, a further example of ergative-absolutive alignment in Sumerian.
With dynamic verbs ba is used in particular when the subject of the verb is affected by the action of the verb. Consequently it functions as an invariant reflexive indicating that the endpoint of the verb's action is the same entity as the subject of the verb: ba-an-zuḫ 'he stole it (for himself)'. By extension it was also used to form the equivalent to the English middle voice in which the subject of the verb no longer has an agentive role but continues to be affected by the action of the verb: ud ba-bur2 'the weather improved'. Expressing such agent-less or spontaneous events is often referred to as an anticausative function (although in the external world this type of event obviously does have a cause). The functional range of ba was extended still further to include forming the equivalent to the English passive, that is to non-spontaneous events which consequently do have an implied agent: ba-ḫul 'it was destroyed'.
The ventive (or cislocative) prefix m(u) has a more restricted range of functions. It can be regarded as orienting a verb towards the speaker or narrator and can occur in both completive and incompletive aspects.
The vowel-initial prefixes are i and a(l). The former has no semantic function but is used with dynamic verbs before two consonants, and before an otherwise unprefixed verbal base or core PNG prefix; a(l) performs a similar function with stative verbs. However, a(l) is also used with dynamic verbs, in which occurrences it has a semantic function. In completive aspect this includes expressing a statal passive: al-du3 'it is built'. In incompletive aspect its functions possibly include expressing a habitual action:
However, this still leaves many incompletive instances of a(l) unaccounted for.
Clauses can be analysed in terms of their status, the basic distinction being between main clauses which can stand on their own and subordinate clauses which can't, and also in terms of their contribution to what is termed discourse function, that is, for example, whether they make a statement or express a question.
Most Sumerian clauses simply make a statement and, like the English indicative, are zero-marked. However, the same applies to closed questions, that is ones requiring only a yes or no answer, the implication being that they were signalled with a change in intonation (compare you're going out?). Open questions are signalled by an interrogative pronoun (such as a-na 'what?') or adverb (such as me-še3 'where?').
More complex types of discourse function, such as commands, prohibitions and wishes, are expressed primarily by morphological changes to the finite verb. In most cases a verb-initial prefix is added. However, for second-person positive commands the imperative is used in which what are prefixes in other verbal forms are instead suffixes (compare dites-le-moi 'tell it to me'). This is regarded as untypical behaviour for affixes and raises some doubt about where on the continuum between clitics and affixes these bound morphemes lie. Another characteristic of the imperative is that it deletes both the singular intransitive and transitive subject. It can therefore be regarded as a further example of nominative-accusative alignment in Sumerian.
These verb-initial prefixes themselves lie on a different type of continuum, one between signalling a change in verbal mood and connecting clauses. In some contexts the prefix ḫu, for example, has a clear modal function:
In other contexts it combines modality with clause-connection, forming a type of conditional, and subordinate, clause (compare the English conditional subjunctive should he give it to him, he will suffer). Other verb-initial prefixes have a more straightforward connective function, u, for example, indicating that the action expressed by its verb precedes the action expressed by the verb that follows, and thus being translatable with the English subordinating conjunction after.
Sumerian also has the three more conventional types of subordinate clause: relative clauses which modify a noun and thus occur within the noun phrase; nominal clauses which can function as, for example, the subject of a verb; and adjunct (or adverbial) clauses which are subordinate to a main clause and have, for example, a causal or a temporal function.
In English the first two of these types can be signalled by a subordinator, that , and the third by a subordinating conjunction, such as before. The functional equivalent in Sumerian to the subordinator is a verb-final suffix a. Analysis of adjunct clauses is, however, less straightforward. Sumerian has very few simple subordinating conjunctions. More often a complex construction is used which begins with a noun and ends with a case marker, the suffix a again being bound to the verb. For example, eĝer …-ta, literally 'from the back that something had happened', can be translated as after. A less literal analysis is that in such constructions the noun has been bleached of its lexical content and combines with the case marker to form a complex subordinating conjunction.
© Copyright 2003, 2004, 2005, 2006 The ETCSL project, Faculty of Oriental Studies, University of Oxford | http://etcsl.orinst.ox.ac.uk/edition2/language.php |
4.03125 | The precise Mach number at which a craft can be said to be flying at hypersonic speed varies, since individual physical changes in the airflow (like molecular dissociation and ionization) occur at different speeds; these effects collectively become important around Mach 5. The hypersonic regime is often alternatively defined as speeds where ramjets do not produce net thrust.
Characteristics of flow
While the definition of hypersonic flow can be quite vague and is generally debatable (especially due to the lack of discontinuity between supersonic and hypersonic flows), a hypersonic flow may be characterized by certain physical phenomena that can no longer be analytically discounted as in supersonic flow. The peculiarities of hypersonic flows are as follows:
- Shock layer
- Aerodynamic heating
- Entropy layer
- Real gas effects
- Low density effects
- Independence of aerodynamic coefficients with Mach number.
Small shock stand-off distance
As a body's Mach number increases, the density behind the shock generated by the body also increases, which corresponds to a decrease in volume behind the shock wave due to conservation of mass. Consequently, the distance between the shock and the body decreases at higher Mach numbers.
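To put a number on this, the density ratio across a normal shock follows from the Rankine-Hugoniot relations as rho2/rho1 = (gamma + 1)M^2 / ((gamma - 1)M^2 + 2), which tends to the finite limit (gamma + 1)/(gamma - 1), i.e. 6 for air with gamma = 1.4. The following minimal sketch (not part of the original article, and assuming a calorically perfect gas) evaluates that relation; real-gas effects at very high Mach numbers would push the ratio higher still.

```python
GAMMA = 1.4  # ratio of specific heats for air (calorically perfect gas assumption)

def density_ratio(mach, gamma=GAMMA):
    """Density ratio rho2/rho1 across a normal shock (Rankine-Hugoniot)."""
    return (gamma + 1.0) * mach**2 / ((gamma - 1.0) * mach**2 + 2.0)

for mach in (2, 5, 10, 25, 100):
    print(f"M = {mach:>3}: rho2/rho1 = {density_ratio(mach):.3f}")

# As M grows the ratio saturates near (gamma + 1)/(gamma - 1) = 6 for air,
# which is why the shock sits so close to the body at high Mach numbers.
print(f"limit as M -> infinity: {(GAMMA + 1.0) / (GAMMA - 1.0):.1f}")
```

The saturation of the compression ratio is what keeps the shock layer thin: the gas between shock and surface needs little volume no matter how fast the body flies.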
A portion of the large kinetic energy associated with flow at high Mach numbers transforms into internal energy in the fluid due to viscous effects. The increase in internal energy is realized as an increase in temperature. Since the pressure gradient normal to the flow within a boundary layer is approximately zero for low to moderate hypersonic Mach numbers, the increase of temperature through the boundary layer coincides with a decrease in density. This causes the bottom of the boundary layer to expand, so that the boundary layer over the body grows thicker and can often merge with the shock wave near the body leading edge.
High temperature flow
High temperatures, a manifestation of viscous dissipation, cause non-equilibrium chemical flow properties such as vibrational excitation, dissociation, and ionization of molecules, resulting in convective and radiative heat flux.
Classification of Mach regimes
Although "subsonic" and "supersonic" usually refer to speeds below and above the local speed of sound respectively, aerodynamicists often use these terms to refer to particular ranges of Mach values. This occurs because a "transonic regime" exists around M=1 where approximations of the Navier–Stokes equations used for subsonic design no longer apply, partly because the flow locally exceeds M=1 even when the freestream Mach number is below this value.
The "supersonic regime" usually refers to the set of Mach numbers for which linearised theory may be used; for example, where the (air) flow is not chemically reacting and where heat transfer between air and vehicle may be reasonably neglected in calculations.
Generally, NASA defines "high" hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Among the aircraft operating in this regime are the Space Shuttle and (theoretically) various developing spaceplanes.
In the following table, the "regimes" or "ranges of Mach values" are referenced instead of the usual meanings of "subsonic" and "supersonic".
| Regime | Mach | mph | km/h | m/s | General plane characteristics |
|---|---|---|---|---|---|
| Subsonic | <0.8 | <610 | <980 | <270 | Most often propeller-driven and commercial turbofan aircraft with high aspect-ratio (slender) wings, and rounded features like the nose and leading edges. |
| Transonic | 0.8-1.2 | 610-915 | 980-1,470 | 270-410 | Transonic aircraft nearly always have swept wings that delay drag-divergence, and often feature designs adhering to the principles of the Whitcomb Area rule. |
| Supersonic | 1.2-5.0 | 915-3,840 | 1,470-6,150 | 410-1,710 | Aircraft designed to fly at supersonic speeds show large differences in their aerodynamic design because of the radical differences in the behaviour of fluid flows above Mach 1. Sharp edges, thin airfoil sections, and all-moving tailplanes/canards are common. Modern combat aircraft must compromise in order to maintain low-speed handling; "true" supersonic designs include the F-104 Starfighter and BAC/Aérospatiale Concorde. |
| Hypersonic | 5.0-10.0 | 3,840-7,680 | 6,150-12,300 | 1,710-3,415 | Cooled nickel-titanium skin; highly integrated (due to domination of interference effects: non-linear behaviour means that superposition of results for separate components is invalid), small wings; see X-51A Waverider and HyperSoar. |
| High-hypersonic | 10.0-25.0 | 7,680-16,250 | 12,300-30,740 | 3,415-8,465 | Thermal control becomes a dominant design consideration. Structure must either be designed to operate hot, or be protected by special silicate tiles or similar. Chemically reacting flow can also cause corrosion of the vehicle's skin, with free-atomic oxygen featuring in very high-speed flows. Hypersonic designs are often forced into blunt configurations because aerodynamic heating rises with a reduced radius of curvature. |
| Re-entry speeds | >25.0 | >16,250 | >30,740 | >8,465 | Ablative heat shield; small or no wings; blunt shape. |
The categorization of airflow relies on a number of similarity parameters, which allow the simplification of a nearly infinite number of test cases into groups of similarity. For transonic and compressible flow, the Mach and Reynolds numbers alone allow good categorization of many flow cases.
Hypersonic flows, however, require other similarity parameters. First, the analytic equations for the oblique shock angle become nearly independent of Mach number at high (~>10) Mach numbers. Second, the formation of strong shocks around aerodynamic bodies means that the freestream Reynolds number is less useful as an estimate of the behavior of the boundary layer over a body (although it is still important). Finally, the increased temperature of hypersonic flows mean that real gas effects become important. For this reason, research in hypersonics is often referred to as aerothermodynamics, rather than aerodynamics.
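The near-independence of the oblique shock angle from the Mach number can be checked directly from the classical theta-beta-M relation. The sketch below is illustrative only (it is not from the article and assumes a calorically perfect gas with gamma = 1.4 and an arbitrary 15-degree wedge); the weak-shock angle it computes changes very little beyond roughly Mach 10.

```python
import math

GAMMA = 1.4  # calorically perfect air (assumption)

def deflection_angle(beta, mach, gamma=GAMMA):
    """Flow deflection angle theta (rad) from the theta-beta-M relation."""
    return math.atan(
        2.0 / math.tan(beta)
        * (mach**2 * math.sin(beta) ** 2 - 1.0)
        / (mach**2 * (gamma + math.cos(2.0 * beta)) + 2.0)
    )

def weak_shock_angle(theta, mach):
    """Weak-solution shock angle beta (rad) for wedge angle theta, by bisection
    between the Mach angle (deflection -> 0) and a point below max deflection."""
    lo = math.asin(1.0 / mach) + 1e-6
    hi = math.radians(64.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if deflection_angle(mid, mach) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = math.radians(15.0)  # hypothetical wedge half-angle
for mach in (5, 10, 20, 40):
    beta = weak_shock_angle(theta, mach)
    print(f"M = {mach:>2}: shock angle beta = {math.degrees(beta):5.2f} deg")
```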
The introduction of real gas effects means that more variables are required to describe the full state of a gas. Whereas a stationary gas can be described by three variables (pressure, temperature, adiabatic index), and a moving gas by four (flow velocity), a hot gas in chemical equilibrium also requires state equations for the chemical components of the gas, and a gas in nonequilibrium solves those state equations using time as an extra variable. This means that for a nonequilibrium flow, something between 10 and 100 variables may be required to describe the state of the gas at any given time. Additionally, rarefied hypersonic flows (usually defined as those with a Knudsen number above 0.1) do not follow the Navier–Stokes equations.
Hypersonic flows are typically categorized by their total energy, expressed as total enthalpy (MJ/kg), total pressure (kPa-MPa), stagnation pressure (kPa-MPa), stagnation temperature (K), or flow velocity (km/s).
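For orientation, the total enthalpy of a stream is h0 = cp*T + V^2/2 for a calorically perfect gas. The sketch below uses assumed high-altitude conditions (not figures from the article) to show why flows at several km/s land in the MJ/kg range; the ideal-gas stagnation temperature it also prints is a deliberate overestimate, since the real-gas processes described above (vibration, dissociation, ionization) absorb much of that energy.

```python
CP_AIR = 1004.5   # J/(kg K); calorically perfect air (assumption)
T_STATIC = 220.0  # K; assumed high-altitude static temperature

def total_enthalpy(temperature_k, velocity_ms):
    """Total enthalpy h0 = cp*T + V^2/2 for a calorically perfect gas."""
    return CP_AIR * temperature_k + 0.5 * velocity_ms ** 2

for v_kms in (1, 3, 5, 8):
    h0 = total_enthalpy(T_STATIC, v_kms * 1000.0)
    t0 = h0 / CP_AIR  # ideal-gas stagnation temperature (no dissociation)
    print(f"V = {v_kms} km/s: h0 = {h0 / 1e6:5.2f} MJ/kg, ideal-gas T0 = {t0:7.0f} K")
```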
Hypersonic flow can be approximately separated into a number of regimes. The selection of these regimes is rough, due to the blurring of the boundaries where a particular effect can be found.
In this regime, the gas can be regarded as an ideal gas. Flow in this regime is still Mach number dependent. Simulations start to depend on the use of a constant-temperature wall, rather than the adiabatic wall typically used at lower speeds. The lower border of this region is around Mach 5, where ramjets become inefficient, and the upper border around Mach 10-12.
Two-temperature ideal gas
This is a subset of the perfect gas regime, where the gas can be considered chemically perfect, but the rotational and vibrational temperatures of the gas must be considered separately, leading to two-temperature models. See particularly the modeling of supersonic nozzles, where vibrational freezing becomes important.
In this regime, diatomic or polyatomic gases (the gases found in most atmospheres) begin to dissociate as they come into contact with the bow shock generated by the body. Surface catalysis plays a role in the calculation of surface heating, meaning that the type of surface material also has an effect on the flow. The lower border of this regime is where any component of a gas mixture first begins to dissociate in the stagnation point of a flow (which for nitrogen is around 2000 K). At the upper border of this regime, the effects of ionization start to have an effect on the flow.
In this regime the ionized electron population of the stagnated flow becomes significant, and the electrons must be modeled separately. Often the electron temperature is handled separately from the temperature of the remaining gas components. This region occurs for freestream flow velocities around 10–12 km/s. Gases in this region are modeled as non-radiating plasmas.
Above around 12 km/s, the heat transfer to a vehicle changes from being conductively dominated to radiatively dominated. The modeling of gases in this regime is split into two classes:
- Optically thin: where the gas does not re-absorb radiation emitted from other parts of the gas
- Optically thick: where the radiation must be considered a separate source of energy.
The modeling of optically thick gases is extremely difficult, since, due to the calculation of the radiation at each point, the computation load theoretically expands exponentially as the number of points considered increases.
- Supersonic transport
- Lifting body
- Atmospheric entry
- Hypersonic flight
- DARPA Falcon Project
- Reaction Engines Skylon, Reaction Engines A2 (design studies)
- HyperSoar (concept)
- X-15
- X-51A Waverider
- X-20 Dyna-Soar, Rockwell X-30 (cancelled)
- Avatar RLV
- Lockheed Martin SR-72 (planned)
- WU-14 hypersonic glide vehicle (under development)
- Shaurya (missile) ballistic missile (entered production)
- BrahMos-II cruise missile (under development)
- 9K720 Iskander short-range ballistic missile (Russia; in service)
- Other flow regimes
4.03125 | With more than 40 million people living under exceptional drought conditions in East Africa, the ability to make accurate predictions of drought has never been more important. In the aftermath of widespread famine and a humanitarian crisis caused by the 2010-2011 drought in the Horn of Africa -- possibly the worst drought in 60 years -- researchers are striving to determine whether drying trends will continue.
While it is clear that El Niño can affect precipitation in this region of East Africa, very little is known about the drivers of long-term shifts in rainfall. However, new research described in the journal Nature helps explain the mechanisms at work behind historical patterns of aridity in Eastern Africa over many decades, and the findings may help improve future predictions of drought and food security in the region.
"The problem is, instrumental records of temperature and rainfall, especially in East Africa, don't go far enough in time to study climate variability over decades or more, since they are generally limited to the 20th century," explains first author Jessica Tierney, a geologist at the Woods Hole Oceanographic Institution (WHOI). Tierney and her colleagues at WHOI and the Lamont-Doherty Earth Observatory of Columbia University used what is known as the paleoclimate record, which provides information on climate in the geologic past, to study East African climate change over a span of 700 years.
The paleoclimate record in East Africa consists of indicators of moisture balance -- including pollen, water isotopes, charcoal, and evidence for run-off events -- measured in lake sediment cores. Tierney and her colleagues synthesized these data, revealing a clear pattern wherein the easternmost sector of East Africa was relatively dry in medieval times (from 1300 to 1400 a.d.), wet during the "Little Ice Age" from approximately 1600 to 1800 a.d., and then drier again toward the present time.
Climate model simulations analyzed as part of the study revealed that the relationship between sea surface temperatures and atmospheric convection in the Indian Ocean changes rainfall in East Africa. Specifically, wet conditions in coastal East Africa are associated with cool sea surface temperatures in the eastern Indian Ocean and warm sea surface temperatures in the western Indian Ocean, which cause ascending atmospheric circulation over East Africa and enhanced rainfall. The opposite situation -- cold sea surface temperatures in the western Indian Ocean and warmer in the East -- causes drought. Such variations in sea-surface temperatures likely caused the historical fluctuations in rainfall seen in the paleorecord.
The central role of the Indian Ocean in long-term climate change in the region was a surprise. "While the Indian Ocean has long been thought of as a 'little brother' to the Pacific, it is clear that it is in charge when it comes to these decades-long changes in precipitation in East Africa," says Tierney.
Many questions remain, though. "We still don't understand exactly what causes the changes in sea surface temperatures in the Indian Ocean and the relationship between those changes and global changes in climate, like the cooling that occurred during the Little Ice Age or the global warming that is occurring now," says Tierney. "We'll need to do some more experiments with climate models to understand that better."
In the past decade, the easternmost region of Africa has gotten drier, yet general circulation climate models predict that the region will become wetter in response to global warming. "Given the geopolitical significance of the region, it is very important to understand whether drying trends will continue, in which case the models will need to be revised, or if the models will eventually prove correct in their projections of increased precipitation in East Africa," says co-author Jason Smerdon, of the Lamont-Doherty Earth Observatory.
While it's currently unclear which theory is correct, the discovery of the importance of the Indian Ocean may help solve the mystery. "In terms of forecasting long-term patterns in drought and food security, we would recommend that researchers make use of patterns of sea surface temperature changes in the Indian Ocean rather than just looking at the shorter term El Niño events or the Pacific Ocean," says Tierney.
In addition, Tierney and her colleagues lack paleoclimate data from the region that is most directly affected by the Indian Ocean -- the Horn of Africa. The paleoclimate data featured in this study are limited to more equatorial and interior regions of East Africa. With support from National Science Foundation, Tierney and her colleagues are now developing a new record of both aridity and sea surface temperatures from the Gulf of Aden, at a site close to the Horn.
"This will give us the best picture of what's happened to climate in the Horn, and in fact, it will be the first record of paleoclimate in the Horn that covers the last few millennia in detail. We're working on those analyses now and should have results in the next year or so," says Tierney.
This research was based on work supported by the National Science Foundation and the National Oceanic and Atmospheric Administration (NOAA).
| http://www.sciencedaily.com/releases/2013/01/130118145354.htm |
4.3125 | Building on the first two lessons in the series, this lesson deals with savings and interest.
When choosing a place to put their money, people consider how safe their money will be, how easy it is to access, and whether it will earn more money. Students explore how well different savings places achieve these objectives. Students learn that people who don’t want to carry money with them or keep it at home often choose to put their money in a savings account at a bank or credit union. These financial institutions protect money from theft and other losses. They also pay interest on money deposited. This lesson works well as a follow-up to the ABCs of Saving.
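For teachers who want a concrete number behind "pay interest on money deposited", compound growth of a deposit takes only a line of arithmetic. The sketch below uses invented figures (a $100 deposit at a 2% annual rate), not amounts from the lesson itself.

```python
def balance_after(years, principal, annual_rate):
    """Balance of a savings account with interest compounded once per year."""
    return principal * (1.0 + annual_rate) ** years

deposit = 100.00  # hypothetical deposit, in dollars
rate = 0.02       # hypothetical 2% annual interest rate

for year in (1, 5, 10):
    print(f"After {year:>2} year(s): ${balance_after(year, deposit, rate):.2f}")
```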
Students will demonstrate understanding of the processes associated with banking by role- playing as customers, tellers, and guards.
The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information.
This interdisciplinary curriculum guide helps teachers introduce their students to economics using popular children's stories.
7 out of 29 lessons from this publication relate to this EconEdLink lesson.
This publication contains 15 lessons that complement the 3-5 Student Workbook. Specific to grades 3-5 are a variety of activities, including a guessing game using clues to identify various occupations; the story Urban Mouse and Rural Mouse which teaches students about entrepreneurs and opportunity recognition; and a role-playing activity in which students learn which method of payment is appropriate in a variety of situations.
6 out of 17 lessons from this publication relate to this EconEdLink lesson.
This publication helps elementary students analyze energy and environment issues from an economics perspective.
2 out of 10 lessons from this publication relate to this EconEdLink lesson. | http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=381 |
4.0625 |
The Drake Passage (Spanish: Pasaje de Drake) or Mar de Hoces—Sea of Hoces—is the body of water between South America's Cape Horn and the South Shetland Islands of Antarctica. It connects the southwestern part of the Atlantic Ocean (Scotia Sea) with the southeastern part of the Pacific Ocean and extends into the Southern Ocean. The passage receives its English-language name from the 16th-century English privateer Sir Francis Drake. Drake's only remaining ship, after having passed through the Strait of Magellan, was blown far south in September 1578. This incident implied an open connection between the Atlantic and Pacific oceans.
Half a century earlier, after a gale had pushed them south from the entrance of the Strait of Magellan, the crew of the Spanish navigator Francisco de Hoces thought they saw a land's end and possibly inferred this passage in 1525. For this reason, some Spanish and Latin American historians and sources call it Mar de Hoces after Francisco de Hoces.
The first recorded voyage through the passage was that of Eendracht, captained by the Dutch navigator Willem Schouten in 1616, naming Cape Horn in the process.
The 800-kilometre (500 mi) wide passage between Cape Horn and Livingston Island is the shortest crossing from Antarctica to any other landmass. The boundary between the Atlantic and Pacific Oceans is sometimes taken to be a line drawn from Cape Horn to Snow Island (130 kilometres (81 mi) north of mainland Antarctica). Alternatively, the meridian that passes through Cape Horn may be taken as the boundary. Both boundaries lie entirely within the Drake Passage.
The other two passages around the extreme southern part of South America (though not going around Cape Horn as such), Magellan Strait and Beagle Channel, are very narrow, leaving little room for a ship. They can also become icebound, and sometimes the wind blows so strongly no sailing vessel can make headway against it. Hence most sailing ships preferred the Drake Passage, which is open water for hundreds of miles, despite very rough conditions. The small Diego Ramírez Islands lie about 100 kilometres (62 mi) south-southwest of Cape Horn.
There is no significant land anywhere around the world at the latitudes of Drake Passage, which is important to the unimpeded flow of the Antarctic Circumpolar Current which carries a huge volume of water (about 600 times the flow of the Amazon River) through the Passage and around Antarctica.
The passage is known to have been closed until around 41 million years ago according to a chemical study of fish teeth found in oceanic sedimentary rock. Before the passage opened, the Atlantic and Pacific Oceans were separated entirely with Antarctica being much warmer and having no ice cap. The joining of the two great oceans started the Antarctic Circumpolar Current and cooled the continent significantly.
- Francis Drake
- García de Nodal
- Strait of Magellan
- Beagle Channel
- Bransfield Strait
- Sars Bank
- Elizabeth Island
Media related to Drake Passage at Wikimedia Commons
- National Oceanography Centre, Southampton page of the important and complex bathymetry of the Passage
- A NASA image of an eddy in the Passage
- Larger-scale images of the passage from the US Navy (Rain, ice edge and wind images) | https://en.wikipedia.org/wiki/Drake_Passage |
4.21875 | Diversity: “having different forms, types, ideas” or “having people who are different races or who have different cultures in a group.”
Sending your child from home to school opens them up to a variety of new ideas, different cultures, and peers of different races. One way to encourage, support, and prepare your child for a transition that instills openness and acceptance of differences in the classroom is by using learning resources that teach diversity at home. In honor of Martin Luther King Jr. Day, we have compiled a list of our favorite toys and books to help you promote diversity in your children from an early age:
Especially for visual learners, one of the best ways to teach diversity is through being inclusive in toy selection. These huggable fabric dolls offer a variety of different races to help celebrate diversity!
Teach children to become familiar with their emotions while instilling tolerance, empathy, and racial diversity. You will have fun asking your children to describe what they see as you discuss different features of the board characters.
Many times, teaching diversity entails not only recognizing it in the community but also learning social-emotional skills to respond to those differences. The Learning to Get Along Book Set presents real-life situations and concrete examples for you to read aloud and discuss ideas with your children on how to properly respond to each.
One fun way to embrace diversity is by integrating it into kitchen play! Teach dramatic play and diversity at the same time with the International Food Collection as you introduce foods from around the world! It’s a great conversation starter about different cultures.
Embracing children with special needs and varying abilities is an important part of acceptance in your child’s peer group. These puzzles are an engaging visual aid for showing real images of children with differing abilities. Use them as the perfect opportunity to discuss diversity with your kids!
We’re curious! How do you talk to your children about discovering diversity in the classroom and community? Let us know by commenting below with learning tools and questions your children ask about diversity. | http://blog.kaplantoys.com/tag/toys/ |
4.09375 | Saturn couldn’t be more different from Earth; it’s mostly made of hydrogen and helium and has nearly 100 times more mass. And those rings…
But Saturn’s axis is tilted, just like Earth. While Earth’s axis is tilted at an angle of 23.4°, Saturn’s tilt is 26.7°. That’s pretty close.
And just like Earth, Saturn’s axial tilt gives the planet seasons. In fact, we can see Saturn’s tilt by the position of the rings. When Saturn’s northern hemisphere is experiencing summer, we can see the rings at their widest point. And then, as Saturn works its way through its 30-year orbit around the Sun, the angle to the rings decreases until they’re almost invisible – just a line through the planet.
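As a rough illustration of that 30-year cycle, the ring opening angle can be approximated by projecting Saturn's 26.7-degree tilt around one orbit. The sketch below is a simplification assumed for illustration only (it ignores Earth's own motion and Saturn's orbital eccentricity); it is not a calculation from the article.

```python
import math

TILT_DEG = 26.7        # Saturn's axial tilt, degrees
ORBITAL_PERIOD = 29.5  # years for one orbit of the Sun

def ring_opening_angle(years_since_equinox):
    """Approximate ring opening angle in degrees: zero when the rings are
    edge-on (equinox), maximal at the solstices."""
    phase = 2.0 * math.pi * years_since_equinox / ORBITAL_PERIOD
    return TILT_DEG * math.sin(phase)

for t in range(0, 30, 5):
    print(f"year {t:>2}: ring opening ~ {ring_opening_angle(t):+6.1f} deg")
```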
The changing seasons on Saturn also affect the planet’s weather patterns. NASA’s Voyager spacecraft originally clocked wind speeds near Saturn’s equator at nearly 1,500 km/h. But when Cassini showed up 15 years later, they’d slowed down to only 1,100 km/h. | http://www.universetoday.com/15399/tilt-of-saturn/ |
4.25 | West Nile Virus (WNV) is a mosquito-borne disease. It was first discovered in the West Nile District of Uganda in 1937. According to reports from the Centers for Disease Control and Prevention, WNV has been found in Africa, the Middle East, Europe, Oceania, west and central Asia, and North America. Its first emergence in North America began in the New York City metropolitan area in 1999. It is a seasonal epidemic in North America that normally erupts in the summer and continues into the fall, presenting a threat to environmental health. Its natural cycle runs bird-mosquito-bird, with spillover to mammals. Mosquitoes, in particular the species Culex pipiens, become infected when they feed on infected birds. Infected mosquitoes then spread WNV to other birds and to mammals, including humans, when they bite. In humans and horses, fatal encephalitis is the most serious manifestation of WNV infection. WNV can also cause mortality in some infected birds.
The spread of WNV has shown unique distribution patterns in different regions [1–5]. Environmental determinants, such as the presence of suitable habitats, temperatures, and climates, play important roles in WNV dissemination in North America [6, 7]. Mosquito Culex species appear to prefer some land use and land cover (LULC) types (e.g., wetlands and specific grasslands) over others (e.g., exposed dry soils). Mosquitoes at canopy sites are believed to carry more infections than those in subterranean areas and on the ground. Wetlands and stormwater ponds, especially those under heavy shade, provide an ideal environment for mosquito settlement. Ponds with plenty of sunshine and little vegetation are believed to be a poor environment for mosquito development. WNV dissemination was found to be significantly related to average summer temperatures from 2002 to 2004 in the USA.
Field and laboratory records of entomological and ecological observations have been used to examine how natural environmental constraints, such as water sources and climatic parameters, contribute to the transmission of WNV [11, 12, 9]. Dohm and Turell (2001) found that WNV infection rates in mosquitoes are lower when the vectors are maintained at cooler temperatures than at warmer ones. The infection rates start to increase after one day of incubation at 26°C. WNV dissemination begins more rapidly in mosquitoes kept at higher temperatures than in mosquitoes maintained at cooler temperatures. Gingrich et al. (2006) detected a bimodal seasonal distribution of mosquitoes in Delaware in 2004, with peaks in early and late summer, and found that mosquitoes are attracted to ponds with heavy shade and low slopes.
Remote sensing (RS) and geographic information system (GIS) technologies have been extensively applied in public health studies and related issues such as urban environmental analysis [13–15, 6, 5, 16–20]. These technologies have been applied to diverse epidemiological problems, such as parasitic diseases and schistosomiasis, with RS and GIS serving as primary sources of information for studying epidemics. The availability of multi-temporal satellite imagery effectively supports the study of epidemiology. Ruiz et al. (2004), using GIS technologies and multi-step discriminant analysis, found that certain environmental and social factors contributed to WNV dissemination in Chicago in 2002. Those factors included distance to a WNV-positive dead bird specimen, the age of housing, the intensity of mosquito abatement, the presence of vegetation, geological factors, and demographic factors such as population age, income, and race. Multiple mapping techniques were compared for WNV dissemination in the continental USA. The results indicated that each mapping technique emphasized certain WNV risk factor(s) due to differences in modeling assumptions, statistical treatment, and error determination; no single model performed better than all the others. Cooke III et al. (2006) estimated WNV risk in the state of Mississippi based on human and bird cases recorded in 2002 and 2003 with the creation of avian GIS models. The results indicate that high road density, low stream density, and gentle slopes contributed to the dissemination of WNV in Mississippi. GIS and space-time statistics were applied in a risk analysis of the 2002 equine WNV epidemic in northeastern Texas. A total of nine non-random spatio-temporal equine case aggregations and five high-risk areas were detected in the study area. Ruiz et al. (2007) further examined the association of WNV infection and landscapes in Chicago and Detroit using GIS and statistical analysis. Their results show that higher WNV case rates occurred in the inner suburbs, where housing was around 48–68 years old, with moderate vegetation cover and population density.
Many valuable studies have documented the effects of environmental and socioeconomic factors on the spread of WNV. Despite this, a long-term study of these effects using data with high temporal resolution has yet to be undertaken. This study develops a multi-temporal analysis of the relationship between environmental variables and WNV dissemination using an integration of remote sensing, geographic information systems (GIS), and statistical techniques. The specific research objectives are to identify the spatial patterns of WNV outbreaks in different years and seasons in the city of Indianapolis, USA, to examine the relationships between WNV dissemination and environmental variables, and to investigate the temporal variations of those relationships. Through spatio-temporal analyses, it is possible to identify and explain the temporal outbreaks of WNV and the high-risk areas in the study area.
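As a purely illustrative sketch of the kind of statistical step such studies describe (this is not code from the study, and all variable names and numbers are hypothetical), WNV presence might be related to a few environmental variables with a logistic regression:

```python
# Hypothetical illustration: relate WNV presence to environmental variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
summer_temp_c = rng.normal(24, 2, n)       # mean summer temperature per tract
wetland_frac = rng.uniform(0, 0.4, n)      # fraction of wetland land cover
housing_age_yr = rng.normal(50, 15, n)     # mean housing age

# Synthetic outcome: warmer, wetter, older-housing tracts carry more risk.
logit = 0.5 * (summer_temp_c - 24) + 4 * wetland_frac + 0.03 * (housing_age_yr - 50)
wnv_present = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([summer_temp_c, wetland_frac, housing_age_yr])
model = LogisticRegression().fit(X, wnv_present)
print(dict(zip(["temp", "wetland", "housing_age"], model.coef_[0].round(3))))
```

The fitted coefficients recover the direction of each (synthetic) effect, which is the basic logic behind the discriminant and regression analyses cited above.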
Indianapolis is a typical Midwest city lying on a flat plain, with a temperate climate and no pronounced wet or dry seasons. Therefore, this study can offer valuable information not only for public health prevention and mosquito control, but also as a reference for WNV spread in other regions of the Midwest USA and beyond. | http://ij-healthgeographics.biomedcentral.com/articles/10.1186/1476-072X-7-66 |
4.40625 | OROGRAPHIC PRECIPITATION is caused or enhanced by one or more of the effects of mountains on the Earth's atmosphere. These effects include the upward or lateral motions of air directly caused by mountains acting as a barrier, as well as the thermal effects of mountains acting as elevated heat or cold sources. Mountains can generate both stratiform precipitation, which takes place in a statically stable atmosphere, and convective precipitation, which results from the release of static instability. The most obvious effect of mountains is that they can cause the air encountering them to rise. Rising air cools adiabatically, and if it is sufficiently humid, condensation and perhaps precipitation can occur. Precipitation formed by this mechanism is widely referred to as upslope precipitation. It is widely recognized that the slope facing the prevailing wind (the windward side) generally receives more precipitation than the leeward side. In contrast, some of the world's deserts lie on the leeward side of mountain ranges. Because of interactions with other processes, however, there is no general rule about the location on the mountain slope where the maximum upslope precipitation will occur. | http://mytravelphoto.org/513-orographic-precipitation.html |
4.125 | Land degradation refers to any form of deterioration of the land that affects the integrity of the ecosystem. It encompasses both a reduction in the land's productivity and a loss of its native biological richness and resilience.
Land degradation is a major threat to biodiversity, ecosystem stability, and society's ability to function, and its impact extends far beyond the local or regional scale.
Because of the interconnectivity between ecosystems, land degradation triggers destructive processes that can have cascading effects across the entire biosphere. Loss of biomass, through vegetation clearance and soil erosion, releases greenhouse gases that contribute to global warming and climate change.
What we do
In 2003, the GEF was designated as a financial mechanism of the UNCCD, ensuring that GEF projects addressing desertification will be aligned with objectives of the Convention. In this way, the GEF works as a complementary financial mechanism to the Global Mechanism, collectively supporting implementation of the Convention.
Establishment of the land degradation focal area, coupled with formal designation as a financial mechanism for the UNCCD, offered a major boost to the GEF's investment in sustainable land management projects.
The GEF as a financial mechanism of the UNCCD directly contributes to implementation of the 10-year (2008–18) Strategic Plan and Framework approved by the Conference of the Parties during its 8th Session. The Strategic Plan aims “to forge a global partnership to reverse and prevent desertification/land degradation and to mitigate the effects of drought in affected areas in order to support poverty reduction and environmental sustainability.”
Scope of the Challenge
Globally, land degradation affects 33 percent of the Earth's land surface, with consequences hitting more than 2.6 billion people in more than 100 countries. One of the main indicators is extensive soil degradation caused by erosion, salinization, compaction, and nutrient depletion. Soil degradation leads to reduced capacity of the soil to sustain biomass production and biodiversity and to regulate water and nutrient cycling. Land that becomes progressively degraded in this manner cannot sustain agricultural production, and creates socioeconomic problems in agro-ecosystems dominated by poor smallholder farmers and pastoralists. This effect can also be exacerbated by the increased vulnerability of people and agro-ecosystems to climate change and variability.
The GEF mandate to combat land degradation focuses on sustainable land management (SLM) as it relates primarily to desertification and deforestation. In this context, unsustainable agricultural practices, soil erosion, overgrazing, and deforestation are considered the main drivers of land degradation, all contributing to deterioration of ecosystem services. The GEF approaches land degradation in this way in order to address underlying causes while developing sustainable solutions. Desertification and deforestation are both caused, in part, by unsustainable agricultural practices, but their impacts also result in lower agricultural productivity. Putting SLM principles into practice is one of the few options for land users, especially smallholder farmers and pastoralists, to maintain or increase the productivity of agro-ecosystems without destroying land, causing soil erosion, or undermining ecosystem services.
The GEF project areas for financing include three major production practices: sustainable agriculture (crop-livestock systems), sustainable rangeland/pasture management (agro-pastoral systems), and sustainable forest and woodland management.
Sustainable Agriculture – The GEF investment in sustainable agriculture focuses on maintaining or improving the productivity of both rain-fed and irrigated systems. GEF support mainly targets sustainable land management practices such as crop diversification, crop rotation, conservation agriculture, agroforestry, water harvesting, and small-scale irrigation schemes.
Rangeland Management - The GEF supports sustainable management of rangelands through the strengthening of viable traditional systems and other measures that improve soil and water conservation. Interventions include the resolution of wildlife-livestock-crop conflicts, conservation of indigenous genetic resources, and reducing water and wind erosion in rangelands.
Sustainable Forest and Woodland Management - The GEF supports the introduction and strengthening of sustainable forest management schemes, including participatory decision making, tenure and use rights (especially by indigenous communities), sustainable market chains for forest products, development and implementation of forest management plans, and reforestation.
In addition to targeted interventions within these different systems, the GEF approach also emphasizes natural resource management in the context of wider landscapes. This allows for investments in effective management of competing land uses, trade-offs in ecosystem services, and opportunities to increase investment in SLM through diverse sources such as payments for ecosystem services (PES), carbon finance, and so forth. | https://www.thegef.org/gef/land_degradation |
4.1875 | The famous 'First Thanksgiving feast' is said to have taken place in autumn, in the year 1621. The pilgrims organized the feast right after the first harvest, as a gesture to thank God for helping them survive the bitter winter. It was also celebrated as a display of gratitude toward the Indians. The feast took place in Plymouth, Massachusetts. The traditional 'First feast' formed the basis for the modern 'Thanksgiving Day', celebrated on the fourth Thursday in November every year.
According to historians, the first Thanksgiving feast was eaten outside, as the colonists didn't have sufficient space to accommodate everyone. Native Americans were invited to the feast, as they were the ones who had taught the pilgrims how to grow food. The feast was held to celebrate the fruits of their labour.
The feast is described in a firsthand account presumably written by a leader of the colony, Edward Winslow. According to him, the governor sent four men out fowling. The feast was attended by 90 people, including Indians (Native Americans). The food included ducks, turkeys, geese, swan, venison, fish, berries, watercress, lobster, dried fruit, clams, and plums. The feast continued for three days and was accompanied by lots of dancing and merry-making.
The feast was not repeated for the next few years; the next Thanksgiving was celebrated in 1676. That year witnessed a severe drought, which was eventually followed by rains that were attributed to prayer.
George Washington proclaimed a National Day of Thanksgiving in 1789, an idea that attracted mixed reactions. After a campaign of nearly 80 years, President Lincoln in 1863 proclaimed the last Thursday in November a national day of Thanksgiving. | http://www.thanksgiving-day.org/first-thanksgiving-day-feast.html |
4 | Center of mass
Have you ever played a game at a carnival, trying to win a stuffed animal or other prize? It might look easy—until you try it. Why are those "simple" games at fairs, carnivals and Mardi Gras festivals so hard? Is it really a lack of skill or coordination, or do those midway vendors use some basic laws of science to help them set up the games in their favor? In this science activity you'll investigate how physics can help you win—or lose—at the classic game of trying to knock over a pyramid of milk bottles using a ball.
Why can it be nearly impossible to knock down pins or hit the right target to win that giant stuffed animal at the carnival or fair—especially when it looks so easy? To answer this, in this activity we'll look at the classic carnival game sometimes called "One Ball," in which milk bottles are stacked in a pyramid and you get one throw with a ball to try to knock them all over. To beat this game it's useful to think about how redistributing an object's mass can affect how well it balances. For example, it might be easy to stand on a balance beam while holding a heavy backpack hanging down in front of you, but it's much more challenging when that backpack is on your back.
How an object's mass is distributed can affect its center of mass, which is the mass-weighted average position of all the object's mass. This basically means that the center of mass will be in the object's center if the mass is evenly distributed on top and bottom (as it usually is with a ball), and it will shift toward the object's heavier side if the mass isn't uniformly distributed. So, for a pyramid, the center of mass will be much lower than its physical center because so much more of the mass is located in the lower half.
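To make this concrete, here is a small illustrative calculation (not part of the original activity) of the height of the center of mass for the three bottle pyramids tested below. It treats each bottle as a point mass at that bottle's own center, and the bottle height and masses are assumed round numbers:

```python
# Toy model: three-bottle pyramid, each bottle treated as a point mass
# at its own center. All numbers are assumed, for illustration only.
BOTTLE_HEIGHT_CM = 20.0
FULL_G, EMPTY_G = 520.0, 20.0   # rough mass of a full vs. empty 500 mL bottle

def pyramid_com_height(bottom_g, top_g):
    """Height of the pyramid's center of mass above the table, in cm."""
    masses = [bottom_g, bottom_g, top_g]                     # two bottom, one top
    heights = [BOTTLE_HEIGHT_CM / 2] * 2 + [1.5 * BOTTLE_HEIGHT_CM]
    return sum(m * h for m, h in zip(masses, heights)) / sum(masses)

print("all bottles full:     %.1f cm" % pyramid_com_height(FULL_G, FULL_G))
print("empty bottle on top:  %.1f cm" % pyramid_com_height(FULL_G, EMPTY_G))
print("empty bottles bottom: %.1f cm" % pyramid_com_height(EMPTY_G, FULL_G))
```

With the full bottle resting on two empties, the center of mass climbs to nearly the height of the top bottle itself, which is exactly the unstable arrangement explored in the steps below.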
- A very large room or an outside area (You will need plenty of space so you can throw a ball without hitting anyone or anything.)
- A small, stable table
- Masking tape, a stick, a rock or a similar object to mark off a throwing distance
- Tennis ball or baseball
- Three plastic bottles filled with water, all the same shape and size (Make sure that you can stably stack the bottles in a pyramid shape. Most 16.9-fluid-ounce drinking-water bottles should work well for this.)
- Food coloring (optional)
- Fill each of the bottles with water. The same amount of water should be in each bottle.
- If you want, you can remove the labels from your bottles and add some food coloring (three drops) to each bottle to give your carnival game some color. If you add food coloring to the bottles, make sure you do your testing where it will not be a problem if a bottle spills some dyed water! (If you use dyed water, it's recommended you do this activity outdoors.)
- Take your materials to the very large room or area outside where you can set up your mini carnival game.
- Put the small table at a set distance from your "throw line." You might try about eight feet away from the table. Mark the throw line using a piece of masking tape, a rock, a stick or a similar object.
- Make sure the water bottle lids are on tight. Stack the three bottles into a stable pyramid shape on the table, with two bottles on the bottom and one on top that is centered between them and resting on their lids. How stable is your pyramid?
- From your throw line, throw the tennis ball (or baseball) at the bottle pyramid. Did you hit any bottles? If so, which bottle was hit? How many bottles were knocked over?
- Arrange the pyramid as it was before on the table.
- Repeat this process until you have hit the pyramid at least 10 times with the ball. Try to throw the ball the same way each time, and try to hit each bottle a few times. How many bottles usually get knocked over? Does it seem to depend on which bottle you hit with the ball? If so, which bottle(s) do you need to hit to knock over the most bottles?
- Now take the top bottle from the pyramid and empty the water out of it. Stack the bottles into a stable pyramid shape on the table with the empty bottle on the top. How stable does this pyramid seem?
- As you did before, throw the ball from the throw line at the pyramid at least 10 times, rearranging the pyramid on the table after each throw. Again try to throw the ball the same way each time, and try to hit each bottle a few times. How many bottles usually get knocked over now? Does hitting a certain bottle (or bottles) tend to knock over the most bottles?
- Take one of the bottom bottles from the pyramid and empty the water out of it. Stack the bottles into a stable pyramid shape on the table with the two empty bottles on the bottom and the water-filled bottle on the top. How stable does this pyramid seem?
- As you did before, throw the ball from the throw line at the pyramid at least 10 times, rearranging the pyramid on the table after each throw. Again try to throw the ball the same way each time, and try to hit each bottle a few times. How many bottles usually get knocked over with this pyramid arrangement? Does hitting a certain bottle (or bottles) tend to knock over the most bottles?
- Overall, which pyramid arrangement led to the highest number of throws where all three bottles were knocked over? In other words, which arrangement was most successful? Which was least successful? Which bottle(s) should be hit to cause the largest number of bottles to fall down?
- Extra: You could repeat this activity but this time you could quantify your results. That is, when testing each pyramid arrangement, write down which bottle is hit each time and how many bottles are knocked over. If you quantify your results, just how much more "successful" is one pyramid arrangement compared with another? How much better is it to hit one bottle compared with another?
- Extra: Try moving the throw line closer to the pyramid or farther away from it. How does your throwing distance from the pyramid change how successful you are at knocking it over?
- Extra: Instead of bottles you could use wooden blocks and arrange them in different configurations, such as stacking all three on top of one another. Using wooden blocks, which configuration is easiest to knock over? Which is hardest?
Observations and results
When the pyramid was arranged with two empty bottles on the bottom (and a filled bottle on top), did it fall over the easiest? Was hitting the bottom bottles typically the best approach?
A lot of people may initially think that the center of mass of the first bottle pyramid you made is in the middle of the structure (where the upper bottle rests on the lower ones). But in this configuration, with all of the bottles filled equally, it's actually closer to the middle of the two bottom bottles—because there are two bottles on the bottom, there is more mass on the bottom of the pyramid than the top. So the center of mass is closer to the pyramid's bottom. In the second pyramid arrangement the center of mass became even lower because more mass was in the bottom part of the pyramid compared with the top. In the third pyramid arrangement the center of mass became much higher—somewhere within the top bottle.
You should have found that the third pyramid, with its higher center of mass, was the easiest one to knock over and most unstable, likely having all three bottles fall over when the ball touched any of them. On the other hand, the second pyramid, with its lower center of mass, was likely the hardest to knock over completely. In general, hitting the lower area between the bottom bottles (below the center of mass of the pyramid) should have been the most successful approach for knocking down the entire pyramid. Except for the third pyramid, hitting the top bottle likely only knocked the top bottle off of the pyramid, leaving the two others still standing.
More to explore
The Scientific Method & Carnival Games, from Portage, Inc.
Center of Mass, from High School Online Collaborative Writing
Fun, Science Activities for You and Your Family, from Science Buddies
Knock Your Blocks Off: The Mechanics of Carnival Games, from Science Buddies
This activity brought to you in partnership with Science Buddies | http://www.scientificamerican.com/article/sporty-science-the-mechanics-of-a-carnival-game/?mobileFormat=true&shunter=1455285988723 |
4.15625 | This video is about Greenland's ice sheet, accompanied by computer models of the ice sheet, showing how the ice is melting, where the meltwater is going, and what it is doing both on the surface and beneath the ice.
This in-depth interactive slideshow about how climate models work comes with a wealth of background information. It also describes some of the projected climate change impacts on key sectors such as water, ecosystems, food, coasts, and health. (Scroll down the page for the interactive.)
In this activity, students read an article condensed from several NASA articles about the impact of deforestation on the atmosphere and answer review questions. They then use Google Earth and current NEO data to examine relationships between forest fires and atmospheric aerosols.
In this activity, students work in groups, plotting carbon dioxide concentrations over time on overheads and estimating the rate of change over five years. Stacked together, the overheads for the whole class show an increase in carbon dioxide over five years, along with annual variation driven by photosynthesis. This exercise enables students to practice basic quantitative skills and understand how important sampling intervals can be when studying changes over time. A goal is to see how a small sample size may give an incomplete picture of the data.
Students examine data from Mauna Loa to learn about CO2 in the atmosphere. The students also examine how atmospheric CO2 changes through the seasonal cycle, by location on Earth, and over time, across roughly 40 years and more specifically over 15 years. Students graph data in both the Northern and Southern Hemisphere and draw conclusions about hemispheric differences in CO2 release and uptake.
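For teachers who want a quick classroom demonstration of the pattern these two activities explore, here is a minimal sketch (not part of the listed resources) that plots a synthetic Mauna Loa-style CO2 curve: a rising trend with a superimposed seasonal cycle. The numbers are illustrative, not measured values:

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(1995, 2010, 12 * 15)       # monthly points over 15 years
trend = 360 + 2.0 * (years - 1995)             # ~2 ppm/yr rise (illustrative)
seasonal = 3.0 * np.sin(2 * np.pi * years)     # ~6 ppm peak-to-trough cycle
co2_ppm = trend + seasonal

plt.plot(years, co2_ppm)
plt.xlabel("Year")
plt.ylabel("CO2 concentration (ppm)")
plt.title("Synthetic Mauna Loa-style CO2: trend plus seasonal cycle")
plt.show()
```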
This short video clip summarizes NOAA's annual State of the Climate Report for 2009. It presents a comprehensive summary of Earth's climate in 2009 and establishes the last decade as the warmest on record. Reduced extent of Arctic sea ice, glacier volume, and snow cover reflect the effects of rising global temperature.
In this activity, students learn about the urban heat island effect by investigating which areas of their schoolyard have higher temperatures - trees, grass, asphalt, and other materials. Based on their results, they hypothesize how concentrations of surfaces that absorb heat might affect the temperature in cities - the urban heat island effect. Then they analyze data about the history of Los Angeles heat waves and look for patterns in the Los Angeles climate data.
This video from ClimateCentral looks at the way climate conditions can affect vegetation in the West, and what influence this has on wildfires. Drought and rainfall can have very different wildfire outcomes, depending on vegetation type, extent, and location. | https://www.climate.gov/teaching/literacy/5-b-observations-are-foundation-understanding-climate-system?keywords= |
4.21875 | January 10, 2013
Earth’s Mantle Magma Melts Hotter And Deeper Than Previously Thought
April Flowers for redOrbit.com - Your Universe Online
According to a new study by researchers at Rice University, the Earth's mantle magma melts far hotter and deeper within the planet than previously thought, a discovery that will have lasting implications for our understanding of the planet's geophysical and geochemical properties. The research team, led by Rajdeep Dasgupta, put small amounts of peridotite under large pressures in a laboratory to determine that rock can and does liquefy, at least in small amounts, as deep as 155 miles beneath the ocean floor. Dasgupta claims that his findings, published in the journal Nature, will explain several puzzles that have been irking scientists.
The Earth's middle layer, the mantle, is a buffer of rock between the crust, which is the top five miles or so, and the core. The mantle would look like a roiling mass of rising and falling material if it were possible to compress millions of years of observation down to mere minutes. Materials are brought from deep within the planet to the surface by the slow but constant process of mantle convection, a phenomenon which is also thought to be linked with Earth's geomagnetic fields. Occasionally, such materials are brought all the way to the surface through volcanoes.
Because it is where the Earth's crust is created and, as Dasgupta says, where "the connection between the interior and surface world is established," the Rice team focused on the mantle beneath the ocean. Silicate magma rises with the convective currents to cool and spread out to form the ocean crust. Scientists have long believed that the starting point for this melt is around 45 miles beneath the surface.
Dasgupta, an assistant professor at Rice, said that this depth has confounded geologists who suspected but could not prove the existence of deeper silicate magma.
The density of the Earth's mantle is probed by measuring the speed of seismic waves after an earthquake, from their origin to other points on the planet. Such waves travel faster through solids than liquids. Geologists have been surprised to detect waves slowing down through what should be the mantle's "express lane."
"Seismologists have observed anomalies in their velocity data as deep as 200 kilometers beneath the ocean floor," Dasgupta said. "Based on our work, we show that trace amounts of magma are generated at this depth, which would potentially explain that."
The findings offer tantalizing new clues about the electrical conductivity of the oceanic mantle, as well. "The magma at such depths has a high enough amount of dissolved carbon dioxide that its conductivity is very high," Dasgupta said. "As a consequence, we can explain the conductivity of the mantle, which we knew was very high but always struggled to explain."
Though some are trying, humans have not yet dug deep enough to sample the mantle directly. This means that researchers must currently extrapolate from rocks carried up to the surface. Dasgupta determined in an earlier study that melting in the Earth's deep upper mantle is caused by the presence of carbon dioxide. The current work shows that carbon leads to the formation of silicate magma at significant depths.
Non-carbonated rocks melt at significantly higher temperatures than carbonated rocks. "This deep melting makes the silicate differentiation of the planet much more efficient than previously thought," Dasgupta explained. "Not only that, this deep magma is the main agent to bring all the key ingredients for life — water and carbon — to the surface of the Earth."
The Rice research team crushed tiny rock samples containing carbon dioxide to determine the depth of the magma's formation.
"Our field of research is called experimental petrology," he said. "We have all the necessary tools to simulate very high pressures (up to nearly 750,000 pounds per square inch for these experiments) and temperatures. We can subject small amounts of rock samples to these conditions and see what happens."
The team employed powerful hydraulic presses to partially melt "rocks of interest" containing tiny amounts of carbon to simulate what they believe is happening under equivalent pressures in the mantle.
"When rocks come from deep in the mantle to shallower depths, they cross a certain boundary called the solidus, where rocks begin to undergo partial melting and produce magmas," Dasgupta said.
"Scientists knew the effect of a trace amount of carbon dioxide or water would be to lower this boundary, but our new estimation made it 150-180 kilometers deeper from the known depth of 70 kilometers," he said.
"What we are now saying is that with just a trace of carbon dioxide in the mantle, melting can begin as deep as around 200 kilometers. And when we incorporate the effect of trace water, the magma generation depth becomes at least 250 kilometers. This does not generate a large amount, but we show the extent of magma generation is larger than previously thought and, as a consequence, it has the capacity to affect geophysical and geochemical properties of the planet as a whole." | http://www.redorbit.com/news/science/1112761657/earth%E2%80%99s-magma-mantle-melts-hotter-011013/ |
4.0625 | 6 Written questions
6 Multiple choice questions
- The first step in protein synthesis by which a DNA template is used to produce a single-stranded mRNA molecule that's a negative image of that DNA portion.
- A process of asexual reproduction in eukaryotic cells.
- A sequence of three nucleotide bases on mRNA that refers to (attracts) a specific amino acid (tRNA anti-codon).
- Complex network of DNA coiled around and supported by proteins, found in the nucleus of the cell.
- The second part of protein synthesis whereby genetic information coded in messenger RNA directs the formation of a specific protein at a ribosome in the cytoplasm.
- A three-nucleotide base sequence on tRNA.
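To tie the transcription and translation definitions above together, here is a small illustrative sketch (not part of the quiz) that transcribes a DNA template strand into mRNA and then translates the codons. Only a handful of codon-table entries are included, for brevity:

```python
# Minimal sketch of transcription and translation (partial codon table).
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna_template):
    """Transcription: build mRNA complementary to the DNA template strand."""
    return "".join(COMPLEMENT[base] for base in dna_template)

def translate(mrna):
    """Translation: read mRNA codons (three bases each) into amino acids."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

mrna = transcribe("TACAAACCGATT")    # -> "AUGUUUGGCUAA"
print(mrna, translate(mrna))         # AUGUUUGGCUAA ['Met', 'Phe', 'Gly']
```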
5 True/False questions
messenger RNA → tRNA. The type of RNA that transfers (carries) a particular amino acid to mRNA at the ribosome during protein synthesis.
histones → A process of asexual reproduction in eukaryotic cells.
centromere → The region that joins two sister chromatids.
nucleosomes → DNA wrapped around histones (spools) forms long DNA strands into "beads on a string".
interphase → The region that joins two sister chromatids. | https://quizlet.com/8508414/test |
4.09375 | PCR (Polymerase Chain Reaction Test) (cont.)
What is RT-PCR?
RT-PCR is a PCR test that is designed to detect and measure RNA. Although initial PCR tests amplified DNA, many viruses use RNA as their genetic material, and some cellular components (for example, mitochondria) are studied through their RNA transcripts. RT-PCR differs from conventional PCR by first converting the RNA strand into a DNA strand. This is done by essentially the same method as the PCR described above, except that an enzyme termed reverse transcriptase, rather than DNA polymerase, carries out the first copying step. The reverse transcriptase allows a single strand of RNA to be copied into a complementary strand of DNA. Once that reaction occurs, the routine PCR method can then be used to amplify the DNA. RT-PCR has been used to detect and study many RNA viruses.
RT-PCR should not be confused with another variation of PCR, termed Real-Time PCR. Real-Time PCR is a variation of PCR that allows analysis of the amplified DNA during the usual 40 cycles of the procedure. Although the procedure is similar to conventional cycling PCR, Real-Time PCR uses fluorescent dyes attached to some of the building blocks or to short nucleotide strands. Depending on the method used, fluorescence occurs as the amplified DNA strands are formed. The amount of fluorescence can be measured throughout the 40 cycles, allowing investigators to measure specific products and their amounts during the amplification cycles. This often lets investigators or lab technicians skip gel electrophoresis or other secondary procedures needed for analysis of the PCR products, thus producing more rapid results.
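As a back-of-the-envelope illustration (not from this article), the exponential amplification that Real-Time PCR tracks can be sketched in a few lines. Each cycle ideally doubles the DNA copy number, and the cycle at which fluorescence crosses a detection threshold (often called Ct) reflects how much template was present at the start; the threshold value here is an assumed placeholder:

```python
# Idealized PCR: copy number doubles every cycle.
DETECTION_THRESHOLD = 1e10   # assumed copies needed for detectable fluorescence

def threshold_cycle(starting_copies, max_cycles=40):
    """Return the first cycle at which copies cross the detection threshold."""
    copies = starting_copies
    for cycle in range(1, max_cycles + 1):
        copies *= 2
        if copies >= DETECTION_THRESHOLD:
            return cycle
    return None   # threshold never reached within max_cycles

for n0 in (10, 1_000, 100_000):
    print(f"{n0:>7} starting copies -> threshold cycle ~ {threshold_cycle(n0)}")
```

More starting template crosses the threshold in fewer cycles, which is exactly how Real-Time PCR quantifies the amount of target in the original sample.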
Real-Time PCR and RT-PCR are variations or modifications of the original PCR test. However, there are many more variations (at least 25) that exist and are used to solve specific problems. They all have different names such as Assembly PCR, Hot-start PCR, Multiplex PCR, Solid-phase PCR and many others.
PCR is likely to continue to be modified to help answer many other questions in medicine, biology, and other fields of study.
Medically reviewed by Martin E Zipser, MD; American Board of Surgery
| http://www.emedicinehealth.com/pcr_polymerase_chain_reaction_test/page4_em.htm |
4.03125 | In its most basic sense, irony is when what might be expected and what actually occurs are incongruent. This definition, however, covers a wide range of possibilities. It is often more helpful to look at specific types of irony and examples of those types. Most people gain a much clearer understanding of ironic concepts once types and examples of irony are presented. Knowledge of irony in its different manifestations can help people think more critically when encountering its various uses.
Situational irony occurs when someone's actions place him in a situation that is the opposite of what was intended or expected. Situational irony describes the situation of a safe-cracker who breaches a vault and stands inside admiring all the money he's about to steal. He hears the creak of hinges and looks over his shoulder as the vault door swings shut, sealing him in. The irony is that his expertise in breaking in has actually imprisoned him.
Verbal irony exists when what is said has a meaning in addition to or in direct opposition to its literal meaning. Verbal irony arises in the case of a teacher who doesn't realize that the student who just received an A on her spelling test is on the cusp of becoming a major music star. The teacher, indicating the test, says, "You rock!" In the speech context, the teacher means that the student did well. The irony is that, unknown to the teacher, the student also "rocks" in the musical sense of the word. In literature, verbal irony can accentuate drama as well as comedy, and it occurs heavily in satire.
Dramatic irony is frequently found in literature and drama. Dramatic irony is simply when the reader or audience knows something the character or characters do not know. The audience's knowledge heightens the tension in works of comedy or tragedy. In "Romeo and Juliet," dramatic irony plays a key role when Romeo shows up to the Capulet tomb believing that Juliet lies dead within. The audience knows she is not dead, so Romeo's poisoning of himself has a more intense dramatic effect.
In modern society, Socratic irony may come across as more passive-aggressive than other ironies. With Socratic irony, a questioner adopts a facade of innocence and ignorance to ask seemingly simplistic questions, with the goal of leading a person to conclusions that display his or her own ignorance. For example, if someone were to contend, "Gun control will lessen violence," the Socratic questioner might respond, "Then guns cause violence?" This leaves the speaker with few options that do not detract from his or her main point. A danger of attempting to use Socratic irony is that listeners often point out its passive-aggressive tone, which can damage the user's credibility.
| http://classroom.synonym.com/describes-irony-3042.html |
4.03125 | There's been a lot of speculation recently about what would happen if a large meteor or comet were to strike the Earth but, come October 2014, we may see a very real example of such an impact as a newly discovered comet has an extremely close encounter with the planet Mars.
Comet C/2013 A1 (Siding Spring) was discovered on January 3rd by veteran comet hunter Robert H. McNaught at the Siding Spring Observatory in New South Wales, Australia. Upon learning of this discovery, astronomers at the Catalina Sky Survey dug through their database and found that they had collected images of the comet back on December 8th. Using the combined observations, the astronomers pieced together what they could of the comet's orbit, and even with that limited information they calculated that the object was going to fly within 100,000 kilometres of Mars on October 19th, 2014.
That's a pretty wide pass, relatively speaking. Asteroid 2012 DA14 came within 27,000 kilometres of the Earth back on February 15th.
However, as is always the case with finding new objects floating around in our solar system, with more observations come better estimates of the path that those objects will take as they whiz through space.
As of Wednesday, February 27th, fresh observations of C/2013 A1 have shifted its path over 60,000 kilometres closer to Mars, and it is now expected to pass within 37,000 kilometres of the planet's surface!
That would probably be the end of the story if we were talking about an asteroid or meteor, and we'd answer the question in the headline with a resounding 'No!'. However, a comet is a different matter entirely — made of different matter, that is. Ice (well, mostly).
Where C/2013 A1 is now, over one billion kilometres from the Sun, it may as well be an asteroid or meteor, since it's just a big, mostly-inert hunk of ice and rock flying through space. However, as the comet gets closer to the Sun and starts to heat up, that ice is going to start turning back into gas. Whatever side of the comet is facing the Sun will become an icescape of erupting geysers, and those geysers are going to push on the comet like thruster engines, slowly nudging it off its original course. Where it will be pushed, though, is anyone's guess right now, because if the hunk of ice isn't already rotating, it will be once the geysers start up, and that will add more uncertainty to any calculations of its orbit. It may be pushed further away from Mars, or it may be pushed onto a direct collision course with the planet.
So, what will it be like if the comet does hit Mars? The word cataclysmic comes to mind.
It's hard to narrow down the size of C/2013 A1 at the moment, due to its distance and the small cloud of gas (the coma) it's generating around it, but it is estimated at being anywhere from 8 to 50 kilometres wide, and it's traveling at about 55 km/s. Even at the smaller end of the scale, an object that large would hit the planet with the force of about 100 million megatons, blasting a crater over 100 kilometres wide into the Martian surface. If you ramp that up to the bigger end of the scale, the blast becomes more like 20 billion megatons and the crater would be near 600 kilometres wide!
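Those headline numbers are easy to sanity-check (the calculation below is mine, not the article's) with the kinetic-energy formula E = ½mv², assuming a roughly spherical comet with a density near that of water ice, about 1,000 kg/m³:

```python
import math

DENSITY = 1000.0           # assumed comet density, kg/m^3 (solid water ice)
SPEED = 55_000.0           # encounter speed, m/s (55 km/s)
MEGATON_J = 4.184e15       # joules per megaton of TNT

for diameter_km in (8, 50):
    radius_m = diameter_km * 1000.0 / 2.0
    mass_kg = DENSITY * (4.0 / 3.0) * math.pi * radius_m ** 3
    energy_mt = 0.5 * mass_kg * SPEED ** 2 / MEGATON_J
    print(f"{diameter_km:2d} km comet: ~{energy_mt:.1e} megatons of TNT")
```

The result lands near 1e8 megatons for the 8-kilometre case and a bit over 2e10 for the 50-kilometre case, close to the 100-million- and 20-billion-megaton figures quoted above.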
Either way, the effects would be devastating, both for the planet and for the various satellites and rovers we have investigating Mars right now. Even if the rovers weren't anywhere near ground zero, and the satellites were spared a direct hit, the impact would blast tons of rock into space and throw up a dust storm that would fill the planet's atmosphere for months or even years afterward. Curiosity, with its nuclear power supply, could most likely continue on if it wasn't physically damaged by any aspect of the impact, but Opportunity would lose power as the dust blotted out the Sun, and most (if not all) of the satellites in orbit would be destroyed by the newly orbiting debris field.
Could there be an 'up' side to this scenario?
It would certainly give us the opportunity to observe such an event in real time, to get a better idea of how such an impact would affect the Earth. It could also introduce an incredible amount of water and gases to Mars' environment, which (once the dust settles) could actually have a beneficial effect on any plans we might still have for colonizing the Red Planet in the future.
Even if the comet doesn't hit the planet (which is, admittedly, still the more likely of the two possibilities), the cloud of gas that the comet will be generating around it by the time it reaches Mars will be enormous — possibly ten times the distance between it and the planet at its closest pass — so not only could the planet get a spectacular meteor shower as it flies by, but the ice, rock, dust and gases of the coma could still damage all the satellites we have in orbit.
Astronomers are continuing their observations of C/2013 A1, of course, and their efforts will help us to narrow down the orbit of the comet even more as it heads towards its rendezvous with the Red Planet next year. Let's just hope that everyone comes away from that meeting in one piece.
| https://ca.news.yahoo.com/blogs/geekquinox/mars-deep-impact-comet-hit-red-planet-2014-132854184.html |
4.15625 | Predicting: the process of gathering information and combining it with the reader’s own knowledge to guess what might occur in the story
Foreshadowing: the organization and presentation of events and scenes in a work of fiction or drama so that the reader or observer is prepared to some degree for what occurs later in the work. This can be part of the general atmosphere of the work, or it can be a specific scene or object that gives a clue or hint as to a later development of the plot.
In this lesson, students will read and discuss "The Monkey's Paw," by W.W. Jacobs, and explore the use of predicting and foreshadowing throughout the story. Students will:
Identify and find examples of foreshadowing and explore how they can be used to predict what will happen later in the story.
Make predictions when asked at specific points in the story
How does making predictions and recognizing foreshadowing throughout a story provoke thinking and response?
45 minutes/1-2 classes
"The Monkey's Paw," by W.W. Jacobs. Copyright 1902. Worthington Printing Press.
Reading notebook (one per student)
Copy of prediction chart: http://www.readwritethink.org/files/resources/lesson_images/lesson420/prediction.pdf
SmartBoard (if available) or overhead projector
W: During this unit, students will explore the meaning of foreshadowing and predicting and identify examples of both literary terms within a story. They will be evaluated through the completion of the student notebook, the Prediction Chart, and the "Ticket out the Door."
H: Having the students work cooperatively and collaboratively on what they think foreshadowing and predicting mean will give them the opportunity to form their own opinions and thoughts and to see the opinions and thoughts of others.
E: By reading, "The Monkey's Paw," students explore the meanings of foreshadowing and prediction, and how these terms are used throughout the story.
R: Students will reflect through completion of their student notebooks and Prediction Charts; revisit by holding a classroom discussion and going over the answers; revise by getting the opportunity to say whether their prediction was right or wrong and why; and rethink by filling out a "Ticket out the Door."
E: Students will express their understanding by completing their student notebooks and Prediction Charts and by discussing predictions as a class. Students will engage in a meaningful self-evaluation.
T: The instruction attached to the lesson can meet each child's needs. During the planning portion of the lesson, note specific examples that certain students can make. Allow supported students to be among the first to give examples about foreshadowing/predicting to help boost confidence. As an intervention, students who have trouble reading will be given the opportunity to listen to the story on the computer while they follow along with the text. Students may also be given a photocopy of the story so they can mark examples of foreshadowing with a highlighter or highlighting tape instead of writing the examples in a journal. They can also take notes directly on the story pages. The lesson will also include working individually, collaboratively, and as a whole group.
O: The lesson begins with working collaboratively with another student, works through reading the story as a class, but each student filling out their own notebooks and Prediction charts, then the class works together with the use of tecnology to go over their findings throughout the story. The students have to independently fill out a "Ticket out the Door."
1. Post two questions on the board for the students to answer: 1. "What does it mean to predict?" 2. "What does foreshadowing mean?" Have students get out their notebooks and work with a partner to answer these questions in their individual notebooks. When all the students have completed this activity, hold an oral discussion about their answers and then post the correct definitions of these words on the board.
2. Begin reading the story as a class. Ask the students to write down in their journals any foreshadowing clues/examples they might discover during the story. Make sure students write down the page number of where they found each foreshadowing clue.
3. Halfway through each act and at the end of each act (except for act 3), ask the students to make a prediction of what they think is going to happen next in the story, based on their foreshadowing clues, and write it in their Prediction Chart.
4. At the end of the story, discuss the story and its ending with the class, ask the students how many predicted the ending correctly, and have them explain why they were correct/incorrect using examples from the text. Then ask students to complete their Prediction Chart as to whether their predictions were true or false and explain why, citing examples from the text.
5. After reviewing the ending of the story, set up the SmartBoard or overhead projector. Start with the foreshadowing clues/examples students found in the story and how those clues helped the students predict parts of the story (i.e., foreshadowing example: when Herbert dies, the reader should know that his parents' wish is going to be to bring him back to life). Write these examples/clues on the SmartBoard/overhead. (Teacher guide attached.)
6. When this is complete, explore how some of the foreshadowing examples/clues helped in predicting what was going to happen next in the story.
7. Using the SmartBoard (or overhead), have the students volunteer and come up and write their predictions down. Ask the students to elaborate on their answers by pulling in any foreshadowing examples/clues they may have used to help them with their predictions.
8. At the end of the lesson, collect the students' notebooks used throughout the lesson to write down any foreshadowing examples/clues they found while reading the story. Also, collect their Prediction Charts. This will help identify the students who understand and grasp the concept of foreshadowing and predicting in the story and those who do not. For those who do not understand the concept, reteaching or additional practice may be necessary.
9. As a "Ticket out the Door" when leaving class, ask the students to write down one fact and example they learned about foreshadowing and one fact with an example about predicting.
Ongoing assessment by appropriate usage of terms, classroom observations, and interactive group work.
Formal assessment of usage of foreshadowing and prediction in the story at culmination of unit (summative)
Special Adaptations/Differentiated Instruction: | http://www.pdesas.org/module/content/resources/4893/view.ashx |
4.28125 | Calligraphy, or the art of writing, was the visual art form prized above all others in traditional China. The genres of painting and calligraphy emerged simultaneously, sharing identical tools—namely, brush and ink. Yet calligraphy was revered as a fine art long before painting; indeed, it was not until the Song dynasty, when painting became closely allied with calligraphy in aim, form, and technique, that painting shed its status as mere craft and joined the higher ranks of the fine arts (1989.363.33; 1973.120.5).
The elevated status of calligraphy reflects the importance of the word in China. This was a culture devoted to the power of the word. From the beginning, emperors asserted their authority for posterity as well as for the present by engraving their own pronouncements on mountain sides and on stone steles erected at outdoor sites. In pre-modern China, scholars, whose main currency was the written word, came to assume the dominant positions in government, society, and culture.
But in addition to the central role played by the written word in traditional Chinese culture, what makes the written language distinctive is its visual form. Learning how to read and write Chinese is difficult because there is no alphabet or phonetic system. Each written Chinese word is represented by its own unique symbol, a kind of abstract diagram known as a “character,” and so each word must be learned separately through a laborious process of writing and rewriting the character till it has been memorized. To read a newspaper requires a knowledge of around 3,000 characters; a well-educated person is familiar with about 5,000 characters; a professor with perhaps 8,000. More than 50,000 characters exist in all, the great majority never to be used.
Yet the limitation of the written Chinese language is also its strength. Unlike written words formed from alphabets, Chinese characters convey more than phonetic sound or semantic meaning. Traditional writings about calligraphy suggest that written words play multiple roles: not only does a character denote specific meanings, but its very form should reveal itself to be a moral exemplar, as well as a manifestation of the energy of the human body and the vitality of nature itself.
Consider two Tang-dynasty texts that describe calligraphy in human terms, both physical and moral. Here, the properly written character assumes the identity of a Confucian sage, strong in backbone, but spare in flesh:
“[A written character should stand] balanced on all four sides . . . Leaning or standing upright like a proper gentleman, the upper half [of the character] sits comfortably, while the bottom half supports it.” (From an anonymous essay, Tang dynasty)
“Calligraphy by those good in brush strength has much bone; that by those not good in brush strength has much flesh. Calligraphy that has much bone but slight flesh is called sinew-writing; that with much flesh but slight bone is called ink-pig. Calligraphy with much strength and rich in sinew is of sagelike quality; that with neither strength nor sinew is sick. Every writer proceeds in accordance with the manifestation of his digestion and respiration of energy.” (From Bizhentu, 7th century)
Other writings on calligraphy use nature metaphors to express the sense of wonder, the elemental power, conveyed by written words:
“[When viewing calligraphy,] I have seen the wonder of a drop of dew glistening from a dangling needle, a shower of rock hailing down in a raging thunder, a flock of geese gliding [in the sky], frantic beasts stampeding in terror, a phoenix dancing, a startled snake slithering away in fright. (Sun Guoting, 7th century)
A dragon leaping at the Gate of Heaven,
A tiger crouching at the Phoenix Tower.
(Description of the calligraphy of Wang Xizhi by Emperor Wu [r. 502-49])
And so, despite its abstract appearance, calligraphy is not an abstract form. Chinese characters are dynamic, closely bound to the forces of nature and the kinesthetic energies of the human body. But these energies are contained within a balanced framework—supported by a strong skeletal structure—whose equilibrium suggests moral rectitude, indeed, that of the writer himself.
How can a simple character convey all this? The use of brush and ink has much to do with it. The seeming simplicity of the tools is belied by the complexity of effects. A multiplicity of effect is produced in part by varying the consistency and amount of ink carried by the brush. Black ink is formed into solid sticks or cakes that are ground in water on a stone surface (1981.120.1a-c) to produce a liquid. The calligrapher can control the thickness of the ink by varying both the amount of water and the solid ink that is ground. Once he starts writing, by loading the brush sometimes with more ink, sometimes with less, by allowing the ink to almost run out before dipping the brush in the ink again, he creates characters that resemble a shower of rock here, the wonder of a drop of dew there.
The brush, above all, contributes to the myriad possibilities (1994.208). Unlike a rigid instrument such as a stylus or a ballpoint pen, a flexible hair brush allows not only for variations in the width of strokes, but, depending on whether one uses the tip or side of the brush, one can create either two-dimensional or three-dimensional effects. And depending on the speed with which one wields the brush and the amount of pressure exerted on the writing surface, one can create a great variety of effects: rapid strokes bring a leaping dragon to life; deliberate strokes convey the upright posture of a proper gentleman.
The brush becomes an extension of the writer's arm, indeed, his entire body. But the physical gestures produced by the wielding of the brush reveal much more than physical motion; they reveal much of the writer himself: his impulsiveness, restraint, elegance, rebelliousness (1989.363.17; 1989.363.12). Abstract as it appears, calligraphy more readily conveys emotion and something of the individual artist than all the other Chinese visual arts except for landscape painting, which became closely allied with calligraphy. It is no wonder that twentieth-century American Abstract Expressionists felt a kinship to Chinese calligraphers.
But expressive as calligraphy is, it is also an art of control. A counterbalance of order and dynamism is manifested in all aspects of Chinese writing. In traditional Chinese texts, words are arranged in vertical columns that are read from right to left. Traditional texts have no punctuation; nor are proper nouns visually distinguishable from other words. The orderly arrangement of characters is inherent in each individual character as well. One does not write characters in haphazard fashion: an established stroke order ensures that a character is written exactly the same way each time. This not only makes the formidable task of memorization easier, but ensures that each character will be written with a sense of balance and proportion, and that one is able to write with an uninterrupted flow and rhythm. The calligrapher and the dancer have much in common: each must learn choreographed movements; each must maintain compositional order. But once the rules have been observed, each may break free within certain boundaries to express a personal vitality.
The Chinese written language began to develop more than 3,000 years ago and eventually evolved into five basic script types, all of which are still in use today. The earliest writing took the form of pictograms and ideographs that were incised onto the surfaces of jades and oracle bones, or cast into the surface of ritual bronze vessels. Then, as the written language began to take standardized form, it evolved into “seal” script, so named because it remained the script type used on personal seals. By the later Han dynasty (2nd century A.D.), a new regularized form of script known as lishu or “clerical” script, used by government clerks, appeared. It was also in the Han that the flexible hair brush came into regular use, its supple tip producing effects, such as the final wavelike diagonal strokes of some characters, that were not attainable in incised characters. Increasingly cursive forms of writing, known as “running” script (xingshu) (1984.174) and “cursive” script (caoshu), also developed around this time, both as a natural evolution and a response to the aesthetic potential of brush and ink. In these scripts, individual characters are written in abbreviated form. At their most cursive, two or more characters may be linked together, written in a single flourish of the brush. As the individual brushstrokes of clerical script were inflected with the more fluid and asymmetrical features of cursive script, a final script type, known as “standard” script (kaishu), evolved. In this elegant form of writing, each brushstroke is clearly articulated through a complex series of brush movements. These kinaesthetic brushstrokes are then integrated into a dynamically balanced, self-contained whole.
Over the centuries, calligraphers were free to write in any of the five script styles, depending on the text’s function. Beginning by emulating the styles of earlier masters, later writers sought to transform their models to achieve their own personal manner (2000.345.1,2). The calligraphic tradition remains alive today in the work of many contemporary Chinese artists.
Delbanco, Dawn. “Chinese Calligraphy.” In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. http://www.metmuseum.org/toah/hd/chcl/hd_chcl.htm (April 2008)
Barnhart, Richard M. "Chinese Calligraphy: The Inner World of the Brush." Metropolitan Museum of Art Bulletin 30 (April–May 1972), pp. 230–41.
Harrist, Robert E., Jr., and Wen C. Fong. The Embodied Image: Chinese Calligraphy from the John B. Elliott Collection. Princeton: Art Museum, Princeton University, 1999. | http://metmuseum.org/toah/hd/chcl/hd_chcl.htm |
4.15625 | Total # Posts: 11
4th grade math
Usually used with multiplication, creating an expanded algorithm means taking a formula and "drawing it out," or making it longer (and thus easier to follow) by distributing it. For example: Expand (a+b)n. Answer: an + bn. Why: In the problem, n should be ...
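To make that concrete, here is an illustrative worked example (the numbers are mine, not from the original question): 23 × 4 = (20 + 3) × 4 = (20 × 4) + (3 × 4) = 80 + 12 = 92. Writing each partial product out separately is what makes the algorithm "expanded."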
October 26, 2011
If 40.0 g of water at 70 degrees C is mixed with 40.0 g of ethanol at 10.0 degrees C, what is the final temperature of the mixture? How does one go about solving this problem? I found the specific heat of water and ethanol, but now what?
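One way to set this up, assuming the mixture is thermally isolated and using typical textbook specific heats that the question does not supply (water about 4.184 J/g·°C, ethanol about 2.44 J/g·°C): the heat lost by the water must equal the heat gained by the ethanol, so m_w·c_w·(T_f − T_w) + m_e·c_e·(T_f − T_e) = 0. A minimal Python sketch under those assumptions:

    # Isolated mixture: heat lost by water + heat gained by ethanol = 0.
    # The specific heats below are assumed textbook values, not from the post.
    m_w, c_w, T_w = 40.0, 4.184, 70.0  # water: grams, J/(g*degC), degC
    m_e, c_e, T_e = 40.0, 2.44, 10.0   # ethanol: grams, J/(g*degC), degC

    # Solve m_w*c_w*(T_f - T_w) + m_e*c_e*(T_f - T_e) = 0 for T_f.
    T_f = (m_w * c_w * T_w + m_e * c_e * T_e) / (m_w * c_w + m_e * c_e)
    print(round(T_f, 1))  # about 47.9 degC

With these assumed values the mixture settles near 48 degrees C; a different tabulated specific heat for ethanol would shift the answer slightly.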
October 23, 2011
Make sure that all of the variables (sunlight, temperature, space, amount of liquid) are the same for both plants. The only thing that should be different is the type of liquid you give each plant. Good luck!
March 7, 2011
9th grade biology
I believe that the "lines" that you would see (the cell wall and the plasma/cell membrane) would be thicker. They may even look like someone traced them with a pen, for example.
February 8, 2011
9th grade science
I think what you mean is the cell membrane/plasma membrane of a cell. If so, they are the same thing. Unlike the cell wall, the plasma membrane is flexible. In a plant cell, the plasma membrane pushes out against the cell wall to keep its shape. It is always immediately ...
December 16, 2010
During DNA replication, DNA polymerase matches up the pairs of nitrogenous bases, starting from an RNA primer (made by the enzyme primase), to create a new segment of DNA. They are matched to the bases on the parent strand. This occurs on both the leading strand and the ...
October 28, 2010 | http://www.jiskha.com/members/profile/posts.cgi?name=TheQuestion |
4.1875 | The typical rash seen in dengue fever
Synonyms: dengue, breakbone fever
Dengue fever is a mosquito-borne tropical disease caused by the dengue virus. Symptoms typically begin three to fourteen days after infection. These may include a high fever, headache, vomiting, muscle and joint pains, and a characteristic skin rash. Recovery generally takes two to seven days. In a small proportion of cases, the disease develops into the life-threatening dengue hemorrhagic fever, resulting in bleeding, low levels of blood platelets and blood plasma leakage, or into dengue shock syndrome, where dangerously low blood pressure occurs.
Dengue is spread by several species of mosquito of the Aedes type, principally A. aegypti. The virus has five different types; infection with one type usually gives lifelong immunity to that type, but only short-term immunity to the others. Subsequent infection with a different type increases the risk of severe complications. A number of tests are available to confirm the diagnosis including detecting antibodies to the virus or its RNA.
A novel vaccine for dengue fever has been approved in three countries, but it is not yet commercially available. Prevention is by reducing mosquito habitat and limiting exposure to bites. This may be done by getting rid of or covering standing water and wearing clothing that covers much of the body. Treatment of acute dengue is supportive and includes giving fluid either by mouth or intravenously for mild or moderate disease. For more severe cases, blood transfusion may be required. About half a million people require admission to hospital a year. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen should not be used.
Dengue has become a global problem since the Second World War and is common in more than 110 countries. Each year between 50 and 528 million people are infected and approximately 20,000 die. The earliest descriptions of an outbreak date from 1779. Its viral cause and spread were understood by the early 20th century. Apart from eliminating the mosquitoes, work is ongoing for medication targeted directly at the virus.
Signs and symptoms
Typically, people infected with dengue virus are asymptomatic (80%) or have only mild symptoms such as an uncomplicated fever. Others have more severe illness (5%), and in a small proportion it is life-threatening. The incubation period (time between exposure and onset of symptoms) ranges from 3 to 14 days, but most often it is 4 to 7 days. Therefore, travelers returning from endemic areas are unlikely to have dengue if fever or other symptoms start more than 14 days after arriving home. Children often experience symptoms similar to those of the common cold and gastroenteritis (vomiting and diarrhea) and have a greater risk of severe complications, though their initial symptoms, while generally mild, include high fever.
The characteristic symptoms of dengue are sudden-onset fever, headache (typically located behind the eyes), muscle and joint pains, and a rash. The alternative name for dengue, "breakbone fever", comes from the associated muscle and joint pains. The course of infection is divided into three phases: febrile, critical, and recovery.
The febrile phase involves high fever, potentially over 40 °C (104 °F), and is associated with generalized pain and a headache; this usually lasts two to seven days. Nausea and vomiting may also occur. A rash occurs in 50–80% of those with symptoms, appearing on the first or second day as flushed skin, or later in the course of illness (days 4–7) as a measles-like rash. A rash described as "islands of white in a sea of red" has also been observed. Some petechiae (small red spots that do not disappear when the skin is pressed, which are caused by broken capillaries) can appear at this point, as may some mild bleeding from the mucous membranes of the mouth and nose. The fever itself is classically biphasic or saddleback in nature, breaking and then returning for one or two days.
In some people, the disease proceeds to a critical phase as fever resolves. During this period, there is leakage of plasma from the blood vessels, typically lasting one to two days. This may result in fluid accumulation in the chest and abdominal cavity as well as depletion of fluid from the circulation and decreased blood supply to vital organs. There may also be organ dysfunction and severe bleeding, typically from the gastrointestinal tract. Shock (dengue shock syndrome) and hemorrhage (dengue hemorrhagic fever) occur in less than 5% of all cases of dengue; however, those who have previously been infected with other serotypes of dengue virus ("secondary infection") are at an increased risk. This critical phase, while rare, occurs relatively more commonly in children and young adults.
The recovery phase occurs next, with resorption of the leaked fluid into the bloodstream. This usually lasts two to three days. The improvement is often striking, and can be accompanied by severe itching and a slow heart rate. Another rash may occur with either a maculopapular or a vasculitic appearance, which is followed by peeling of the skin. During this stage, a fluid overload state may occur; if it affects the brain, it may cause a reduced level of consciousness or seizures. A feeling of fatigue may last for weeks in adults.
Dengue can occasionally affect several other body systems, either in isolation or along with the classic dengue symptoms. A decreased level of consciousness occurs in 0.5–6% of severe cases, which is attributable either to inflammation of the brain by the virus or indirectly as a result of impairment of vital organs, for example, the liver.
Other neurological disorders have been reported in the context of dengue, such as transverse myelitis and Guillain-Barré syndrome. Infection of the heart and acute liver failure are among the rarer complications.
Cause
Dengue fever virus (DENV) is an RNA virus of the family Flaviviridae; genus Flavivirus. Other members of the same genus include yellow fever virus, West Nile virus, St. Louis encephalitis virus, Japanese encephalitis virus, tick-borne encephalitis virus, Kyasanur forest disease virus, and Omsk hemorrhagic fever virus. Most are transmitted by arthropods (mosquitoes or ticks), and are therefore also referred to as arboviruses (arthropod-borne viruses).
The dengue virus genome (genetic material) contains about 11,000 nucleotide bases, which code for the three different types of protein molecules (C, prM and E) that form the virus particle and seven other types of protein molecules (NS1, NS2a, NS2b, NS3, NS4a, NS4b, NS5) that are found in infected host cells only and are required for replication of the virus. There are five strains of the virus, called serotypes, of which the first four are referred to as DENV-1, DENV-2, DENV-3 and DENV-4. The fifth type was announced in 2013. The distinctions between the serotypes are based on their antigenicity.
Dengue virus is primarily transmitted by Aedes mosquitoes, particularly A. aegypti. These mosquitoes usually live between the latitudes of 35° North and 35° South below an elevation of 1,000 metres (3,300 ft). They typically bite during the early morning and in the evening, but they may bite and thus spread infection at any time of day. Other Aedes species that transmit the disease include A. albopictus, A. polynesiensis and A. scutellaris. Humans are the primary host of the virus, but it also circulates in nonhuman primates. An infection can be acquired via a single bite. A female mosquito that takes a blood meal from a person infected with dengue fever, during the initial 2–10 day febrile period, becomes itself infected with the virus in the cells lining its gut. About 8–10 days later, the virus spreads to other tissues including the mosquito's salivary glands and is subsequently released into its saliva. The virus seems to have no detrimental effect on the mosquito, which remains infected for life. Aedes aegypti is particularly involved, as it prefers to lay its eggs in artificial water containers, to live in close proximity to humans, and to feed on people rather than other vertebrates.
Dengue can also be transmitted via infected blood products and through organ donation. In countries such as Singapore, where dengue is endemic, the risk is estimated to be between 1.6 and 6 per 10,000 transfusions. Vertical transmission (from mother to child) during pregnancy or at birth has been reported. Other person-to-person modes of transmission have also been reported, but are very unusual. The genetic variation in dengue viruses is region specific, suggesting that establishment of the virus in new territories is relatively infrequent, despite dengue emerging in new regions in recent decades.
Severe disease is more common in babies and young children, and in contrast to many other infections, it is more common in children who are relatively well nourished. Other risk factors for severe disease include female sex, high body mass index, and viral load. While each serotype can cause the full spectrum of disease, virus strain is a risk factor. Infection with one serotype is thought to produce lifelong immunity to that type, but only short-term protection against the other three. The risk of severe disease from secondary infection increases if someone previously exposed to serotype DENV-1 contracts serotype DENV-2 or DENV-3, or if someone previously exposed to DENV-3 acquires DENV-2. Dengue can be life-threatening in people with chronic diseases such as diabetes and asthma.
Polymorphisms (normal variations) in particular genes have been linked with an increased risk of severe dengue complications. Examples include the genes coding for the proteins known as TNFα, mannan-binding lectin, CTLA4, TGFβ, DC-SIGN, PLCE1, and particular forms of human leukocyte antigen from gene variations of HLA-B. A common genetic abnormality, especially in Africans, known as glucose-6-phosphate dehydrogenase deficiency, appears to increase the risk. Polymorphisms in the genes for the vitamin D receptor and FcγR seem to offer protection against severe disease in secondary dengue infection.
Mechanism
When a mosquito carrying dengue virus bites a person, the virus enters the skin together with the mosquito's saliva. It binds to and enters white blood cells, and reproduces inside the cells while they move throughout the body. The white blood cells respond by producing a number of signaling proteins, such as cytokines and interferons, which are responsible for many of the symptoms, such as the fever, the flu-like symptoms and the severe pains. In severe infection, the virus production inside the body is greatly increased, and many more organs (such as the liver and the bone marrow) can be affected. Fluid from the bloodstream leaks through the wall of small blood vessels into body cavities due to capillary permeability. As a result, less blood circulates in the blood vessels, and the blood pressure becomes so low that it cannot supply sufficient blood to vital organs. Furthermore, dysfunction of the bone marrow due to infection of the stromal cells leads to reduced numbers of platelets, which are necessary for effective blood clotting; this increases the risk of bleeding, the other major complication of dengue fever.
Once inside the skin, dengue virus binds to Langerhans cells (a population of dendritic cells in the skin that identifies pathogens). The virus enters the cells through binding between viral proteins and membrane proteins on the Langerhans cell, specifically the C-type lectins called DC-SIGN, mannose receptor and CLEC5A. DC-SIGN, a non-specific receptor for foreign material on dendritic cells, seems to be the main point of entry. The dendritic cell moves to the nearest lymph node. Meanwhile, the virus genome is translated in membrane-bound vesicles on the cell's endoplasmic reticulum, where the cell's protein synthesis apparatus produces new viral proteins that replicate the viral RNA and begin to form viral particles. Immature virus particles are transported to the Golgi apparatus, the part of the cell where some of the proteins receive necessary sugar chains (glycoproteins). The now mature new viruses bud on the surface of the infected cell and are released by exocytosis. They are then able to enter other white blood cells, such as monocytes and macrophages.
The initial reaction of infected cells is to produce interferon, a cytokine that raises a number of defenses against viral infection through the innate immune system by augmenting the production of a large group of proteins mediated by the JAK-STAT pathway. Some serotypes of dengue virus appear to have mechanisms to slow down this process. Interferon also activates the adaptive immune system, which leads to the generation of antibodies against the virus as well as T cells that directly attack any cell infected with the virus. Various antibodies are generated; some bind closely to the viral proteins and target them for phagocytosis (ingestion by specialized cells and destruction), but some bind the virus less well and appear instead to deliver the virus into a part of the phagocytes where it is not destroyed but is able to replicate further.
It is not entirely clear why secondary infection with a different strain of dengue virus places people at risk of dengue hemorrhagic fever and dengue shock syndrome. The most widely accepted hypothesis is that of antibody-dependent enhancement (ADE). The exact mechanism behind ADE is unclear. It may be caused by poor binding of non-neutralizing antibodies and delivery into the wrong compartment of white blood cells that have ingested the virus for destruction. There is a suspicion that ADE is not the only mechanism underlying severe dengue-related complications, and various lines of research have implied a role for T cells and soluble factors such as cytokines and the complement system.
Severe disease is marked by the problems of capillary permeability (an allowance of fluid and protein normally contained within blood to pass) and disordered blood clotting. These changes appear associated with a disordered state of the endothelial glycocalyx, which acts as a molecular filter of blood components. Leaky capillaries (and the critical phase) are thought to be caused by an immune system response. Other processes of interest include infected cells that become necrotic—which affect both coagulation and fibrinolysis (the opposing systems of blood clotting and clot degradation)—and low platelets in the blood, also a factor in normal clotting.
Diagnosis
|Warning signs of severe dengue|
|Worsening abdominal pain|
|High hematocrit with low platelets|
|Lethargy or restlessness|
The diagnosis of dengue is typically made clinically, on the basis of reported symptoms and physical examination; this applies especially in endemic areas. However, early disease can be difficult to differentiate from other viral infections. A probable diagnosis is based on the findings of fever plus two of the following: nausea and vomiting, rash, generalized pains, low white blood cell count, positive tourniquet test, or any warning sign (see table) in someone who lives in an endemic area. Warning signs typically occur before the onset of severe dengue. The tourniquet test, which is particularly useful in settings where no laboratory investigations are readily available, involves the application of a blood pressure cuff at between the diastolic and systolic pressure for five minutes, followed by the counting of any petechial hemorrhages; a higher number makes a diagnosis of dengue more likely, with the cutoff being more than 10 to 20 petechiae per square inch (6.25 cm²).
The diagnosis should be considered in anyone who develops a fever within two weeks of being in the tropics or subtropics. It can be difficult to distinguish dengue fever from chikungunya, a similar viral infection that shares many symptoms and occurs in similar parts of the world to dengue. Often, investigations are performed to exclude other conditions that cause similar symptoms, such as malaria, leptospirosis, viral hemorrhagic fever, typhoid fever, meningococcal disease, measles, and influenza. Zika fever also has symptoms similar to those of dengue.
The earliest change detectable on laboratory investigations is a low white blood cell count, which may then be followed by low platelets and metabolic acidosis. A moderately elevated level of aminotransferase (AST and ALT) from the liver is commonly associated with low platelets and white blood cells. In severe disease, plasma leakage results in hemoconcentration (as indicated by a rising hematocrit) and hypoalbuminemia. Pleural effusions or ascites can be detected by physical examination when large, but the demonstration of fluid on ultrasound may assist in the early identification of dengue shock syndrome. The use of ultrasound is limited by lack of availability in many settings. Dengue shock syndrome is present if pulse pressure drops to ≤ 20 mm Hg along with peripheral vascular collapse. Peripheral vascular collapse is determined in children via delayed capillary refill, rapid heart rate, or cold extremities. While warning signs are an important aspect for early detection of potential serious disease, the evidence for any specific clinical or laboratory marker is weak.
The World Health Organization's 2009 classification divides dengue fever into two groups: uncomplicated and severe. This replaces the 1997 WHO classification, which needed to be simplified as it had been found to be too restrictive, though the older classification is still widely used including by the World Health Organization's Regional Office for South-East Asia as of 2011. Severe dengue is defined as that associated with severe bleeding, severe organ dysfunction, or severe plasma leakage while all other cases are uncomplicated. The 1997 classification divided dengue into undifferentiated fever, dengue fever, and dengue hemorrhagic fever. Dengue hemorrhagic fever was subdivided further into grades I–IV. Grade I is the presence only of easy bruising or a positive tourniquet test in someone with fever, grade II is the presence of spontaneous bleeding into the skin and elsewhere, grade III is the clinical evidence of shock, and grade IV is shock so severe that blood pressure and pulse cannot be detected. Grades III and IV are referred to as "dengue shock syndrome".
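As a minimal sketch of the 1997 grading logic just described (the function and its boolean inputs are illustrative, not taken from any clinical software, and real triage uses far more information than these flags):

    def dhf_grade(bruising_or_positive_tourniquet=False,
                  spontaneous_bleeding=False,
                  clinical_shock=False,
                  undetectable_bp_and_pulse=False):
        # Map 1997 WHO findings in a febrile patient to DHF grades I-IV.
        if undetectable_bp_and_pulse:
            return "IV"  # grades III and IV are "dengue shock syndrome"
        if clinical_shock:
            return "III"
        if spontaneous_bleeding:
            return "II"
        if bruising_or_positive_tourniquet:
            return "I"
        return None  # grading criteria for dengue hemorrhagic fever not met

For example, dhf_grade(spontaneous_bleeding=True) returns "II", matching the description above.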
The diagnosis of dengue fever may be confirmed by microbiological laboratory testing. This can be done by virus isolation in cell cultures, nucleic acid detection by PCR, viral antigen detection (such as for NS1) or specific antibodies (serology). Virus isolation and nucleic acid detection are more accurate than antigen detection, but these tests are not widely available due to their greater cost. Detection of NS1 during the febrile phase of a primary infection may be greater than 90% sensitive; however, sensitivity is only 60–80% in subsequent infections. All tests may be negative in the early stages of the disease. PCR and viral antigen detection are more accurate in the first seven days. In 2012 a PCR test was introduced that can run on equipment used to diagnose influenza; this is likely to improve access to PCR-based diagnosis.
These laboratory tests are only of diagnostic value during the acute phase of the illness with the exception of serology. Tests for dengue virus-specific antibodies, types IgG and IgM, can be useful in confirming a diagnosis in the later stages of the infection. Both IgG and IgM are produced after 5–7 days. The highest levels (titres) of IgM are detected following a primary infection, but IgM is also produced in reinfection. IgM becomes undetectable 30–90 days after a primary infection, but earlier following re-infections. IgG, by contrast, remains detectable for over 60 years and, in the absence of symptoms, is a useful indicator of past infection. After a primary infection IgG reaches peak levels in the blood after 14–21 days. In subsequent re-infections, levels peak earlier and the titres are usually higher. Both IgG and IgM provide protective immunity to the infecting serotype of the virus. In testing for IgG and IgM antibodies there may be cross-reactivity with other flaviviruses which may result in a false positive after recent infections or vaccinations with yellow fever virus or Japanese encephalitis. The detection of IgG alone is not considered diagnostic unless blood samples are collected 14 days apart and a greater than fourfold increase in levels of specific IgG is detected. In a person with symptoms, the detection of IgM is considered diagnostic.
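The paired-sample IgG criterion described above reduces to a simple check. A hedged sketch (the function and variable names are illustrative):

    def igg_rise_is_diagnostic(first_titer, second_titer, days_apart):
        # Fourfold-rise rule for paired IgG samples, per the text above.
        if days_apart < 14:
            return False  # samples must be collected 14 days apart
        return second_titer > 4 * first_titer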
Prevention
Prevention depends on control of and protection from the bites of the mosquito that transmits it. The World Health Organization recommends an Integrated Vector Control program consisting of five elements:
- Advocacy, social mobilization and legislation to ensure that public health bodies and communities are strengthened;
- Collaboration between the health and other sectors (public and private);
- An integrated approach to disease control to maximize use of resources;
- Evidence-based decision making to ensure any interventions are targeted appropriately; and
- Capacity-building to ensure an adequate response to the local situation.
The primary method of controlling A. aegypti is by eliminating its habitats. This is done by getting rid of open sources of water, or if this is not possible, by adding insecticides or biological control agents to these areas. Generalized spraying with organophosphate or pyrethroid insecticides, while sometimes done, is not thought to be effective. Reducing open collections of water through environmental modification is the preferred method of control, given the concerns of negative health effects from insecticides and greater logistical difficulties with control agents. People can prevent mosquito bites by wearing clothing that fully covers the skin, using mosquito netting while resting, and/or the application of insect repellent (DEET being the most effective). However, these methods appear not to be sufficiently effective, as the frequency of outbreaks appears to be increasing in some areas, probably due to urbanization increasing the habitat of A. aegypti. The range of the disease appears to be expanding possibly due to climate change.
As of December 2015, there is no commercially available vaccine for dengue fever. One that is partially effective is predicted to become available in Mexico, the Philippines, and Brazil in early 2016. It received approval in December 2015. The vaccine is produced by Sanofi and goes by the brand name Dengvaxia. It is based on a weakened combination of the yellow fever virus and each of the four dengue serotypes. Two studies of the vaccine found it was 60% effective and prevented more than 80 to 90% of severe cases. This is less effective than some had hoped.
There are ongoing programs working on a dengue vaccine to cover all four serotypes. Now that there is a fifth serotype, this will need to be factored in. One of the concerns is that a vaccine could increase the risk of severe disease through antibody-dependent enhancement (ADE). The ideal vaccine is safe, effective after one or two injections, covers all serotypes, does not contribute to ADE, is easily transported and stored, and is both affordable and cost-effective.
International Anti-Dengue Day is observed every year on June 15. The idea was first agreed upon in 2010 with the first event held in Jakarta, Indonesia in 2011. Further events were held in 2012 in Yangon, Myanmar and in 2013 in Vietnam. Goals are to increase public awareness about dengue, mobilize resources for its prevention and control and, to demonstrate the Asian region’s commitment in tackling the disease.
Management
There are no specific antiviral drugs for dengue; however, maintaining proper fluid balance is important. Treatment depends on the symptoms. Those who are able to drink, are passing urine, have no "warning signs" and are otherwise healthy can be managed at home with daily follow up and oral rehydration therapy. Those who have other health problems, have "warning signs" or who cannot manage regular follow up should be cared for in hospital. In those with severe dengue, care should be provided in an area where there is access to an intensive care unit.
Intravenous hydration, if required, is typically only needed for one or two days. In children with shock due to dengue, a rapid dose of 20 mL/kg is reasonable. The rate of fluid administration is then titrated to a urinary output of 0.5–1 mL/kg/h, stable vital signs, and normalization of hematocrit. The smallest amount of fluid required to achieve this is recommended.
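To make the weight-based arithmetic concrete (the 20 kg body weight is illustrative only; this is a sketch, not dosing guidance):

    weight_kg = 20.0                   # illustrative child's weight

    bolus_ml = 20 * weight_kg          # rapid dose of 20 mL/kg -> 400 mL
    urine_low_ml_h = 0.5 * weight_kg   # titration target: 10 mL of urine per hour
    urine_high_ml_h = 1.0 * weight_kg  # up to 20 mL of urine per hour

    print(bolus_ml, urine_low_ml_h, urine_high_ml_h)  # 400.0 10.0 20.0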
Invasive medical procedures such as nasogastric intubation, intramuscular injections and arterial punctures are avoided, in view of the bleeding risk. Paracetamol (acetaminophen) is used for fever and discomfort while NSAIDs such as ibuprofen and aspirin are avoided as they might aggravate the risk of bleeding. Blood transfusion is initiated early in people presenting with unstable vital signs in the face of a decreasing hematocrit, rather than waiting for the hemoglobin concentration to decrease to some predetermined "transfusion trigger" level. Packed red blood cells or whole blood are recommended, while platelets and fresh frozen plasma are usually not. There is not enough evidence to determine if corticosteroids have a positive or negative effect in dengue fever.
During the recovery phase intravenous fluids are discontinued to prevent a state of fluid overload. If fluid overload occurs and vital signs are stable, stopping further fluid may be all that is needed. If a person is outside of the critical phase, a loop diuretic such as furosemide may be used to eliminate excess fluid from the circulation.
Epidemiology
Most people with dengue recover without any ongoing problems. The fatality rate is 1–5%, and less than 1% with adequate treatment; however, those who develop significantly low blood pressure may have a fatality rate of up to 26%. Dengue is common in more than 110 countries. It infects 50 to 528 million people worldwide a year, leading to half a million hospitalizations and approximately 20,000 deaths. For the decade of the 2000s, 12 countries in Southeast Asia were estimated to have about 3 million infections and 6,000 deaths annually. It is reported in at least 22 countries in Africa, but is likely present in all of them, with 20% of the population at risk. This makes it one of the most common vector-borne diseases worldwide.
Infections are most commonly acquired in the urban environment. In recent decades, the expansion of villages, towns and cities in the areas in which it is common, and the increased mobility of people has increased the number of epidemics and circulating viruses. Dengue fever, which was once confined to Southeast Asia, has now spread to Southern China, countries in the Pacific Ocean and America, and might pose a threat to Europe.
Rates of dengue increased 30-fold between 1960 and 2010. This increase is believed to be due to a combination of urbanization, population growth, increased international travel, and global warming. The geographical distribution is around the equator. Of the 2.5 billion people living in areas where it is common, 70% are from Asia and the Pacific. An infection with dengue is second only to malaria as a diagnosed cause of fever among travelers returning from the developing world. It is the most common viral disease transmitted by arthropods, and has a disease burden estimated at 1,600 disability-adjusted life years per million population. The World Health Organization counts dengue as one of seventeen neglected tropical diseases.
Like most arboviruses, dengue virus is maintained in nature in cycles that involve preferred blood-sucking vectors and vertebrate hosts. The viruses are maintained in the forests of Southeast Asia and Africa by transmission from female Aedes mosquitoes—of species other than A. aegypti—to their offspring and to lower primates. In towns and cities, the virus is primarily transmitted by the highly domesticated A. aegypti. In rural settings the virus is transmitted to humans by A. aegypti and other species of Aedes such as A. albopictus. Both these species had expanding ranges in the second half of the 20th century. In all settings the infected lower primates or humans greatly increase the number of circulating dengue viruses, in a process called amplification.
History
The first record of a case of probable dengue fever is in a Chinese medical encyclopedia from the Jin Dynasty (265–420 AD) which referred to a "water poison" associated with flying insects. The primary vector, A. aegypti, spread out of Africa in the 15th to 19th centuries due in part to increased globalization secondary to the slave trade. There have been descriptions of epidemics in the 17th century, but the most plausible early reports of dengue epidemics are from 1779 and 1780, when an epidemic swept across Asia, Africa and North America. From that time until 1940, epidemics were infrequent.
In 1906, transmission by the Aedes mosquitoes was confirmed, and in 1907 dengue was the second disease (after yellow fever) that was shown to be caused by a virus. Further investigations by John Burton Cleland and Joseph Franklin Siler completed the basic understanding of dengue transmission.
The marked spread of dengue during and after the Second World War has been attributed to ecologic disruption. The same trends also led to the spread of different serotypes of the disease to new areas, and to the emergence of dengue hemorrhagic fever. This severe form of the disease was first reported in the Philippines in 1953; by the 1970s, it had become a major cause of child mortality and had emerged in the Pacific and the Americas. Dengue hemorrhagic fever and dengue shock syndrome were first noted in Central and South America in 1981, as DENV-2 was contracted by people who had previously been infected with DENV-1 several years earlier.
The origins of the Spanish word dengue are not certain, but it is possibly derived from dinga in the Swahili phrase Ka-dinga pepo, which describes the disease as being caused by an evil spirit. Slaves in the West Indies having contracted dengue were said to have the posture and gait of a dandy, and the disease was known as "dandy fever".
The term "break-bone fever" was applied by physician and United States Founding Father Benjamin Rush, in a 1789 report of the 1780 epidemic in Philadelphia. In the report title he uses the more formal term "bilious remitting fever". The term dengue fever came into general use only after 1828. Other historical terms include "breakheart fever" and "la dengue". Terms for severe disease include "infectious thrombocytopenic purpura" and "Philippine", "Thai", or "Singapore hemorrhagic fever".
Research
With regard to vector control, a number of novel methods have been used to reduce mosquito numbers with some success, including the placement of the guppy (Poecilia reticulata) or copepods in standing water to eat the mosquito larvae. There are also trials with genetically modified male A. aegypti that, after release into the wild, mate with females and render their offspring unable to fly.
Attempts are ongoing to infect the mosquito population with bacteria of the Wolbachia genus, which make the mosquitoes partially resistant to dengue virus. While artificially induced infections with Wolbachia are effective, it is unclear if naturally acquired infections are protective. Work is still ongoing as of 2015 to determine the best type of Wolbachia to use.
Apart from attempts to control the spread of the Aedes mosquito there are ongoing efforts to develop antiviral drugs that would be used to treat attacks of dengue fever and prevent severe complications. Discovery of the structure of the viral proteins may aid the development of effective drugs. There are several plausible targets. The first approach is inhibition of the viral RNA-dependent RNA polymerase (coded by NS5), which copies the viral genetic material, with nucleoside analogs. Secondly, it may be possible to develop specific inhibitors of the viral protease (coded by NS3), which splices viral proteins. Finally, it may be possible to develop entry inhibitors, which stop the virus entering cells, or inhibitors of the 5′ capping process, which is required for viral replication.
- "Dengue and severe dengue Fact sheet N°117". WHO. May 2015. Retrieved 3 February 2016.
- Kularatne, SA (15 September 2015). "Dengue fever.". BMJ (Clinical research ed.) 351: h4661. PMID 26374064.
- Normile D (2013). "Surprising new dengue virus throws a spanner in disease control efforts". Science 342 (6157): 415. doi:10.1126/science.342.6157.415. PMID 24159024.
- Maron, Dina (December 30, 2015). "The mosquito-borne disease afflicts millions, and has had no approved vaccine until now". Scientific America. Retrieved 3 February 2016.
- Ranjit S, Kissoon N (January 2011). "Dengue hemorrhagic fever and shock syndromes". Pediatr. Crit. Care Med. 12 (1): 90–100. doi:10.1097/PCC.0b013e3181e911a7. PMID 20639791.
- Gubler DJ (July 1998). "Dengue and dengue hemorrhagic fever". Clin. Microbiol. Rev. 11 (3): 480–96. PMC 88892. PMID 9665979.
- Whitehorn J, Farrar J (2010). "Dengue". Br. Med. Bull. 95: 161–73. doi:10.1093/bmb/ldq019. PMID 20616106.
- Bhatt S, Gething PW, Brady OJ, et al. (April 2013). "The global distribution and burden of dengue". Nature 496 (7446): 504–7. doi:10.1038/nature12060. PMC 3651993. PMID 23563266.
- Carabali, M; Hernandez, LM; Arauz, MJ; Villar, LA; Ridde, V (30 July 2015). "Why are people with dengue dying? A scoping review of determinants for dengue mortality.". BMC infectious diseases 15: 301. PMID 26223700.
- Henchal EA, Putnak JR (October 1990). "The dengue viruses". Clin. Microbiol. Rev. 3 (4): 376–96. doi:10.1128/CMR.3.4.376. PMC 358169. PMID 2224837.
- Noble CG, Chen YL, Dong H; et al. (March 2010). "Strategies for development of Dengue virus inhibitors". Antiviral Res. 85 (3): 450–62. doi:10.1016/j.antiviral.2009.12.011. PMID 20060421.
- WHO (2009), pp. 14–16.
- Reiter P (11 March 2010). "Yellow fever and dengue: a threat to Europe?". Euro Surveill 15 (10): 19509. PMID 20403310.
- Gubler DJ (2010). "Dengue viruses". In Mahy BWJ, Van Regenmortel MHV. Desk Encyclopedia of Human and Medical Virology. Boston: Academic Press. pp. 372–82. ISBN 0-12-375147-0.
- Varatharaj A (2010). "Encephalitis in the clinical spectrum of dengue infection". Neurol. India 58 (4): 585–91. doi:10.4103/0028-3886.68655. PMID 20739797.
- Simmons CP, Farrar JJ, Nguyen vV, Wills B (April 2012). "Dengue". N Engl J Med 366 (15): 1423–32. doi:10.1056/NEJMra1110265. PMID 22494122.
- WHO (2009), pp. 25–27.
- Chen LH, Wilson ME (October 2010). "Dengue and chikungunya infections in travelers". Current Opinion in Infectious Diseases 23 (5): 438–44. doi:10.1097/QCO.0b013e32833c1d16. PMID 20581669.
- Wolff K, Johnson RA (eds.) (2009). "Viral infections of skin and mucosa". Fitzpatrick's color atlas and synopsis of clinical dermatology (6th ed.). New York: McGraw-Hill Medical. pp. 810–2. ISBN 978-0-07-159975-7.
- Knoop KJ, Stack LB, Storrow A, Thurman RJ (eds.) (2010). "Tropical medicine". Atlas of emergency medicine (3rd ed.). New York: McGraw-Hill Professional. pp. 658–9. ISBN 0-07-149618-1.
- Gould EA, Solomon T (February 2008). "Pathogenic flaviviruses". The Lancet 371 (9611): 500–9. doi:10.1016/S0140-6736(08)60238-X. PMID 18262042.
- Rodenhuis-Zybert IA, Wilschut J, Smit JM (August 2010). "Dengue virus life cycle: viral and host factors modulating infectivity". Cell. Mol. Life Sci. 67 (16): 2773–86. doi:10.1007/s00018-010-0357-z. PMID 20372965.
- Carod-Artal FJ, Wichmann O, Farrar J, Gascón J (September 2013). "Neurological complications of dengue virus infection". Lancet Neurol 12 (9): 906–19. doi:10.1016/S1474-4422(13)70150-9. PMID 23948177.
- Guzman MG, Halstead SB, Artsob H, et al. (December 2010). "Dengue: a continuing global threat". Nature Reviews Microbiology 8 (12 Suppl): S7–S16. doi:10.1038/nrmicro2460. PMID 21079655.
- Solomonides, Tony (2010). Healthgrid applications and core technologies : proceedings of HealthGrid 2010 ([Online-Ausg.]. ed.). Amsterdam: IOS Press. p. 235. ISBN 978-1-60750-582-2.
- WHO (2009), pp. 59–64.
- Global Strategy For Dengue Prevention And Control (PDF). World Health Organization. 2012. pp. 16–17. ISBN 978-92-4-150403-4.
- "Travelers' Health Outbreak Notice". Centers for Disease Control and Prevention. 2 June 2010. Archived from the original on 26 August 2010. Retrieved 27 August 2010.
- "Vector-borne viral infections". World Health Organization. Retrieved 17 January 2011.
- Center for Disease Control and Prevention. "Chapter 5 – dengue fever (DF) and dengue hemorrhagic fever (DHF)". 2010 Yellow Book. Retrieved 23 December 2010.
- St. Georgiev, Vassil (2009). National Institute of Allergy and Infectious Diseases, NIH. (1 ed.). Totowa, N.J.: Humana. p. 268. ISBN 978-1-60327-297-1.
- Wilder-Smith A, Chen LH, Massad E, Wilson ME (January 2009). "Threat of dengue to blood safety in dengue-endemic countries". Emerg. Infect. Dis. 15 (1): 8–11. doi:10.3201/eid1501.071097. PMC 2660677. PMID 19116042.
- Stramer SL, Hollinger FB, Katz LM, et al. (August 2009). "Emerging infectious disease agents and their potential threat to transfusion safety". Transfusion. 49 Suppl 2: 1S–29S. doi:10.1111/j.1537-2995.2009.02279.x. PMID 19686562.
- Teo D, Ng LC, Lam S (April 2009). "Is dengue a threat to the blood supply?". Transfus Med 19 (2): 66–77. doi:10.1111/j.1365-3148.2009.00916.x. PMC 2713854. PMID 19392949.
- Wiwanitkit V (January 2010). "Unusual mode of transmission of dengue". Journal of Infection in Developing Countries 4 (1): 51–4. doi:10.3855/jidc.145. PMID 20130380.
- Martina BE, Koraka P, Osterhaus AD (October 2009). "Dengue virus pathogenesis: an integrated view". Clin. Microbiol. Rev. 22 (4): 564–81. doi:10.1128/CMR.00035-09. PMC 2772360. PMID 19822889.
- WHO (2009), pp. 10–11.
- Halstead, Scott B. (2008). Dengue. London: Imperial College Press. p. 180 & 429. ISBN 978-1-84816-228-0.
- WHO (2009), pp. 90–95.
- Musso, D.; Nilles, E.J.; Cao-Lormeau, V.-M. "Rapid spread of emerging Zika virus in the Pacific area". Clinical Microbiology and Infection 20 (10): O595–O596. doi:10.1111/1469-0691.12707.
- Yacoub, Sophie; Wills, Bridget (2014). "Predicting outcome from dengue". BMC Medicine 12 (1): 147. doi:10.1186/s12916-014-0147-9. PMC 4154521. PMID 25259615.
- Comprehensive guidelines for prevention and control of dengue and dengue haemorrhagic fever. (PDF) (Rev. and expanded. ed.). New Delhi, India: World Health Organization Regional Office for South-East Asia. 2011. p. 17. ISBN 978-92-9022-387-0.
- WHO (1997). "Chapter 2: clinical diagnosis". Dengue haemorrhagic fever: diagnosis, treatment, prevention and control (PDF) (2nd ed.). Geneva: World Health Organization. pp. 12–23. ISBN 92-4-154500-3.
- Wiwanitkit, V (July 2010). "Dengue fever: diagnosis and treatment". Expert review of anti-infective therapy 8 (7): 841–5. doi:10.1586/eri.10.53. PMID 20586568.
- "New CDC test for dengue approved". Centers for Disease Control and Prevention. 20 June 2012.
- WHO (2009) p. 137–146.
- Pollack, Andrew (2015-12-09). "First Dengue Fever Vaccine Approved by Mexico". The New York Times. ISSN 0362-4331. Retrieved 2015-12-10.
- "Dengvaxia®, World’s First Dengue Vaccine, Approved in Mexico". www.sanofipasteur.com. Retrieved 2015-12-10.
- Guy B, Barrere B, Malinowski C, Saville M, Teyssou R, Lang J (September 2011). "From research to phase III: preclinical, industrial and clinical development of the Sanofi Pasteur tetravalent dengue vaccine". Vaccine 29 (42): 7229–41. doi:10.1016/j.vaccine.2011.06.094. PMID 21745521.
- Villar, Luis; Dayan, Gustavo Horacio; Arredondo-García, José Luis; Rivera, Doris Maribel; Cunha, Rivaldo; Deseda, Carmen; Reynales, Humberto; Costa, Maria Selma; Morales-Ramírez, Javier Osvaldo; Carrasquilla, Gabriel; Rey, Luis Carlos; Dietze, Reynaldo; Luz, Kleber; Rivas, Enrique; Montoya, Maria Consuelo Miranda; Supelano, Margarita Cortés; Zambrano, Betzana; Langevin, Edith; Boaz, Mark; Tornieporth, Nadia; Saville, Melanie; Noriega, Fernando (3 November 2014). "Efficacy of a Tetravalent Dengue Vaccine in Children in Latin America". New England Journal of Medicine 372 (2): 141103114505002. doi:10.1056/NEJMoa1411037. PMID 25365753.
- Villar, L; Dayan, GH; Arredondo-García, JL; Rivera, DM; Cunha, R; Deseda, C; Reynales, H; Costa, MS; Morales-Ramírez, JO; Carrasquilla, G; Rey, LC; Dietze, R; Luz, K; Rivas, E; Miranda Montoya, MC; Cortés Supelano, M; Zambrano, B; Langevin, E; Boaz, M; Tornieporth, N; Saville, M; Noriega, F; CYD15 Study, Group (8 January 2015). "Efficacy of a tetravalent dengue vaccine in children in Latin America.". The New England Journal of Medicine 372 (2): 113–23. doi:10.1056/nejmoa1411037. PMID 25365753.
- Webster DP, Farrar J, Rowland-Jones S (November 2009). "Progress towards a dengue vaccine". Lancet Infect Dis 9 (11): 678–87. doi:10.1016/S1473-3099(09)70254-3. PMID 19850226.
- "Marking ASEAN Dengue Day". Retrieved 16 June 2015.
- ACTION AGAINST DENGUE Dengue Day Campaigns Across Asia. World Health Organization. 2011. ISBN 9789290615392.
- WHO (2009), pp. 32–37.
- de Caen, AR; Berg, MD; Chameides, L; Gooden, CK; Hickey, RW; Scott, HF; Sutton, RM; Tijssen, JA; Topjian, A; van der Jagt, ÉW; Schexnayder, SM; Samson, RA (3 November 2015). "Part 12: Pediatric Advanced Life Support: 2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care.". Circulation 132 (18 Suppl 2): S526–42. doi:10.1161/CIR.0000000000000266. PMID 26473000.
- WHO (2009), pp. 40–43.
- Zhang, F; Kramer, CV (1 July 2014). "Corticosteroids for dengue infection.". The Cochrane database of systematic reviews 7: CD003488. doi:10.1002/14651858.CD003488.pub3. PMID 24984082.
- Shepard DS, Undurraga EA, Halasa YA (2013). Gubler, Duane J, ed. "Economic and disease burden of dengue in Southeast Asia". PLoS Negl Trop Dis 7 (2): e2055. doi:10.1371/journal.pntd.0002055. PMC 3578748. PMID 23437406.
- Amarasinghe, A; Kuritsk, JN; Letson, GW; Margolis, HS (August 2011). "Dengue virus infection in Africa.". Emerging Infectious Diseases 17 (8): 1349–54. doi:10.3201/eid1708.101515. PMC 3381573. PMID 21801609.
- WHO (2009), p. 3.
- Neglected Tropical Diseases. "The 17 neglected tropical diseases". World Health Organization. Retrieved 10 April 2013.
- Anonymous (2006). "Etymologia: dengue" (PDF). Emerg. Infec. Dis. 12 (6): 893. doi:10.3201/eid1206.ET1206.
- Anonymous (15 June 1998). "Definition of Dandy fever". MedicineNet.com. Retrieved 25 December 2010.
- Halstead SB (2008). Dengue (Tropical Medicine: Science and Practice). River Edge, N.J: Imperial College Press. pp. 1–10. ISBN 1-84816-228-6.
- Barrett AD, Stanberry LR (2009). Vaccines for biodefense and emerging and neglected diseases. San Diego: Academic. pp. 287–323. ISBN 0-12-369408-6.
- WHO (2009), p. 71.
- Fong, I (2013). Challenges in Infectious Diseases. Springer. p. 219. ISBN 978-1-4614-4496-1.
- "'Bug' could combat dengue fever". BBC NEWS (British Broadcasting Corporation). 2 January 2009.
- Johnson, KN (4 November 2015). "The Impact of Wolbachia on Virus Infection in Mosquitoes.". Viruses 7 (11): 5705–17. doi:10.3390/v7112903. PMC 4664976. PMID 26556361.
- Lambrechts, L; Ferguson, NM; Harris, E; Holmes, EC; McGraw, EA; O'Neill, SL; Ooi, EE; Ritchie, SA; Ryan, PA; Scott, TW; Simmons, CP; Weaver, SC (July 2015). "Assessing the epidemiological effect of wolbachia for dengue control.". The Lancet. Infectious diseases 15 (7): 862–6. doi:10.1016/S1473-3099(15)00091-2. PMID 26051887.
- Sampath A, Padmanabhan R (January 2009). "Molecular targets for flavivirus drug discovery". Antiviral Res. 81 (1): 6–15. doi:10.1016/j.antiviral.2008.08.004. PMC 2647018. PMID 18796313.
- Tomlinson SM, Malmstrom RD, Watowich SJ (June 2009). "New approaches to structure-based discovery of dengue protease inhibitors". Infectious Disorders Drug Targets 9 (3): 327–43. doi:10.2174/1871526510909030327. PMID 19519486.
- WHO (2009). Dengue Guidelines for Diagnosis, Treatment, Prevention and Control (PDF). Geneva: World Health Organization. ISBN 92-4-154787-1.
External links
- Dengue fever at DMOZ
- "Dengue". WHO. Retrieved 27 June 2011.
- "Dengue". U.S. Centers for Disease Control and Prevention. Retrieved 27 June 2011.
- "Dengue fever". UK Health Protection Agency. Retrieved 27 June 2011.
- "DengueMap". U.S. Centers for Disease Control and Prevention/HealthMap. Retrieved 27 June 2011. | https://en.wikipedia.org/wiki/Dengue_fever |
4.25 | How to multiply two complex numbers in trigonometric form.
Understanding squaring and square roots, including decimal approximations
How to classify different types of real numbers.
How to solve equations using square roots or cube roots.
How to use the discriminant to know the number and type of solutions to a quadratic.
How to express a complex number in trigonometric form.
Finding decimal approximations of square roots without a calculator
How to solve a problem involving consecutive integers.
Multiplying or dividing with mixed numbers
How the number e is defined.
How to determine the number of handshakes possible in a group of a certain number of people.
How to plot complex numbers on the complex plane.
How to determine the number of diagonals in a polygon.
How to define the properties of real numbers.
How to convert a complex number from rectangular form to trigonometric form.
Vocabulary of improper fractions and mixed numbers
The meaning and functionality of Avogadro's number in chemistry.
How to know when to include a plus and minus on a square root problem. | https://www.brightstorm.com/tag/number-of-roots/page/2 |
4.09375 | How Plants Cope with the Desert Climate
from sonorensis, Volume 17, Number 1 (Spring 1997)
Agave (above) and other succulent plants store water in fleshy leaves, stems or roots
Mark A. Dimmitt
Director of Natural History
Arizona-Sonora Desert Museum
Desert plants tend to look very different from plants native to other regions. They are often swollen, spiny, and have tiny leaves that are rarely bright green. Their strange appearance is a result of their remarkable adaptations to the challenges of the desert climate. Aridity is the sole factor that defines a desert and is the primary limitation to which desert organisms must adapt.
Desert plants have developed three main adaptive strategies: succulence, drought tolerance and drought avoidance. Each of these is a different but effective suite of adaptations for prospering under conditions that would kill plants from other regions.
Succulent plants store water in fleshy leaves, stems or roots. All cacti are succulents, as are such non-cactus desert dwellers as agave, aloe, elephant trees, and many euphorbias. Several other adaptations are essential for the water storing habit to be effective.
Owl's clover, California poppy and other drought avoidance plants die after channeling all their energy into producing seeds
A succulent must be able to absorb large quantities of water in short periods. Desert rains are often light and brief, and the soil dries rapidly under an intense sun. To cope with these conditions, nearly all succulents have extensive, shallow root systems. The roots of a saguaro extend horizontally about as far as the plant is tall but are rarely more than four inches (10 cm) deep. The water-absorbing roots are mostly within the upper half inch (1.3 cm).
Succulents must be able to maintain their water hoards in a desiccating environment and use it as efficiently as possible. The stems and leaves of most species have waxy cuticles that render them nearly waterproof when the stomates are closed. Water is further conserved by reduced surface areas; most succulents have few leaves (agaves), no leaves (most cacti), or leaves that are deciduous in dry seasons (elephant trees, ocotillos, boojums).
Many succulents, as well as semisucculents such as most yuccas, epiphytic orchids, and xerophytic bromeliads, possess a water-efficient variant of photosynthesis called CAM, an acronym for Crassulacean Acid Metabolism. CAM plants open their stomates for gas exchange at night and store carbon dioxide. By day, while the stomates are closed, photosynthesis is conducted using the stored carbon dioxide. Because of the lower temperatures and higher humidity at night, CAM plants lose one-tenth as much water per unit of carbohydrate synthesized as standard C3 plants.
Another valuable attribute of CAM plants is their capability for idling metabolism during droughts. When CAM plants become water-stressed, the stomates remain closed both day and night; gas exchange and water loss nearly cease. The plant, however, maintains a low level of metabolism in the still-moist tissues. Just as an idling engine can rev up to full speed more quickly than a cold one, an idling CAM plant can resume full growth in 24 to 48 hours after a rain. Therefore, succulents can take rapid advantage of ephemeral surface moisture.
Stored water in an arid environment requires protection from thirsty animals. Most succulent plants are spiny or toxic, often both. Some protect themselves by growing only in inaccessible locations. Still others rely on camouflage. Arizona night blooming cereus, for example, closely resembles the dry stems of the shrubs in which it grows.
Drought tolerance (or drought dormancy) refers to a plant's ability to withstand desiccation without dying. Plants in this category often shed leaves during dry periods and enter a deep dormancy. Most water loss is from transpiration through leaf surfaces, so dropping leaves conserves water in the stems. Some plants that do not normally shed their leaves have resinous coatings that retard water loss (e.g., creosote bush).
The roots of drought tolerant shrubs and trees are extensive compared to those of plants in wetter climates, covering an area up to twice the diameter of the canopy. They exploit the soil at greater depth than the roots of succulents; sometimes they extend to extreme depths (e.g., mesquite). Most of a mesquite's roots, however, are within three feet (0.9 m) of the surface.
Rooting depth controls opportunities for growth cycles. In contrast to the succulents' shallow-rooted strategy, a substantial rain is required to wet the deeper root zone of shrubs and trees. After a soaking rain has fallen, shrubs such as brittlebush and creosote take a few weeks to resume full growth from deep dormancy. The tradeoff between this strategy and that of succulents is that once the deeper soil is wetted by several rains it stays moist much longer than the surface layer, supporting several weeks of growth.
Succulents can absorb water only when the soil is nearly saturated. In contrast, drought tolerant plants can absorb water from soil that is much drier. Similarly, these plants can photosynthesize with low leaf moisture contents that would prove fatal to most plants.
Drought tolerant plants, such as brittlebush, often shed leaves during dry periods and enter a deep dormancy.
Annual plants escape unfavorable conditions by not existing. They mature in a single season, then die after channeling all of their life energy into producing seeds instead of reserving some for continued survival.
Most Sonoran Desert annuals will germinate only during a narrow window in the fall, after summer heat has waned and before winter cold arrives. During this window of opportunity there must be a soaking rain of at least one inch for most species. This combination of requirements is survival insurance: an inch of rain in the mild weather of fall will provide enough soil moisture that the germinating seeds will probably mature and produce seeds even if almost no more rain falls in that season. There is still further insurance: even under the best conditions not all of the seeds will germinate; some remain dormant. Although the mechanisms are not known, a percentage of any year's crop of desert lupine seeds will not germinate until they are ten years old.
Seedlings rapidly produce rosettes of leaves during the mild fall weather, remain flat against the ground as they grow more slowly through the winter, and bolt into flower in the spring. Since the plants are inconspicuous until they begin the spring bolt, many people mistakenly think that spring rains produce our wildflower displays.
Annuals are common only in communities that have dry seasons, where the spacing of perennial plants is determined by the rooting space required to obtain enough moisture to survive the driest years. In the occasional wetter years both open space and moisture are available to be exploited by a population of fast-growing annuals. The more arid the habitat, the greater the proportion of annual species. Half of the Sonoran Desert's flora is composed of annual species. In the driest habitats up to 90% of the plants are annuals.
The desert environment may seem hostile, but this is purely an outsider's viewpoint. Adaptations enable indigenous plants and animals not merely to survive here, but to thrive most of the time. | http://www.desertmuseum.org/programs/succulents_adaptation.php |
4.09375 | Dyslexia refers to a reading disorder in which reading is difficult despite normal intelligence. Children and adults who struggle with dyslexia can experience problems spelling and writing words, “sounding out” words, reading quickly, reading aloud, and/or understanding what’s been read.
Dyslexia is considered a cognitive disorder, and is often linked with cognitive weakness, particularly in the area of auditory processing.
Auditory processing has been described as “what the brain does with what the ears hear.” Weak auditory processing skills hinder the brain’s ability to recognize the difference between sounds, blend sounds, and link sounds to letters, making learning to read—and reading—more difficult.
Scientists are still researching dyslexia to better understand the diagnosis. In one study, neuroscience student Emily Finn and her colleagues at the Yale University School of Medicine conducted a whole-brain functional connectivity analysis of dyslexia using functional magnetic resonance imaging. Scans of children and adults with dyslexia were compared to typical readers in the same age groups and the results, as reported in Biological Psychiatry, showed that there were widespread differences. Dyslexic readers showed decreased connectivity within their visual pathways, as well as between visual and prefrontal regions. Dyslexic readers also showed reduced connectivity in the visual word-form area.
Someone diagnosed with dyslexia may benefit from intensive one-on-one attention. Some families turn to brain training to identify, target, and train cognitive weaknesses commonly associated with dyslexia.
You or your child may or may not see improvements in cognitive skills after LearningRx brain training programs.
If you have a child who is struggling with reading, there are some things you can do to help your child improve skills that are critical to reading success. For example, to work on phonemic awareness and auditory processing skills, try these exercises:
- Sound segmenting games: Say a two-sound word, like bee, and have the child tell you which sounds are in the word. Then start to increase to three-sound words, like cat. This builds auditory segmenting which is necessary for spelling when children get older.
- Phonetics using building blocks: Help develop analytical skills by using blocks to make up nonsense words starting with two to three blocks. Create a nonsense word, then have the child remove one block and add a new one while verbally trying to figure out what the new word sounds like.
It is also important to have your child cognitively assessed. A cognitive test is the best way to identify weak skills that may be affecting a child's ability to process sounds and/or read. LearningRx Brain Training Centers offer comprehensive cognitive testing at a reasonable price.
| http://www.learningrx.com/dyslexia |
4.21875 | An interstellar ark or spaceark is a conceptual space vehicle that some have speculated could be used for interstellar travel. Interstellar arks may be the most economically feasible method of crossing interstellar distances. The ark has also been proposed as a potential habitat to preserve civilization and knowledge in the event of a global catastrophe.
Such a ship would have to be large, requiring a large power plant. The Project Orion concept of propulsion by nuclear pulses has been proposed. The largest spacecraft design analyzed in Project Orion had a 400 m diameter and weighed approximately 8 million tons. It could be large enough to host a city of 100,000 or more people.
Another concern is selection of power sources and mechanisms which would remain viable for the long time spans involved in interstellar travel through the desert of space. The longest lived space probes are the Voyager program probes, which use radioisotope thermoelectric generators having a useful lifespan of a mere 50 years.
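The power ceiling set by radioisotope decay is easy to sketch. The Python snippet below is a simplified model under stated assumptions: plutonium-238's half-life of about 87.7 years is well established, while the roughly 470 W combined initial electrical output assumed for a Voyager-class generator set is approximate, and real output falls faster because the thermocouples also degrade:

    PU238_HALF_LIFE_YEARS = 87.7   # physical half-life of the Pu-238 heat source
    P0_WATTS = 470.0               # assumed initial electrical output (~Voyager's three RTGs)

    def rtg_power(years_elapsed, p0=P0_WATTS):
        """Idealized electrical output, ignoring thermocouple degradation."""
        return p0 * 0.5 ** (years_elapsed / PU238_HALF_LIFE_YEARS)

    for t in (0, 50, 100, 500, 1000):
        print(f"{t:>5} yr: {rtg_power(t):7.2f} W")
    # After ~1000 years the output is below 0.1% of its starting value --
    # far too little margin for a multi-millennium ark without another power source.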
One of the fastest propulsion options for a crewed spacecraft could be a fusion-microexplosion nuclear pulse system, like that studied in Project Daedalus, which might allow it to reach an interstellar cruising velocity of up to 10% of the speed of light. However, if the ship is capable of transits requiring hundreds of thousands of years, chemical and gravitational slingshot propulsion may be sufficient.
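For scale, ignoring acceleration and deceleration phases, cruise time is simply distance divided by speed. A minimal sketch follows; the stellar distances used are standard astronomical values, not taken from this article:

    C_FRACTION = 0.10  # cruising speed as a fraction of light speed (from the text)

    def cruise_years(distance_ly, v_fraction=C_FRACTION):
        """Years to cover distance_ly light-years at a constant fraction of c."""
        return distance_ly / v_fraction

    print(cruise_years(4.37))   # Alpha Centauri: ~44 years
    print(cruise_years(10.5))   # Epsilon Eridani: ~105 years
    # At chemical-rocket speeds (roughly 0.005% of c) the same trips take tens of
    # thousands of years -- the regime where a multi-generation ark is required.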
Specific proposals and research projects
In 1964 Robert Enzmann proposed a large fusion-powered spacecraft that could function as an interstellar ark, supporting a crew of 200 with extra space for expansion, on multi-year journeys at subluminal speeds to nearby star systems.
In 1955 Project Orion considered nuclear propulsion for spacecraft, suitable for deep space voyages.
- The 1933 novel When Worlds Collide is one of the earliest examples of an interstellar ark. To save humanity from extinction when a star is about to destroy Earth, a group of astronomers construct a massive spaceship to carry forty humans, in addition to livestock and equipment, to a new planet.
- Jack Williamson's 1934 story "Born of the Sun" is another early example, in which planets are revealed to be no more than eggs for immense creatures. A steel magnate and his geologist-astrophysicist uncle create an "ark of space" to preserve the human race in the six months left in the Earth's existence. The ark is designed to hold two thousand people, be powered by cosmic rays, and recycle water and waste to create synthetic food and air, thus providing for the unlimited survival of its crew in space.
- A group of three large arks served as the homes and battleships of the space-faring Thraki race in William C. Dietz's Legion of the Damned series.
- The concept of an interstellar ark was used humorously in the cult sci-fi classic The Hitchhiker's Guide to the Galaxy, in the form of the B-Ark of the Golgafrincham Ark Fleet, filled to capacity with cryosleeping advertising executives, management consultants, telephone sanitisers, and other "undesirables" whom the Golgafrinchams wanted to expel.
- The Starlost is a television series about a generation ship lost in space, whose inhabitants had forgotten that they were on a ship.
- In the PC game Outpost series, an interstellar ark named Conestoga was used to evacuate a population of humans from the impending destruction of Earth.
- The PC game Alien Legacy features "seedships", used to spread mankind due to an interstellar war that might wipe out the Earth.
- The SS Botany Bay was a sleeper ship used by Khan Noonien Singh in the Star Trek first season episode "Space Seed".
- A Star Trek third season episode entitled "For the World Is Hollow and I Have Touched the Sky" takes place on a hollow asteroid generation 'ship' called Yonada.
- The Centauri Princess is a cylindrical interstellar ark peopled with humans, depicted in elaborate detail in the novel First Ark to Alpha Centauri by A. Ahad.
- The cylindrical generation ship Vanguard serves as the centerpiece of Robert A. Heinlein's 1963 novel Orphans of the Sky, a combination of two shorter 1941 works.
- In the animated Disney-Pixar film WALL-E, the space vessel Axiom, originally intended as a temporary dwelling for humanity for a 5-year period during which robots were to clean up an environmentally devastated Earth, becomes a de facto ark housing multiple generations of humans over a period of 700 years.
- In Stargate Atlantis third season episode "The Ark", Colonel Sheppard's team discovers a facility inside a hollowed-out moon that turns out to be an ark created by the people of the planet around which the moon is in orbit. The ark was built to preserve the existence of the people from the planet and rebuild its civilization after a Wraith defeat. People were stored in stasis in the ark using Wraith beaming technology. The government then waged an unwinnable war against the Wraith, and purposely decimated the remainder of their own population with atomic bombs, leading the Wraith to believe that these people are extinct.
- Episode 30 of the radio drama Dimension X, "Universe", featured a seed ship whose human population had split into the lower deck inhabitants and the upper deck inhabitants. The upper deck inhabitants were mutated by radiation leaking through the ship's hull. The inhabitants were not aware they were on a ship and believed the vessel contained the entirety of the universe. The episode was written by Robert A. Heinlein.
- In the Warhammer 40,000 fictional universe, the Eldar race live and travel aboard interstellar arks known as Craftworlds.
- In Rendezvous with Rama, by Arthur C. Clarke, and its three sequels co-written with novelist Gentry Lee, an alien interstellar ark transits through Earth's solar system. When a second ship flies round the Earth (or is it the first ship again?), a trio of humans are ensnared by happenings inside the city-sized ship and then ride the giant craft to the far reaches of the galaxy, constantly on man's best-known quest: what is all this, and why are we here?
- New Zealand author Ken Catran's Deepwater novel trilogy is a science fiction adventure for young adults about a massive generation ship, Deepwater Black, created by the dying population of a virus-stricken Earth. The main inhabitants are teen clones created from gene donors, all of whom contributed to Earth society in phenomenal ways. As they travel on, they must survive the unknowns of space as well as the insecurities of being clones, with the genetic memories of their dead donors constantly resurfacing in their heads. The novels were made into a 13-episode series for Yorkshire Television and the SciFi Channel in 1997.
- In Hull Zero Three, by Greg Bear (2010), an interstellar ark is equipped with a library of biological forms and generates customized individuals as needed during its voyage.
- The 2005 novel Building Harlequin's Moon by Larry Niven and Brenda Cooper follows the story of one of three arks that flee a dying solar system (due to A.I. and nanotechnology that has gone bad). Most of the inhabitants are in suspended animation, but they are forced to stop and build a society to create enough anti-matter to continue the trip.
- The 2009 film Pandorum is set on an interstellar ark called Elysium. Originally sent into space to seed a new world because Earth was overpopulated, the ship has fallen into disarray and is largely overrun by a cannibalistic human subspecies that evolved from passengers who went insane after Earth mysteriously disappeared.
- The 2013 film Elysium takes place on both a ravaged Earth and a luxurious space habitat called Elysium. It explores political and sociological themes such as immigration, overpopulation, health care, exploitation, the justice system, and class issues.
| https://en.wikipedia.org/wiki/Interstellar_ark |
4.03125 | In astronomy, precovery (short for "pre-discovery recovery") is the process of finding the image of an object in old archived images or photographic plates for the purpose of calculating a more accurate orbit. This happens most often with minor planets, but sometimes a comet, a dwarf planet, a natural satellite, or a star is found in old archived images; even exoplanet precovery observations have been obtained. While the term "precovery" refers to a pre-discovery image, "recovery" refers to imaging of a body which was lost to our view (as behind the Sun), but is now visible again (also see lost minor planets).
Calculating the orbit of an astronomical object involves measuring its position on multiple occasions. The more widely separated these are in time, the more accurately the orbit can be calculated. However, for a newly discovered object, only a few days' or weeks' worth of measured positions may be available, which is only sufficient for a preliminary (imprecise) orbit calculation.
When an object is of particular interest (such as asteroids with a chance of impacting Earth), researchers begin a search for precovery images. Using the preliminary orbit calculation to predict where the object might appear on old archival images, those images (sometimes decades old) are searched to see if it had been in fact photographed already. If so, a far more precise orbital calculation can then be made.
Until fast computers were widely available, it was impractical to analyze and measure images for possible minor planet discoveries because this involved a considerable amount of manual labor. Usually, such images were made years or decades earlier for other purposes (studies of galaxies, etc.), and it was not worth the time it took to look for precovery images of ordinary asteroids. Today, computers can easily analyze digital astronomical images and compare them to star catalogs containing up to a billion or so star positions to see if one of the "stars" is actually a precovery image of the newly discovered object. This technique has been used since the mid-1990s to determine the orbits of an enormous number of minor planets.
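The core of such an archival search can be illustrated compactly. The sketch below is an illustrative simplification, not the pipeline any particular survey uses: it assumes source positions have already been extracted from a plate or image, and it flags detections that are close to the predicted ephemeris position but have no counterpart in the star catalog. The function and parameter names are hypothetical:

    import math

    def ang_sep_deg(ra1, dec1, ra2, dec2):
        """Small-angle separation in degrees between two sky positions.

        Ignores RA wrap-around at 0/360 for brevity.
        """
        dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
        return math.hypot(dra, dec1 - dec2)

    def precovery_candidates(detections, catalog, predicted, search_deg=0.1, match_deg=2e-3):
        """Detections near the predicted position with no cataloged-star counterpart.

        detections, catalog: lists of (ra, dec) in degrees
        predicted: (ra, dec) from the preliminary orbit, propagated to the plate epoch
        """
        candidates = []
        for det in detections:
            # Must fall inside the positional uncertainty of the preliminary orbit.
            if ang_sep_deg(*det, *predicted) > search_deg:
                continue
            # Must not coincide with any known fixed star.
            if all(ang_sep_deg(*det, *star) > match_deg for star in catalog):
                candidates.append(det)
        return candidates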
In an extreme case of precovery, an object was discovered on December 31, 2000, designated 2000 YK66, and a near-Earth orbit was calculated. Precovery revealed that it had previously been discovered on February 23, 1950 and given the provisional designation 1950 DA, and then been lost for half a century. The exceptionally long observation period allowed an unusually precise orbit calculation, and the asteroid was determined to have a small chance of colliding with the Earth. After an asteroid's orbit is calculated with sufficient precision, it can be assigned a number (in this case, (29075) 1950 DA).
The asteroid 69230 Hermes was found in 2003 and numbered, but was found to be a discovery from 1937 which had even been named, but subsequently lost. Consequently, its old name "Hermes" has been applied to it. Centaur 2060 Chiron was discovered in 1977, and precovery images from 1895 have been located.
Another extreme case of precovery concerns Neptune. Galileo observed Neptune on both December 28, 1612 and January 27, 1613, when it was in a portion of its orbit where it was nearly directly behind Jupiter as seen from Earth. Because Neptune moves very slowly and is very faint relative to other known planets of that time, Galileo mistook it for a fixed star, leaving the planet undiscovered until 1846. He did note that the "star" Neptune did seem to move, noting that between his two observations its apparent distance from another star had changed. However, unlike photographic images, drawings such as those Galileo made are usually not precise enough to be of use in refining an object's orbit. In 1795, Lalande also mistook Neptune for a star. In 1690, John Flamsteed did the same with Uranus, even cataloging it as "34 Tauri".
| https://en.wikipedia.org/wiki/Precovery |
4.03125 | Where does Sukkot come from?
Sukkot (Feast of Tabernacles) is one of the three God-ordained festivals during which the Jews made pilgrimages to the Temple in Jerusalem.
Exodus 23:16 mentions the three Feasts, Deuteronomy 16:13 specifically tells the people when and how to celebrate, while Leviticus 23:40-44 gives specific instructions on what to do on the first day of the Feast.
John 7 gives the account of Jesus celebrating the Feast of Tabernacles.
Historical and agricultural significance
The three pilgrimage feasts - Pesach (Passover), Shavuot (Pentecost) and Sukkot (Tabernacles) - have both historical and agricultural significance.
But because Sukkot occurred in the fall harvest, it was also observed as an agricultural event.
The season of rejoicing
Sukkot is a week of rejoicing beginning on the 15th of the Jewish month of Tishrei, the date of the first full moon after the autumnal equinox (usually September or October). Israelites eat their meals in a tabernacle or booth that is covered with boughs (but with the sky showing through) in remembrance of the desert wanderings.
A week-long holiday
In the Diaspora (outside Israel) the first days are celebrated as full holidays (like a Sabbath). The last day ("the Eighth Day of Solemn Assembly") is also kept as a holiday, followed by Simhat Torah (Rejoicing of the Law).
The first sukkah
The first Sukkot built in the wilderness were probably made from the branches of the acacia tree. This tree grows in the desert wadis, where floodwaters provide the necessary moisture. The Ark of the Covenant was made of acacia wood.
Once in the Promised Land, the Israelites were able to use the trees of the forest to build Sukkot, as we can read in Nehemiah 8:15.
The Bible mentions olive, pine, myrtle, and palm branches, including those of thick trees.
The sukkah reminds of Israel’s journey in the wilderness and symbolizes man’s reliance on Divine protection.
The Four Species
A central part of the Festival is the "four species" (arba'ah minim), which are held together and waved at different points in the religious services, in accordance with the commandment to "rejoice before the LORD".
The four species consist of a palm branch, citron, three myrtle twigs, and two willow branches. These species combined are called the “Lulav”.
Sukkot’s symbolic ceremonies
During synagogue services the prayer for rain is central, while the believers “wave” the Four Species.
Because water is vital for life, so-called water libation ceremonies take place during the holiday, accompanied by songs such as "You shall draw water with gladness out of the wells of salvation", taken from Isaiah 12:3.
Simhat Torah marks the conclusion of the annual Torah reading cycle and the beginning of a new cycle and is celebrated on the last day of the Feast.
The rainy season
The prayers for rain commence during Sukkot and continue till Pesach (Passover), which coincides with the end of Israel's rainy season.
The Nations will go up to Jerusalem
Not to wage war, but to celebrate Sukkot with the Jewish people.
“And it shall come to pass that everyone who is left of all the nations which came against Jerusalem shall go up from year to year to worship the King, the LORD of hosts, and to keep the Feast of Tabernacles. And it shall be that whichever of the families of the earth do not come up to Jerusalem to worship the King, the LORD of hosts, on them there will be no rain. If the family of Egypt will not come up and enter in, they shall have no rain; they shall receive the plague with which the LORD strikes the nations who do not come up to keep the Feast of Tabernacles. This shall be the punishment of Egypt and the punishment of all the nations that do not come up to keep the Feast of Tabernacles.” (Zechariah 14:16-19 NKJV)
It’s already happening
The International Christian Embassy yearly sponsors a Christian Celebration which attracts thousands of people from around the world.
People from all nations celebrate this special holiday with the Jewish people in Jerusalem.
“…then the LORD will create above every dwelling place of Mount Zion, and above her assemblies, a cloud and smoke by day and the shining of a flaming fire by night. For over all the glory there will be a covering. And there will be a tabernacle [sukkah] for shade in the daytime from the heat, for a place of refuge, and for a shelter from storm and rain.” (Isaiah 4:5, 6 NKJV)
This article is an excerpt of the soon to be published E-book called:
“A Christian Guide to the Jewish Festival of Sukkot”.
| http://www.faithwriters.com/article-details.php?id=67506 |
4.21875 | Peripheral auditory system
The auditory periphery, starting with the ear, is the first stage of the transduction of sound in a hearing organism. While not part of the nervous system, its components feed directly into the nervous system, performing mechanoelectrical transduction of sound pressure waves into neural action potentials.
The folds of cartilage surrounding the ear canal are called the pinna. Sound waves are reflected and attenuated when they hit the pinna, and these changes provide additional information that will help the brain determine the direction from which the sounds came.
The sound waves enter the auditory canal, a deceptively simple tube. The ear canal amplifies sounds that are between 3 and 12 kHz. At the far end of the ear canal is the tympanic membrane, which marks the beginning of the middle ear.
Sound waves travel through the ear canal and hit the tympanic membrane, or eardrum. This wave information travels across the air-filled middle ear cavity via a series of delicate bones: the malleus (hammer), incus (anvil) and stapes (stirrup). These ossicles act as a lever, converting the lower-pressure eardrum sound vibrations into higher-pressure sound vibrations at another, smaller membrane called the oval (or elliptical) window. The manubrium (handle) of the malleus articulates with the tympanic membrane, while the footplate of the stapes articulates with the oval window. Higher pressure is necessary at the oval window than at the tympanic membrane because the inner ear beyond the oval window contains liquid rather than air. The stapedius reflex of the middle ear muscles helps protect the inner ear from damage by reducing the transmission of sound energy when the stapedius muscle is activated in response to sound. The middle ear still contains the sound information in wave form; it is converted to nerve impulses in the cochlea.
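The pressure gain of this lever system can be estimated with two numbers. The area ratio and lever ratio below are textbook-typical values, not figures given in this article, so treat the result as an order-of-magnitude sketch:

    import math

    A_TYMPANIC_MM2 = 55.0   # effective area of the tympanic membrane (assumed typical value)
    A_OVAL_MM2 = 3.2        # area of the stapes footplate / oval window (assumed)
    LEVER_RATIO = 1.3       # mechanical advantage of the ossicular lever (assumed)

    pressure_gain = (A_TYMPANIC_MM2 / A_OVAL_MM2) * LEVER_RATIO
    gain_db = 20 * math.log10(pressure_gain)
    print(f"pressure gain ~{pressure_gain:.0f}x (~{gain_db:.0f} dB)")
    # ~22x, roughly 27 dB -- the boost needed to drive the fluid-filled inner ear.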
(Figure caption) Diagrammatic longitudinal section of the cochlea; the scala media is labeled as ductus cochlearis.
The inner ear consists of the cochlea and several non-auditory structures. The cochlea has three fluid-filled sections, and supports a fluid wave driven by pressure across the basilar membrane separating two of the sections. Strikingly, one section, called the cochlear duct or scala media, contains endolymph, a fluid similar in composition to the intracellular fluid found inside cells. The organ of Corti is located in this duct on the basilar membrane, and transforms mechanical waves to electric signals in neurons. The other two sections are known as the scala tympani and the scala vestibuli; these are located within the bony labyrinth, which is filled with fluid called perilymph, similar in composition to cerebrospinal fluid. The chemical difference between endolymph and perilymph is important for the function of the inner ear because the differing potassium and calcium ion concentrations create electrical potential differences across the membranes.
The plan view of the human cochlea (typical of all mammalian and most vertebrates) shows where specific frequencies occur along its length. The frequency is an approximately exponential function of the length of the cochlea within the Organ of Corti. In some species, such as bats and dolphins, the relationship is expanded in specific areas to support their active sonar capability.
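That frequency-position relationship is often summarized with the Greenwood function. The sketch below uses Greenwood's commonly cited human fit; the specific constants come from that published fit rather than from this article:

    def greenwood_hz(x):
        """Characteristic frequency at fractional distance x from the cochlear apex.

        Human fit f(x) = 165.4 * (10**(2.1 * x) - 0.88), with x in [0, 1]
        (0 = apex, low frequencies; 1 = base, high frequencies).
        """
        return 165.4 * (10 ** (2.1 * x) - 0.88)

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f}: {greenwood_hz(x):8.0f} Hz")
    # Spans roughly 20 Hz at the apex to ~20 kHz at the base of the human cochlea.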
Organ of Corti
The organ of Corti forms a ribbon of sensory epithelium which runs lengthwise down the cochlea's entire scala media. Its hair cells transform the fluid waves into nerve signals. The journey of countless nerves begins with this first step; from here, further processing leads to a panoply of auditory reactions and sensations.
Hair cells are columnar cells, each with a bundle of 100-200 specialized cilia at the top, for which they are named. There are two types of hair cells. Inner hair cells are the mechanoreceptors for hearing: they transduce the vibration of sound into electrical activity in nerve fibers, which is transmitted to the brain. Outer hair cells are a motor structure. Sound energy causes changes in the shape of these cells, which serves to amplify sound vibrations in a frequency specific manner. Lightly resting atop the longest cilia of the inner hair cells is the tectorial membrane, which moves back and forth with each cycle of sound, tilting the cilia, which is what elicits the hair cells' electrical responses.
Inner hair cells, like the photoreceptor cells of the eye, show a graded response, instead of the spikes typical of other neurons. These graded potentials are not bound by the “all or none” properties of an action potential.
At this point, one may ask how such a wiggle of a hair bundle triggers a difference in membrane potential. The current model is that cilia are attached to one another by "tip links", structures which link the tips of one cilium to another. Stretching and compressing, the tip links may open an ion channel and produce the receptor potential in the hair cell. Recently it has been shown that Cadherin-23 CDH23 and Protocadherin-15 PCDH15 are the adhesion molecules associated with these tip links. It is thought that a calcium-driven motor causes a shortening of these links to regenerate tension. This regeneration of tension allows the cell to remain responsive to prolonged auditory stimulation.
Afferent neurons innervate cochlear inner hair cells, at synapses where the neurotransmitter glutamate communicates signals from the hair cells to the dendrites of the primary auditory neurons.
There are far fewer inner hair cells in the cochlea than afferent nerve fibers – many auditory nerve fibers innervate each hair cell. The neural dendrites belong to neurons of the auditory nerve, which in turn joins the vestibular nerve to form the vestibulocochlear nerve, or cranial nerve number VIII. The region of the basilar membrane supplying the inputs to a particular afferent nerve fibre can be considered to be its receptive field.
Efferent projections from the brain to the cochlea also play a role in the perception of sound, although this is not well understood. Efferent synapses occur on outer hair cells and on afferent (towards the brain) dendrites under inner hair cells.
Central auditory system
This sound information, now re-encoded, travels down the vestibulocochlear nerve, through intermediate stations such as the cochlear nuclei and superior olivary complex of the brainstem and the inferior colliculus of the midbrain, being further processed at each waypoint. The information eventually reaches the thalamus, and from there it is relayed to the cortex. In the human brain, the primary auditory cortex is located in the temporal lobe.
Associated anatomical structures include:
The cochlear nucleus is the first site of the neuronal processing of the newly converted “digital” data from the inner ear (see also binaural fusion). In mammals, this region is anatomically and physiologically split into two regions, the dorsal cochlear nucleus (DCN), and ventral cochlear nucleus (VCN).
The trapezoid body is a bundle of decussating fibers in the ventral pons that carries information used for binaural computations in the brainstem.
Superior olivary complex
The superior olivary complex is located in the pons, and receives projections predominantly from the ventral cochlear nucleus, although the dorsal cochlear nucleus projects there as well, via the ventral acoustic stria. Within the superior olivary complex lies the lateral superior olive (LSO) and the medial superior olive (MSO). The former is important in detecting interaural level differences while the latter is important in distinguishing interaural time difference.
The lateral lemniscus is a tract of axons in the brainstem that carries information about sound from the cochlear nucleus to various brainstem nuclei and ultimately the contralateral inferior colliculus of the midbrain.
The inferior colliculi (IC) are located just below the visual processing centers known as the superior colliculi. The central nucleus of the IC is a nearly obligatory relay in the ascending auditory system, and most likely acts to integrate information (specifically regarding sound source localization from the superior olivary complex and dorsal cochlear nucleus) before sending it to the thalamus and cortex.
Medial geniculate nucleus
The medial geniculate nucleus is part of the thalamic relay system.
Primary auditory cortex
Perception of sound is associated with the left posterior superior temporal gyrus (STG). The superior temporal gyrus contains several important structures of the brain, including Brodmann areas 41 and 42, marking the location of the primary auditory cortex, the cortical region responsible for the sensation of basic characteristics of sound such as pitch and rhythm. We know from work in nonhuman primates that primary auditory cortex can probably itself be divided further into functionally differentiable subregions. The neurons of the primary auditory cortex can be considered to have receptive fields covering a range of auditory frequencies and have selective responses to harmonic pitches. Neurons integrating information from the two ears have receptive fields covering a particular region of auditory space.
Primary auditory cortex is surrounded by secondary auditory cortex, and interconnects with it. These secondary areas interconnect with further processing areas in the superior temporal gyrus, in the dorsal bank of the superior temporal sulcus, and in the frontal lobe. In humans, connections of these regions with the middle temporal gyrus are probably important for speech perception. The frontotemporal system underlying auditory perception allows us to distinguish sounds as speech, music, or noise.
- Auditory brainstem response and ABR audiometry test for newborn hearing
- Auditory processing disorder
- Cognitive neuroscience of music
- Endaural phenomena
- Gammatone filter – a simple linear model of peripheral auditory filtering
- Noise health effects
- Selective auditory attention
| https://en.wikipedia.org/wiki/Auditory_system |
4.25 | ELA: KINDERGARTEN - GRADE 12
LITERACY: GRADES 6 - 12
Write informative/explanatory texts to examine and convey complex ideas, concepts, and information clearly and accurately through the effective selection, organization, and analysis of content.
- Introduce a topic; organize complex ideas, concepts, and information so that each new element builds on that which precedes it to create a unified whole; include formatting (e.g., headings), graphics (e.g., figures, tables), and multimedia when useful to aiding comprehension.
- Develop the topic thoroughly by selecting the most significant and relevant facts, extended definitions, concrete details, quotations, or other information and examples appropriate to the audience’s knowledge of the topic.
- Use appropriate and varied transitions and syntax to link the major sections of the text, create cohesion, and clarify the relationships among complex ideas and concepts.
- Use precise language, domain-specific vocabulary, and techniques such as metaphor, simile, and analogy to manage the complexity of the topic.
- Establish and maintain a formal style and objective tone while attending to the norms and conventions of the discipline in which they are writing.
- Provide a concluding statement or section that follows from and supports the information or explanation presented (e.g., articulating implications or the significance of the topic).
This standard requires writing that is informational in nature. Students will use facts, statistics, examples, and anecdotes to explain a concept or process to their readers. Presentation of information in a consistent and clear fashion is the expected outcome here, and the bullet points get specific about exactly what students should be doing in order to meet these expectations. As always, selecting the most relevant research, using transitions to effectively structure the information, and using an appropriate style and tone are key points for success. Keep reading for an example of an informational writing assignment you might use with your students.
Teach With Shmoop
Tag! You're it.
The links in this section will take you straight to the standard-aligned assignments tagged in Shmoop's teaching guides.
That's right, we've done the work. You just do the clickin...
Teaching Guides Using this Standard
- 1984 Teacher Pass
- A Raisin in the Sun Teacher Pass
- A Rose For Emily Teacher Pass
- A View from the Bridge Teacher Pass
- Adventures of Huckleberry Finn Teacher Pass
- Animal Farm Teacher Pass
- Antigone Teacher Pass
- Beowulf Teacher Pass
- Brave New World Teacher Pass
- Death of a Salesman Teacher Pass
- Fahrenheit 451 Teacher Pass
- Fences Teacher Pass
- Frankenstein Teacher Pass
- Grapes Of Wrath Teacher Pass
- Great Expectations Teacher Pass
- Hamlet Teacher Pass
- Heart of Darkness Teacher Pass
- Julius Caesar Teacher Pass
- Moby Dick Teacher Pass
- Narrative of Frederick Douglass Teacher Pass
- Oedipus the King Teacher Pass
- Of Mice and Men Teacher Pass
- Othello Teacher Pass
- Romeo and Juliet Teacher Pass
- Sula Teacher Pass
- The Aeneid Teacher Pass
- As I Lay Dying Teacher Pass
- The Bluest Eye Teacher Pass
- The Canterbury Tales General Prologue Teacher Pass
- The Canterbury Tales: The Miller's Tale Teacher Pass
- The Cask of Amontillado Teacher Pass
- The Crucible Teacher Pass
- The Great Gatsby Teacher Pass
- The House on Mango Street Teacher Pass
- The Iliad Teacher Pass
- The Lottery Teacher Pass
- The Metamorphosis Teacher Pass
- The Old Man and the Sea Teacher Pass
- The Scarlet Letter Teacher Pass
- Their Eyes Were Watching God Teacher Pass
- Things Fall Apart Teacher Pass
- To Kill a Mockingbird Teacher Pass
- Twilight Teacher Pass
- Wide Sargasso Sea Teacher Pass
- Wuthering Heights Teacher Pass
After reading The Crucible, by Arthur Miller, you've been asked by your teacher to write an informational essay in which you compare and contrast the events of the play with the McCarthy Hearings of the 1950s. In order to write the essay, you'll need to draw upon what you already know about McCarthyism. Next, you'll brainstorm possible criteria. These are points that you will use to compare and contrast the two situations. Your essay will explain how literature often mirrors real events.
Work with your elbow partner to create a list of possible sub-topics. Let's say: the time period and place, the historical context, public policies or beliefs, the principles involved, the nature of evidence, the types of hearings, the reason for suspicions, targeted groups, public reaction, and consequences for those found guilty. Whew! We're a bit out of breath! It's a veritable cauldron of ideas!
Satisfied with these measures, you will complete research online, in history textbooks, and informational texts devoted specifically to the two topics. Essays by enchanters in the field, historians, and professors in the appropriate subject areas (English, history, sociology, etc.) will provide many types of information you’ll use in your report.
Your sources will yield a charming variety of facts, direct quotations, statistics, details, and definitions. These will help explain how The Crucible and the McCarthy Hearings are alike and different. These facts will help support your thesis that there are many more similarities than differences. In addition to written factual material, you’ve been asked to include multi-media. More research on websites such as History.com and YouTube finds many videos that can be embedded into your essay. Clips from the film version of The Crucible would also add interest to your paper, and don’t forget about photographs of those on trial at the McCarthy Hearings.
The shaping of your paper will be important. Be crafty. You could choose to write all about one situation, say the play first, followed by the McCarthy Hearings. Or, you could discuss each element one at a time, but addressing both situations. Looking at your notes, you decide to use the criteria strategy. Transition words, such as similar to, also, unlike, similarly, in the same way, likewise, again, compared to, in contrast, in like manner, contrasted with, on the contrary, however, although, yet, even though, etc., will be needed to compare and contrast the two situations.
As you write your paper, use headings to indicate the specific criteria that each section covers. These headings act as a map for your reader. Make sure that the arrangement of criteria is appropriate as well. For example, the time period, the setting, and the principles would certainly come first in your essay. Criteria, such as findings and punishments, would be reserved until the end of your paper.
You’ll have your readers HANGING on your every word. Since this is an academic paper, be sure to avoid slang. That’s right… no hocus-pocus. Instead, use mature vocabulary fitting to your classmates. When using words such as subversion, censure, allegations, filibustering, hearing, and McCarthyism, wave that magic wand over your paper and describe them. Your writing will be more mature with these types of words.
Apply your knowledge about syntax, and use a variety of sentence types, too. Short sentences mixed in with longer ones can be very effective. We mean that. You might also try to use simile and metaphor to liven up your text. Finally, the use of analogy is perfect for this assignment. In fact, you’ve set out to prove The Crucible is an analogy (a parallel comparison) for the McCarthy hearings.
At the end of your paper, be sure to draw a conclusion about the facts you have provided. Answer the original question. Does The Crucible really mirror the McCarthy Hearings as the playwright intended? While you’re using facts to determine this answer, it’s your ANALYSIS of those facts that will cast a spell on the audience.
Quiz Questions
Here's an example of a quiz that could be used to test this standard.
Match the description to the word.
- ACT English 2.4 Punctuation
- ACT English 3.1 Punctuation
- ACT English 3.3 Punctuation
- SAT Reading 1.1 Sentence Completion
- SAT Reading 1.2 Sentence Completion
- SAT Reading 1.3 Sentence Completion
- SAT Reading 1.4 Sentence Completion
- SAT Reading 1.5 Sentence Completion
- SAT Reading 1.6 Sentence Completion
- SAT Reading 1.7 Sentence Completion
- SAT Reading 1.8 Sentence Completion
- SAT Reading 1.9 Sentence Completion
- SAT Reading 2.1 Sentence Completion
- SAT Reading 2.2 Sentence Completion
- SAT Reading 2.3 Sentence Completion
- SAT Reading 2.4 Sentence Completion
- SAT Reading 2.5 Sentence Completion
- SAT Reading 2.6 Sentence Completion
- SAT Reading 2.7 Sentence Completion
- SAT Reading 2.8 Sentence Completion
- SAT Reading 2.9 Sentence Completion
- SAT Reading 3.1 Sentence Completion
- SAT Reading 3.2 Sentence Completion
- SAT Reading 3.3 Sentence Completion
- SAT Reading 3.4 Sentence Completion
- SAT Reading 3.5 Sentence Completion
- SAT Reading 3.6 Sentence Completion
- SAT Reading 3.7 Sentence Completion
- SAT Reading 3.8 Sentence Completion
- SAT Reading 3.9 Sentence Completion
- SAT Reading 4.1 Sentence Completion
- SAT Reading 4.2 Sentence Completion
- SAT Reading 4.3 Sentence Completion
- SAT Reading 4.4 Sentence Completion
- SAT Reading 4.5 Sentence Completion | http://www.shmoop.com/common-core-standards/ccss-ela-literacy-w-11-12-2.html |
4.3125 | Antarctic ice sheet
It covers about 98% of the Antarctic continent and is the largest single mass of ice on Earth. It covers an area of almost 14 million square kilometres (5.4 million square miles) and contains 26.5 million cubic kilometres (6,400,000 cubic miles) of ice. That is, approximately 61 percent of all fresh water on the Earth is held in the Antarctic ice sheet, an amount equivalent to about 58 m of sea-level rise. In East Antarctica, the ice sheet rests on a major land mass, but in West Antarctica the bed can extend to more than 2,500 m below sea level. Much of the land in this area would be seabed if the ice sheet were not there.
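The quoted 58 m figure can be sanity-checked with a first-order calculation. The densities and ocean area below are standard reference values, not from this article, and the naive result overshoots because some Antarctic ice already sits below sea level and so displaces ocean water today:

    ICE_VOLUME_KM3 = 26.5e6      # from the text above
    RHO_ICE = 917.0              # kg/m^3 (standard value)
    RHO_WATER = 1000.0           # kg/m^3
    OCEAN_AREA_KM2 = 3.61e8      # global ocean area (standard value)

    water_equiv_km3 = ICE_VOLUME_KM3 * RHO_ICE / RHO_WATER
    rise_m = water_equiv_km3 / OCEAN_AREA_KM2 * 1000.0    # km -> m
    print(f"naive sea-level equivalent: {rise_m:.0f} m")  # ~67 m
    # Correcting for ice grounded below sea level (and other effects) brings
    # this down to the ~58 m cited above.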
In contrast to the melting of the Arctic sea ice, sea ice around Antarctica was expanding as of 2013. The reasons for this are not fully understood, but suggestions include the climatic effects on ocean and atmospheric circulation of the ozone hole, and/or cooler ocean surface temperatures as the warming deep waters melt the ice shelves.
The icing of Antarctica began with ice-rafting from middle Eocene times about 45.5 million years ago and expanded inland widely during the Eocene–Oligocene extinction event about 34 million years ago. CO2 levels were then about 760 ppm and had been decreasing from earlier levels in the thousands of ppm. Carbon dioxide decrease, with a tipping point of 600 ppm, was the primary agent forcing Antarctic glaciation. The glaciation was favored by an interval when the Earth's orbit favored cool summers, but the changes in oxygen isotope ratio cycle markers were too large to be explained by Antarctic ice-sheet growth alone, indicating an ice age of some size. The opening of the Drake Passage may have played a role as well, though models of the changes suggest that declining CO2 levels were more important.
Changes since the late twentieth century
According to a 2009 study, the continent-wide average surface temperature trend of Antarctica is positive and significant at >0.05 °C/decade since 1957. West Antarctica has warmed by more than 0.1 °C/decade in the last 50 years, and this warming is strongest in winter and spring. Although this is partly offset by fall cooling in East Antarctica, this effect is restricted to the 1980s and 1990s.
Sea ice and land ice
Ice enters the sheet through precipitation as snow. This snow is then compacted to form glacier ice which moves under gravity towards the coast. Most of it is carried to the coast by fast moving ice streams. The ice then passes into the ocean, often forming vast floating ice shelves. These shelves then melt or calve off to give icebergs that eventually melt.
If the transfer of the ice from the land to the sea is balanced by snow falling back on the land then there will be no net contribution to global sea levels. A 2002 analysis of NASA satellite data from 1979–1999 showed that while overall the land ice is decreasing, areas of Antarctica where sea ice was increasing outnumbered areas of decreasing sea ice roughly 2:1. The general trend shows that a warming climate in the southern hemisphere would transport more moisture to Antarctica, causing the interior ice sheets to grow, while calving events along the coast will increase, causing these areas to shrink. A 2006 paper, based on satellite data that measures changes in the gravity of the ice mass, suggests that the total amount of ice in Antarctica has begun decreasing in the past few years. A 2008 study compared the ice leaving the ice sheet, by measuring the ice velocity and thickness along the coast, to the amount of snow accumulation over the continent. This found that the East Antarctic Ice Sheet was in balance but the West Antarctic Ice Sheet was losing mass. This was largely due to acceleration of ice streams such as Pine Island Glacier. These results agree closely with the gravity changes. An estimate published in November 2012, based on the GRACE data as well as on an improved glacial isostatic adjustment model, discussed systematic uncertainty in the estimates and, by studying 26 separate regions, estimated an average yearly mass loss of 69 ± 18 Gt/y from 2002 to 2010. The mass loss was geographically uneven, mainly occurring along the Amundsen Sea coast, while the West Antarctic Ice Sheet mass was roughly constant and the East Antarctic Ice Sheet gained in mass.
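A useful conversion for such figures: adding about 362 gigatonnes of water to the ocean raises global mean sea level by roughly 1 mm. That conversion factor is a standard approximation, not from this article. A quick check in Python:

    GT_PER_MM_SLE = 362.0   # ~362 Gt of water per 1 mm of global sea level (standard approx.)

    def sea_level_mm_per_year(mass_loss_gt_per_year):
        """Sea-level rise rate implied by an ice-mass loss rate."""
        return mass_loss_gt_per_year / GT_PER_MM_SLE

    print(sea_level_mm_per_year(69))        # central estimate: ~0.19 mm/yr
    print(sea_level_mm_per_year(69 - 18))   # lower bound:      ~0.14 mm/yr
    print(sea_level_mm_per_year(69 + 18))   # upper bound:      ~0.24 mm/yr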
Antarctic sea ice anomalies have roughly followed the pattern of warming, with the greatest declines occurring off the coast of West Antarctica. East Antarctic sea ice has been increasing since 1978, though not at a statistically significant rate. The atmospheric warming has been directly linked to the mass losses in West Antarctica of the first decade of the twenty-first century. This mass loss is more likely to be due to increased melting of the ice shelves because of changes in ocean circulation patterns (which themselves may be linked to atmospheric circulation changes that may also explain the warming trends in West Antarctica). Melting of the ice shelves in turn causes the ice streams to speed up. The melting and disappearance of the floating ice shelves will have only a small direct effect on sea level, a consequence of the salinity difference between the freshwater ice and seawater. The most important consequence of their increased melting is the speeding up of the ice streams on land which are buttressed by these ice shelves.
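Why melting floating ice is not quite sea-level neutral: floating ice already displaces its own weight of seawater, but it melts into less dense fresh water, which occupies slightly more volume than the seawater it displaced. A sketch with standard density values (assumed, not from this article):

    RHO_FRESH = 1000.0   # kg/m^3, meltwater
    RHO_SEA = 1028.0     # kg/m^3, typical seawater

    def net_rise_fraction():
        """Fraction of meltwater volume that adds to sea level when floating ice melts."""
        return 1.0 - RHO_FRESH / RHO_SEA

    print(f"{net_rise_fraction():.1%}")   # ~2.7% of the meltwater volume
    # So shelf melt itself contributes little; the big effect is the
    # acceleration of the grounded ice streams the shelves hold back.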
| https://en.wikipedia.org/wiki/Antarctic_ice_sheet |
Since the early part of the 20th century, scientists have been going to sea on ships equipped with long, hollow pipes called corers. These corers are used to collect seafloor sediments, plunging into the ocean bottom and capturing long stratified plugs of the seabed. These sediments contain clues to a myriad of past conditions and events in the Earth's oceans and climate.
Over decades, technology and methods for handling coring systems have become safer, and core samples have become longer, reaching 20 to 25 meters (60 to 80 feet) in length. This longer reach provides access to deeper, older sediments, and thus a look further back into Earth history.
In the 1970s, Charley Hollister at the Woods Hole Oceanographic Institution (WHOI) recovered an historic core from the Northeast Pacific. It contained one of the longest continuous records of ocean basin history: 65 million years—back to times when dinosaurs still roamed the planet. As scientists still do today, Hollister and colleagues studied tiny fossilized shells and fish teeth found in the core. Careful analysis of these remnants can reveal the temperature, water chemistry, and other characteristics of ancient oceans.
Collecting such long cores has been neither easy nor routine for U.S. oceanographers. However, in 2007, scientists will have an opportunity to explore Earth's history with a newly developed coring system that will extract 45-meter (150-foot) cores from the seafloor, among the world's longest.
This massive instrument, weighing more than 30,000 pounds when assembled, is so large that the WHOI-operated 85-meter (279-foot) research vessel Knorr had to be specially modified to handle the system's extreme length and weight. The new device is nearly twice as long and five times as heavy as existing coring systems in the U.S. oceanographic fleet.
Private donations to WHOI, including the Cecil H. and Ida M. Green Technology Innovations Awards Program and the Grayce B. Kerr Fund, provided funds for feasibility studies and conceptual design. The National Science Foundation provided major funding for the construction of the system. To date, more than $5 million has been invested in engineering, ship modification, and equipment construction.
Historically, coring systems have encountered problems recovering samples of the soft sediment on the ocean bottom because the large weight of the equipment stretched the long cables needed to lower it to the seafloor. The elongated cable recoiled elastically, jerking the corer up after it penetrated the seabed and distorting the stratified layers that provide a chronology of past events in the ocean.
But an extremely strong, custom-made synthetic rope was designed for the WHOI long-coring system. Tests showed that even with a 35,000-pound load hanging in 15,000 feet of water, the rope would stretch less than 10 feet. This promises little recoil and high-quality cores with accurate records of past ocean conditions.
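Those test figures imply an enormous effective stiffness for the rope. As a rough back-of-the-envelope check (a sketch only: treating the rope as a simple linear spring is a simplifying assumption, and this is not WHOI's actual engineering analysis), the reported load and stretch pin down a lower bound on the rope's axial stiffness:

```python
# Stiffness implied by the rope test, modeling the rope as a linear spring:
#   stretch = load * length / stiffness
# The load, length, and stretch values come straight from the article.

load_lb = 35_000        # suspended load (pounds)
length_ft = 15_000      # rope paid out (feet)
max_stretch_ft = 10     # observed stretch was less than this

# Minimum effective axial stiffness consistent with the test:
ea_min_lb = load_lb * length_ft / max_stretch_ft
print(f"Implied axial stiffness: at least {ea_min_lb:,.0f} lb")  # ~52,500,000 lb

# The fractional elongation is tiny, which is why recoil is negligible:
strain = max_stretch_ft / length_ft
print(f"Maximum strain: {strain:.2%}")  # under 0.07%
```

In other words, at these loads the rope behaves almost like a rigid rod, which is exactly what keeps the corer from being jerked back out of the seabed.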
By the fall of 2007, the complex array of components that make up the coring system will be installed onboard Knorr, and sea trials will begin. The first cruise will be purely an engineering evaluation of all aspects of the system. Once in service, though, the corer will be deployed all over the world's oceans to study climate history, sea-level change, and a variety of other scientific puzzles whose secrets are buried in sediments at the bottom of the sea.
Graphite Expert Review
Key Standards Supported
Counting and Cardinality
- K.CC: Compare numbers.
  - K.CC.7: Compare two numbers between 1 and 10 presented as written numerals.
- K.CC: Know number names and the count sequence.
  - K.CC.3: Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects).

Measurement and Data
- K.MD: Describe and compare measurable attributes.
  - K.MD.2: Directly compare two objects with a measurable attribute in common, to see which object has "more of"/"less of" the attribute, and describe the difference. For example, directly compare the heights of two children and describe one child as taller/shorter.
Compare and contrast the asthenosphere with the lithosphere.

2 Answers
The asthenosphere is located roughly 62 to 124 miles (100 to 200 kilometers) below Earth's surface, directly beneath the lithosphere. This layer is what makes plate tectonics possible: its temperatures are high, its rock is plastic (solid, but able to flow slowly), and seismic waves slow as they pass through it. The lithosphere, by contrast, is the crust plus the uppermost part of the mantle. It is at the uppermost part of the asthenosphere that the plates of Earth's crust move. Because of the heat and pressure in the asthenosphere, the more rigid rocks above it behave elastically and can eventually break, which causes crustal movement, fault lines, and earthquakes. Convection currents carry heat from Earth's interior toward the surface, and this energy is responsible for the movement of the crustal plates. You can envision a bowl of soup with croutons floating on top of it to represent, respectively, the asthenosphere and the lithosphere.
The lithosphere is the rigid outer shell of the Earth, made up of the crust and the uppermost part of the mantle. Directly beneath it lies the asthenosphere, the next layer of the mantle, where the rocks have a quality called plasticity: although solid, they tend to flow slowly, more like a very viscous liquid. Below that is the rest of the mantle, which remains solid but continues to deform and flow gradually over geologic time.
High School: Geometry
Similarity, Right Triangles, and Trigonometry HSG-SRT.D.11
11. Understand and apply the Law of Sines and the Law of Cosines to find unknown measurements in right and non-right triangles (e.g., surveying problems, resultant forces).
Students have already proven and used the Laws of Sines and Cosines, but now they're expected to extend their use of these laws, applying them to real-world applications and word problems.
How will students master this standard? The same way they get to Carnegie Hall: practice, practice, practice.
It is, in fact, through solving many, many application problems using the Laws of Sines and Cosines that students will build their understanding of and confidence in these laws. And as they grow more confident and better understand what they're doing, they'll be better prepared to demonstrate mastery of the standard. Sort of a feedback loop of mathematical brilliance, if you will.
Practice, practice, practice will also expose students to the many kinds of problems that can be solved with these laws, helping them see the laws' uses across disciplines: not only math, but also engineering, surveying, orienteering, architecture, and so on.
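To make the surveying angle concrete, here's a minimal sketch in Python (the station, the landmarks, and all of the numbers are invented for illustration): a surveyor at station C measures the distances to two landmarks A and B and the angle between the sightlines, then uses the Law of Cosines to get the distance AB and the Law of Sines to recover the remaining angles.

```python
import math

# Hypothetical survey: from station C, landmark A is 400 m away (side b),
# landmark B is 650 m away (side a), and the measured angle at C between
# the two sightlines is 57 degrees.
b = 400.0
a = 650.0
C = math.radians(57.0)

# Law of Cosines gives the unknown distance AB (side c, opposite angle C):
#   c^2 = a^2 + b^2 - 2ab*cos(C)
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))

# Law of Sines recovers angle B. Solving for the angle opposite the
# *shorter* known side guarantees it is acute, so asin's answer is correct.
B = math.asin(b * math.sin(C) / c)
A = math.pi - C - B  # the three angles sum to 180 degrees

print(f"Distance AB: {c:.1f} m")        # about 547.1 m
print(f"A = {math.degrees(A):.1f} deg, B = {math.degrees(B):.1f} deg")
```

That asin caveat is the classic ambiguous case students trip over: solve for the smallest unknown angle first, and the rest of the triangle sorts itself out.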
- Parallelogram: Find the Diagonal - Math Shack
- Law of Sines Given AAS - Math Shack
- Law of Sines Given SSA - Math Shack
- Law of Cosines Given SSS - Math Shack
- Law of Cosines Given SSA - Math Shack
- Law of Sines Given SSA Ambiguous Case - Math Shack
- Using the Law of Sines - Math Shack | http://www.shmoop.com/common-core-standards/ccss-hs-g-srt-11.html |
Last week I talked about a paper in press in the journal Icarus on the Formation of Io's Mountains. More specifically, the paper described the sorts of stresses that could result in the formation of thrust faults that are the leading theory for how individual chunks of Io's crust are pushed upward to form some of the tallest mountains in the solar system. This week I thought I would talk about how the other major landform type on Io is formed: paterae.
Much of the following discussion is developed from two sources: Keszthelyi et al. 2004, "A post-Galileo view of Io's interior," and Ashley Davies's 2007 book, "Volcanism on Io: A Comparison with Earth."
The term patera is the name given by the International Astronomical Union to "irregular craters, or complex ones with scalloped edges." Patera (plural: paterae) comes from the Latin for "saucer." On most planetary bodies the term is used for volcanic pits, like Garland Patera on Venus, Leviathan Patera on Triton, or Uranius Patera on Mars, and that is the convention on Io as well. Radebaugh et al. 2001 counted and measured more than 400 paterae across Io's surface from Voyager and Galileo imagery and found that they average 41 km in diameter, larger than similar volcanic depressions on Earth, Venus, and Mars. While these features bear a resemblance to terrestrial calderas, there are problems with assuming their formation is similar. Calderas on Earth (and on other planets like Mars) form when a freshly emptied magma chamber collapses after an eruption and the ground above fills the void, leaving a depression on the surface. In the case of mafic volcanoes like Kilauea, a caldera forms from subsidence driven by flows erupted mostly outside the volcanic pit. For Io, Keszthelyi et al. point out that there is no evidence for voluminous eruptions, with flows outside the paterae, carrying enough lava to explain the scale of many of Io's paterae. Plus, the magma on Io is not likely to be silicic like that erupted in large, Yellowstone-type terrestrial caldera-forming events. Tectonic extension may account for some of Io's paterae, like Hi'iaka and Monan, but this is not thought to be the dominant process.
Keszthelyi et al. 2004 put forward another hypothesis for the formation of paterae. This model grew out of the authors' examination of the post-Galileo view of Io's lithosphere and of high-resolution images of paterae from Galileo and Voyager. In the post-Galileo view, the upper layers of Io's lithosphere are cold, composed of interbedded sulfur and sulfur dioxide ice and frost mixed with cooled silicate lava flows and pyroclastic deposits. The temperature increases dramatically near the base of the lithosphere, but through much of its thickness it stays fairly close to the surface temperature. This follows from the heat-pipe advection theory, first proposed by O'Reilly and Davies 1981 and discussed more extensively in my post about Kirchoff et al. 2009, which holds that Io's internal heat is released almost entirely by volcanic activity, so very little heat is transferred from the asthenosphere up through the lithosphere itself. What this means is that a crust of interbedded sulfur- and silicate-rich materials is actually remarkably stable and can support topography at least 10 km high. The deeper you go in the lithosphere, the more volatiles are driven out and the more silicate-rich it becomes, which allows magma to ascend through this volatile-poor region. When it reaches the volatile-rich upper lithosphere, the magma can eventually find it difficult to ascend further as it becomes neutrally buoyant. It then stalls out and forms a sill, an intrusive magma body that lies parallel to the pre-existing crustal layers.
Looking at high-resolution images of Io, particularly of the region between Chaac and Camaxtli Paterae on the satellite's anti-jovian hemisphere, Keszthelyi et al. 2004 came up with a model for how these sills can develop into paterae. As magma continues to be injected into the forming sill, the nearby lithosphere heats up, particularly the volatiles in the immediate vicinity. Over a period of hundreds to a few tens of thousands of years, these volatiles (like sulfur and sulfur dioxide) melt and rise to the surface, forming sulfurous flows (along with some silicate magma that may also reach the surface), and also move laterally through the subsurface away from the growing sill. This may be what is going on now at Sobo Fluctus. This removal of volatiles causes a partial collapse above the sill, forming a shallow patera similar to Grannos Patera. As the sill grows and more sulfur melts, the patera gets deeper, down to the level of the partially molten S/SO2 near the sill. This produces the kind of sulfur flows seen at Balder Patera or Ababinili Patera. More basaltic magma reaches the patera floor as well, forming silicate flows like those at Camaxtli Patera.
Eventually, more and more ground is "eaten" away above the sill until it becomes "unroofed," with the top of the sill becoming the new floor of the patera. At that stage you see silicate lava lakes, for example at Loki Patera or Tupan Patera (see the image at the top of this post). "Islands" in these lava-lake paterae may result from parts of the sill roof cooling enough during its formation to create a thick crust near its middle (or at least over the conduit at the bottom of the sill). As paterae remain active, they not only become deeper, until they reach the level of the underlying sill, but also grow laterally as the silicate flows on the floor and the growth of the sill undermine the sulfur-rich patera walls. Keszthelyi et al. suggest that this would explain the steep walls, since the undermining remains active as long as the volcano itself does. We have even seen some paterae degrade mountains in this way, as at Gish Bar Mons, which is being "eaten" by Gish Bar Patera and Estan Patera.
The patera formation model presented in Keszthelyi et al. 2004 seems to explain not only the general morphology of paterae on Io but also the variations in that morphology from volcano to volcano, which appear to result from differences in magma supply and age. The model also explains why no new paterae have formed since spacecraft observations began in 1979: patera formation is a far more gradual process than the 30-year span of observations can capture.
Link: A post-Galileo view of Io's interior [dx.doi.org]
Memory Storage and Management
When an operating system manages the computer's memory, there are two broad tasks to be accomplished:
- Each process must have enough memory in which to execute, and it can neither run into the memory space of another process nor be run into by another process.
- The different types of memory in the system must be used properly so that each process can run most effectively.
The first task requires the operating system to set up memory boundaries for types of software and for individual applications.
As an example, let's look at an imaginary small system with 1 megabyte (1,000 kilobytes) of RAM. During the boot process, the operating system of our imaginary computer is designed to go to the top of available memory and then "back up" far enough to meet the needs of the operating system itself. Let's say that the operating system needs 300 kilobytes to run. Now, the operating system goes to the bottom of the pool of RAM and starts building up with the various driver software required to control the hardware subsystems of the computer. In our imaginary computer, the drivers take up 200 kilobytes. So after getting the operating system completely loaded, there are 500 kilobytes remaining for application processes.
When applications begin to be loaded into memory, they are loaded in block sizes determined by the operating system. If the block size is 2 kilobytes, then every process that's loaded will be given a chunk of memory that's a multiple of 2 kilobytes in size. Applications will be loaded in these fixed block sizes, with the blocks starting and ending on boundaries established by words of 4 or 8 bytes. These blocks and boundaries help to ensure that applications won't be loaded on top of one another's space by a poorly calculated bit or two. With that ensured, the larger question is what to do when the 500-kilobyte application space is filled.
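To see how that block-granular loading plays out, here's a minimal toy allocator in Python (the 2-kilobyte block size and the 500-kilobyte application space are the article's running example; the allocator itself is an illustration, not how any real operating system is implemented):

```python
BLOCK_SIZE = 2 * 1024      # allocation granularity: 2-kilobyte blocks
APP_SPACE = 500 * 1024     # memory left over for applications

free_blocks = APP_SPACE // BLOCK_SIZE   # 250 blocks available
allocations = {}                        # app name -> blocks granted

def blocks_needed(request_bytes):
    """Round a request up to a whole number of blocks (ceiling division)."""
    return -(-request_bytes // BLOCK_SIZE)

def load_app(name, size_bytes):
    """Grant an app whole blocks, or refuse if the space is filled."""
    global free_blocks
    needed = blocks_needed(size_bytes)
    if needed > free_blocks:
        return False
    free_blocks -= needed
    allocations[name] = needed
    return True

# A 3,000-byte request still consumes two full blocks (4,096 bytes),
# so no two apps can ever share, or straddle, a block boundary.
load_app("editor", 3_000)
print(allocations["editor"])                              # 2
print(free_blocks * BLOCK_SIZE // 1024, "KB remaining")   # 496 KB
```

The rounding wastes a little memory inside each app's last block, but that waste is the price of guaranteeing clean boundaries between processes.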
In most computers, it's possible to add memory beyond the original capacity. For example, you might expand RAM from 1 to 2 gigabytes. This works fine, but can be relatively expensive. It also ignores a fundamental fact of computing: most of the information that an application stores in memory is not being used at any given moment. A processor can only access memory one location at a time, so the vast majority of RAM is unused at any moment. Since disk space is cheap compared to RAM, moving information from RAM to the hard disk can greatly expand RAM space at little cost. This technique is called virtual memory management.
Disk storage is only one of the memory types that must be managed by the operating system, and it's also the slowest. Ranked in order of speed, the types of memory in a computer system are:
- High-speed cache -- This is fast, relatively small amounts of memory that are available to the CPU through the fastest connections. Cache controllers predict which pieces of data the CPU will need next and pull it from main memory into high-speed cache to speed up system performance.
- Main memory -- This is the RAM that you see measured in megabytes when you buy a computer.
- Secondary memory -- This is most often some sort of rotating magnetic storage that keeps applications and data available to be used, and serves as virtual RAM under the control of the operating system.
The operating system must balance the needs of the various processes with the availability of the different types of memory, moving data in blocks (called pages) between available memory as the schedule of processes dictates. | http://computer.howstuffworks.com/operating-system7.htm |
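Here's a minimal sketch of that page-moving policy in Python (real virtual memory lives in hardware page tables and kernel code, so treat this strictly as an illustration of the idea, with a least-recently-used eviction rule chosen for simplicity): when RAM is full, the page untouched for the longest time is pushed out to disk to make room.

```python
from collections import OrderedDict

RAM_FRAMES = 3          # toy machine: RAM holds only three pages

ram = OrderedDict()     # page id -> contents; insertion order tracks recency
disk = {}               # evicted pages live here (the "swap space")

def touch(page, data=None):
    """Access a page, faulting it in from disk and evicting the
    least-recently-used page if RAM is full."""
    if page in ram:
        ram.move_to_end(page)            # mark as most recently used
        return
    if page in disk:                     # page fault: bring it back
        data = disk.pop(page)
    if len(ram) >= RAM_FRAMES:           # RAM full: evict the LRU page
        victim, contents = ram.popitem(last=False)
        disk[victim] = contents
    ram[page] = data

for p in ["A", "B", "C", "A", "D"]:      # loading "D" forces "B" to disk
    touch(p, data=f"contents of {p}")

print(list(ram))    # ['C', 'A', 'D']
print(list(disk))   # ['B']
```

The running process never notices any of this shuffling; it simply sees one large, uniform memory, which is the whole point of virtual memory management.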