Dataset schema: question (string, lengths 18 to 1.2k characters); facts (string, lengths 44 to 500k characters); answer (string, lengths 1 to 147 characters).
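Each row below is one (question, facts, answer) triple. Here is a minimal, hypothetical sketch of how such a record might be represented and filtered in code; the field names come from the schema above, while the loading mechanism and the answerable() helper are illustrative assumptions, not part of the source.

```python
# Sketch only: assumes each row is a dict with the three string fields above.
record = {
    "question": "What is the strong inelastic material found in a human tendon?",
    "facts": "Tendon | Article about tendon by The Free Dictionary ...",  # long supporting passage
    "answer": "Collagen",
}

def answerable(rows):
    """Keep only rows whose retrieved facts actually support an answer
    (rows without support use the literal answer "i don't know")."""
    return [r for r in rows if r["answer"].lower() != "i don't know"]

print(answerable([record]))  # -> [record], since its answer is not "i don't know"
```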
What is the strong inelastic material found in a human tendon?
Tendon | Article about tendon by The Free Dictionary http://encyclopedia2.thefreedictionary.com/tendon tendon, tough cord composed of closely packed white fibers of connective tissue that serves to attach muscles to internal structures such as bones or other muscles. Sometimes when the muscle involved is thin and wide, the tendon is not a cord but a thin sheet known as an aponeurosis. The purpose of the tendon in attaching muscle to bone is to enable the power of the muscle to transfer over a distance. For example, when one wants to move a finger, specific muscles in the forearm contract and pull on tendons that in turn pull the finger bones to produce the desired action. Tendon: A cord connecting a muscle to another structure, often a bone. A tendon is a passive material, lengthening when the tension increases and shortening when it decreases. This characteristic contrasts with the active behavior of muscle. Away from its muscle, a tendon is a compact cord. At the muscle, it spreads into thin sheets called aponeuroses, which lie over and sometimes within the muscle belly. The large surface area of the aponeuroses allows the attachment of muscle fibers with a total cross-sectional area that is typically 50 times that of the tendon. See Muscle. Tendons are living tissues that contain cells. In adult tendons, the cells occupy only a very small proportion of the volume and have a negligible effect on the mechanical properties. Like other connective tissues, tendon depends on the protein collagen for its strength and rigidity. The arrangement of the long, thin collagenous fibers is essentially longitudinal, but incorporates a characteristic waviness known as crimp. The fibers lie within a matrix of aqueous gel. Thus, tendon is a fiber-reinforced composite (like fiberglass), but its collagen is much less stiff than the glass and its matrix is very much less stiff than the resin. See Collagen. The function of tendons is to transmit force. They allow the force from the muscle to be applied in a restricted region. For example, the main muscles of the fingers are in the forearm, with tendons to the fingertips. If the hand had to accommodate these muscles, it would be too plump to be functional. Tendon extension can also be significant in the movement of a joint. For example, the tendon which flexes a human thumb joint is about 7 in. (170 mm) long. The maximum force from its muscle stretches this tendon about 0.1 in. (2.9 mm), which corresponds to rotation of the joint through an angle of about 21°. See Joint (anatomy). Some tendons save energy by acting as springs. In humans, the Achilles tendon reduces the energy needed for running by about 35%. This tendon is stretched during the first half of each step, storing energy which is then returned during takeoff. This elastic energy transfer involves little energy loss, whereas the equivalent work done by muscles would require metabolic energy in both stages. See Connective tissue, Muscular system. Tendon: a cord consisting of connective tissue; a tendon attaches a muscle to a bone and causes a contracting muscle to move. Tendons are composed of thick, strong, inelastic collagen fibers. The fibers are continuous with the muscle fibers at one end and are interwoven into the periosteum at the other end. Tendons vary in shape; those attached to long muscles are cylindrical, and those attached to transverse muscles are flattened and are termed aponeuroses.
The centrum tendineum and galea aponeurotica are distinctive in shape. Some tendons, for example, those of the long flexor muscles of the fingers and toes, are surrounded by a synovial membrane that releases a fluid enabling the tendons to slide easily during motion. Tendon function may be impaired by inflammation or injury. Diseases of the tendons and synovial bursae are treated conservatively. Surgery is indicated when tendons are ruptured as a result of injury. tendon
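A quick check of the thumb-tendon figures quoted above, as a worked example; the moment arm in the second step is inferred from the quoted numbers, not stated in the source.

```latex
% Strain in the thumb flexor tendon from the quoted figures:
\[
\varepsilon = \frac{\Delta \ell}{\ell_0} = \frac{2.9\ \text{mm}}{170\ \text{mm}} \approx 1.7\%
\]
% If the tendon runs over the joint with moment arm r, the joint rotates by
% theta = Delta(l) / r; matching the quoted 21 degrees gives the implied moment arm:
\[
r = \frac{\Delta \ell}{\theta} = \frac{2.9\ \text{mm}}{21^{\circ} \times \pi/180} \approx 7.9\ \text{mm}
\]
```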
Collagen
What material forms the hard outermost layer of a human tooth?
IV. Myology. 3. Tendons, Aponeuroses, and Fasciæ. Gray, Henry (1821–1865). Anatomy of the Human Body. 1918. Tendons are white, glistening, fibrous cords, varying in length and thickness, sometimes round, sometimes flattened, and devoid of elasticity. They consist almost entirely of white fibrous tissue, the fibrils of which have an undulating course parallel with each other and are firmly united together. When boiled in water tendon is almost completely converted into gelatin, the white fibers being composed of the albuminoid collagen, which is often regarded as the anhydride of gelatin. They are very sparingly supplied with bloodvessels, the smaller tendons presenting in their interior no trace of them. Nerves supplying tendons have special modifications of their terminal fibers, named organs of Golgi. The tendons and aponeuroses are connected, on the one hand, with the muscles, and, on the other hand, with the movable structures, as the bones, cartilages, ligaments, and fibrous membranes (for instance, the sclera). Where the muscular fibers are in a direct line with those of the tendon or aponeurosis, the two are directly continuous. But where the muscular fibers join the tendon or aponeurosis at an oblique angle, they end, according to Kölliker, in rounded extremities which are received into corresponding depressions on the surface of the latter, the connective tissue between the muscular fibers being continuous with that of the tendon. The latter mode of attachment occurs in all the penniform and bipenniform muscles, and in those muscles the tendons of which commence in a membranous form, as the Gastrocnemius and Soleus. The fasciæ are fibroareolar or aponeurotic laminæ, of variable thickness and strength, found in all regions of the body, investing the softer and more delicate organs. During the process of development many of the cells of the mesoderm are differentiated into bones, muscles, vessels, etc.; the cells of the mesoderm which are not so utilized form an investment for these structures and are differentiated into the true skin and the fasciæ of the body. They have been subdivided, from the situations in which they occur, into superficial and deep. The superficial fascia is found immediately beneath the integument over almost the entire surface of the body. It connects the skin with the deep fascia, and consists of fibroareolar tissue, containing in its meshes pellicles of fat in varying quantity. Fibro-areolar tissue is composed of white fibers and yellow elastic fibers intercrossing in all directions, and united together by a homogeneous cement or ground substance, the matrix. The cells of areolar tissue are of four principal kinds: (1) Flattened lamellar cells, which may be either branched or unbranched. The branched lamellar cells are composed of clear cytoplasm, and contain oval nuclei; the processes of these cells may unite so as to form an open network, as in the cornea. The unbranched cells are joined edge to edge like the cells of an epithelium; the “tendon cells,” presently to be described, are examples of this variety. (2) Clasmatocytes, large irregular cells characterized by the presence of granules or vacuoles in their protoplasm, and containing oval nuclei. (3) Granule cells (Mastzellen), which are ovoid or spheroidal in shape. They are formed of a soft protoplasm, containing granules which are basophil in character.
(4) Plasma cells of Waldeyer, usually spheroidal and distinguished by containing a vacuolated protoplasm. The vacuoles are filled with fluid, and the protoplasm between the spaces is clear, with occasionally a few scattered basophil granules. [FIG. 377 – Subcutaneous tissue from a young rabbit. Highly magnified. (Schäfer.)] In addition to these four typical forms of connective-tissue corpuscles, areolar tissue may be seen to possess wandering cells, i.e., leucocytes which have emigrated from the neighboring vessels; in some instances, as in the choroid coat of the eye, cells filled with granules of pigment (pigment cells) are found. The cells lie in spaces in the ground substance between the bundles of fibers, and these spaces may be brought into view by treating the tissue with nitrate of silver and exposing it to the light. This will color the ground substance and leave the cell-spaces unstained. Fat is entirely absent in the subcutaneous tissue of the eyelids, of the penis and scrotum, and of the labia minora. It varies in thickness in different parts of the body; in the groin it is so thick that it may be subdivided into several laminæ. Beneath the fatty layer there is generally another layer of superficial fascia, comparatively devoid of adipose tissue, in which the trunks of the subcutaneous vessels and nerves are found, as the superficial epigastric vessels in the abdominal region, the superficial veins in the forearm, the saphenous veins in the leg and thigh, and the superficial lymph glands. Certain cutaneous muscles also are situated in the superficial fascia, as the Platysma in the neck, and the Orbicularis oculi around the eyelids. This fascia is most distinct at the lower part of the abdomen, perineum, and extremities; it is very thin in those regions where muscular fibers are inserted into the integument, as on the side of the neck, the face, and around the margin of the anus. It is very dense in the scalp, in the palms of the hands, and soles of the feet, forming a fibro-fatty layer, which binds the integument firmly to the underlying structures. The superficial fascia connects the skin to the subjacent parts, facilitates the movement of the skin, serves as a soft nidus for the passage of vessels and nerves to the integument, and retains the warmth of the body, since the fat contained in its areolæ is a bad conductor of heat. The deep fascia is a dense, inelastic, fibrous membrane, forming sheaths for the muscles, and in some cases affording them broad surfaces for attachment. It consists of shining tendinous fibers, placed parallel with one another, and connected together by other fibers disposed in a rectilinear manner. It forms a strong investment which not only binds down collectively the muscles in each region, but gives a separate sheath to each, as well as to the vessels and nerves. The fasciæ are thick in unprotected situations, as on the lateral side of a limb, and thinner on the medial side. The deep fasciæ assist the muscles in their actions, by the degree of tension and pressure they make upon their surfaces; the degree of tension and pressure is regulated by the associated muscles, as, for instance, by the Tensor fasciæ latæ and Glutæus maximus in the thigh, by the Biceps in the upper and lower extremities, and Palmaris longus in the hand.
In the limbs, the fasciæ not only invest the entire limb, but give off septa which separate the various muscles, and are attached to the periosteum: these prolongations of fasciæ are usually spoken of as intermuscular septa.
i don't know
Snowflakes are symmetrical. How many sides do they have?
Frequently Asked Questions about Snow Crystals: things you always wanted to know about snow crystals. Why do snow crystals form in such complex and symmetrical shapes? To see why snowflakes look like they do, consider the life history of a single snow crystal, as shown in the diagram at right. The story begins up in a cloud, when a minute cloud droplet first freezes into a tiny particle of ice. As water vapor starts condensing on its surface, the ice particle quickly develops facets, thus becoming a small hexagonal prism. For a while it keeps this simple faceted shape as it grows. As the crystal becomes larger, however, branches begin to sprout from the six corners of the hexagon (this is the third stage in the diagram at right). Since the atmospheric conditions (e.g., temperature and humidity) are nearly constant across the small crystal, the six budding arms all grow out at roughly the same rate. While it grows, the crystal is blown to and fro inside the clouds, so the temperature it sees changes randomly with time. But the crystal growth depends strongly on temperature (as is seen in the morphology diagram). Thus the six arms of the snow crystal each change their growth with time. And because all six arms see the same conditions at the same times, they all grow about the same way. The end result is a complex, branched structure that is also six-fold symmetric. And note also that since snow crystals all follow slightly different paths through the clouds, individual crystals all tend to look different. The story is pretty simple, really, nicely encapsulated in the diagram above. And it's even a bit amazing, when you stop to ponder it -- the whole complex, beautiful, symmetrical structure of a snow crystal simply arises spontaneously, quite literally out of thin air, as it tumbles through the clouds. What synchronizes the growth of the six arms? Nothing. The six arms of a snow crystal all grow independently, as described in the previous section. But since they grow under the same randomly changing conditions, all six end up with similar shapes. If you think this is hard to swallow, let me assure you that the vast majority of snow crystals are not very symmetrical. Don't be fooled by the pictures -- irregular crystals (see the Guide to Snowflakes) are by far the most common type. If you don't believe me, just take a look for yourself next time it snows. Near-perfect, symmetrical snow crystals are fun to look at, but they are not common. Why do snow crystals have six arms? The six-fold symmetry of a snow crystal ultimately derives from the hexagonal geometry of the ice crystal lattice. But the lattice has molecular dimensions, so it's not trivial how this nano-scale symmetry is transferred to the structure of a large snow crystal. The way it works is through faceting. No long-range forces are necessary to form facets; they appear simply because of how the molecules hook up locally in the lattice (see Crystal Faceting for how this works). From faceting we get hexagonal prisms, which are large structures with six-fold symmetry. Eventually arms sprout from the corners of a prism, and six corners means six arms. Faceting is how the geometry of the water molecule is transferred to the geometry of a large snow crystal. Why is snow white? No, it's not a white dye. Snow is made of ice crystals, and up close the individual crystals look clear, like glass.
A large pile of snow crystals looks white for the same reason a pile of crushed glass looks white. Incident light is partially reflected by an ice surface, again just as it is from a glass surface. When you have a lot of partially reflecting surfaces, which you do in a snow bank, then incident light bounces around and eventually scatters back out. Since all colors are scattered roughly equally well, the snow bank appears white. In fact, the ice does absorb some light while it's bouncing around, and red light is absorbed more readily than blue light. Thus, if you look inside a snow bank you can sometimes see a blue color. I took a few pictures of this once in the California mountains. Is it ever too cold to snow? In principle it can snow at any temperature below freezing. It snows at the South Pole even though the temperature is rarely above -40 C (-40 F). In more hospitable climates, however, it doesn't snow so much when the temperature is below around -20 C (-4 F). When a parcel of moist air cools, it starts producing snow before it gets that cold. By the time the temperature drops to -20 C, the snow has already fallen and the air is pretty dry. The clouds that remain are made of ice crystals, and these don't produce much snow (see the Snowflake Primer for how clouds make snow). Why study the physics of snowflakes? There are several good reasons for studying how snowflakes form. First of all, crystals are useful in all sorts of applications, and we would like to know how to grow them better. Computers are carved out of silicon wafers, which in turn are cut from large silicon crystals. Many other semiconductor crystals are used for other electronics applications. Lasers are also made from crystals, and a variety of optical crystals are used extensively in telecommunications. Artificial diamond crystals are used in machining and grinding. The list of industrial crystals is actually quite long. By studying the physics of snowflakes, we learn about how molecules condense to form crystals. This basic knowledge applies to other materials as well. As we learn more about the physics and chemistry of how crystals grow, maybe someday we can use that knowledge to help fabricate new and better types of crystalline materials. This is the way that basic science becomes useful -- figure out how things work the best you can, and later on use that knowledge in unforeseen applications. Another good reason to study snowflakes is to better understand structure formation and self-assembly. Humans usually make a thing by starting with a block of material and carving from it. Computers, for example, are made by patterning intricate circuits on silicon wafers. Nature uses a completely different approach to manufacturing. In nature, things simply assemble themselves. Cells grow and divide, forming complex organisms. Even extremely sophisticated computers (such as your brain) arise from self-assembly. Your DNA does not contain nearly enough information to guide the placement of every cell in your body. Most of that structure simply arises spontaneously as you grow, following poorly understood rules. Biological self-assembly is an extremely complex process, and we do not understand much about how it works at a fundamental level. The snowflake is a very simple example of self-assembly. There is no blueprint or genetic code that guides the growth of a snowflake, yet marvelously complex structures appear, quite literally out of thin air.
As we understand better how snowflakes form, we learn about self-assembly. As the electronics industry pushes toward ever smaller devices, it is likely that self-assembly will play an increasingly important role in manufacturing. Learning about self-assembly from the ground up will probably be useful in this context also. Again, in the study of basic science we try to solve the easy problems first (like snowflakes), and later use that knowledge to develop engineering applications we cannot yet foresee. History has shown over and over that the fundamental knowledge gained by doing basic science (without worrying about what it's good for) often leads to useful engineering applications. There is a great deal of interesting physics, chemistry, and materials science wrapped up in snowflake growth, and studying the lowly snowflake may indeed teach us something useful. Now, all that being said, my personal motivation is not from potential practical applications. I am not trying to make better artificial snow, better ice for Olympic skating, bigger diamonds, faster computers, or anything like that. I believe that basic science can and should be pursued for its own sake. Scientists try to understand everything they can about how nature works, on the premise that all knowledge is potentially useful. Einstein didn't worry about the practical applications of relativity -- he just wanted to understand how nature worked. Maxwell didn't think about cell phone technology when he worked out the laws of electromagnetism -- he just wanted to understand how nature worked. I want to figure out the underlying physics of snowflake growth because this is an interesting puzzle in molecular dynamics. I would like to understand the fundamental physics of how molecules jostle into place to form a crystal. How fast does this happen? How does it change with temperature? What happens if there are chemical impurities on the ice surface? There are many such questions, and ice is an interesting case study in crystal growth. These remarkable structures simply fall from the sky -- we ought to understand how they are formed! With over six billion people on the planet, surely a few of us can be spared to ponder the subtle mysteries of snowflakes. Who else is working on the science of snow crystals? Not many people are thinking about why snow crystals look like they do, and most of them are meteorologists looking at how snow crystal formation affects the properties of cold clouds. Here is a partial list (in no particular order), along with contact information (in the links): Charles Knight, National Center for Atmospheric Research
6
What name is given to an atomic particle carrying a negative charge?
Symmetry of Snowflakes. [Diagram: surface bump → dendrite → complex snowflake] One thing you notice right away about snow crystals is that they form some elaborate and complex shapes -- often displaying lacy, branching structures. Where does this complexity come from? After all, snow crystals are nothing more than ice which has condensed from water vapor. How does the simple act of water vapor freezing into ice produce such intricate designs? The answers to these questions lie in just how water molecules travel through the air to condense onto a growing snow crystal. The water molecules have to diffuse through the air to reach the crystal, and this diffusion slows their growth. The farther water molecules have to diffuse through the air, the longer it takes them to reach the growing crystal. So consider a flat ice surface that is growing in the air. If a small bump happens to appear on the surface, then the bump sticks out a bit farther than the rest of the crystal. This means water molecules from afar can reach the bump a bit quicker than they can reach the rest of the crystal, because they don't have to diffuse quite as far. With more water molecules reaching the bump, the bump grows faster. In a short time, the bump sticks out even farther than it did before, and so it grows even faster. We call this a branching instability -- small bumps develop into large branches, and bumps on the branches become sidebranches. Complexity is born. This instability is a major player in producing the complex shapes of snow crystals. And since the ambient atmospheric conditions are nearly identical across the crystal, all six budding arms grow at roughly the same rate. The temperature seen by the snow crystal is not constant in time, however, since the crystal is being blown about and is thus carried over great distances in a cloud. But the crystal growth rates depend strongly on temperature. Thus the six arms of the snow crystal each change their growth with time, reflecting the ever-changing conditions in the cloud. And because each arm sees the same conditions, each arm grows the same way. When the branching instability applies itself over and over again to a growing snow crystal, the result is called an ice dendrite. The word dendrite means "tree-like," and stellar dendrite snow crystals are common. We can change diffusion in the lab and see how dendrites change. If one grows snow crystals in air below atmospheric pressure, they have fewer branches. This is because diffusion doesn't limit the growth so much at lower air pressures, so the branching instability is not so strong. At higher pressures, more branches appear. The growth of snow crystals depends on a balance between faceting and branching. Faceting tends to make simple flat surfaces, while branching tends to make more complex structures. The interplay between faceting and branching is a delicate one, depending strongly on things like temperature and humidity. This means snow crystals can grow in many different ways, resulting in the great diversity we see in snow crystal forms. So that's the story. The intricate shape of a single arm is determined by the ever-changing conditions experienced by the crystal as it falls. Because each arm experiences the same conditions, however, the arms tend to look alike. The end result is a large-scale, complex, six-fold symmetric snow crystal.
And since snow crystals all follow slightly different paths through the clouds, individual crystals all tend to look different.
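The branching instability described above (bumps intercept diffusing molecules first, so they grow into arms) can be illustrated with a toy diffusion-limited-aggregation model. This is a minimal sketch for intuition only, not the author's simulation; the grid size, particle count, and launch rule are arbitrary choices.

```python
import random

def grow_dla(n_particles=300, size=101, seed=0):
    """Toy diffusion-limited aggregation: random walkers stick where they first
    touch the cluster, so protruding bumps catch walkers sooner and grow into
    branches -- the branching instability described in the passage."""
    random.seed(seed)
    mid = size // 2
    grid = [[False] * size for _ in range(size)]
    grid[mid][mid] = True  # seed "ice" site at the center
    for _ in range(n_particles):
        # Launch each walker from a random point near the box boundary.
        x, y = random.choice([(1, random.randrange(size)),
                              (size - 2, random.randrange(size)),
                              (random.randrange(size), 1),
                              (random.randrange(size), size - 2)])
        while True:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if not (0 < x < size - 1 and 0 < y < size - 1):
                break  # wandered off the box; discard this walker
            if any(grid[x + a][y + b] for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                grid[x][y] = True  # touched the cluster: freeze in place
                break
    return grid

cluster = grow_dla()
print(sum(row.count(True) for row in cluster), "frozen sites")
```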
i don't know
DNA is found in which part of the cell?
Where is DNA Found? Learn About DNA in Human Cells as well as in Plants, Animals, Bacteria & Outer Space! Human Cells: Nucleus. DNA can be found inside the nucleus of every cell, apart from red blood cells. It's tightly wound and spread throughout the 46 chromosomes. One set of 23 chromosomes is inherited from each parent. Inside the chromosomes the DNA exists as genes. A gene is a sequence of DNA that, by and large, though there are exceptions, codes for one protein. There is a large volume of so-called 'junk DNA' that apparently serves no purpose, although there are bodies of work that are starting to show otherwise. Mitochondria: These are tiny organelles that are the energy factories of the cell. They contain a small amount of DNA that is distinct from nuclear DNA. For the most part mitochondrial DNA is inherited from the mother in sexually reproducing species. Plants and Animals: In plants and animals DNA is also found in the cell nucleus. The DNA of all animals is very similar. The major differences are in the number of chromosomes and genes, and the arrangement of base pairs within these genes. Viral DNA: A virus is essentially a very simple particle with nucleic acid at its core and a few essential proteins, such as its protein coat. The nucleic acid can be either DNA or RNA, depending on the kind of virus it is. The DNA can also be either single stranded or double stranded. Examples of viruses with a double stranded DNA molecule are Herpes simplex virus and the small pox virus. Examples of viruses with a single stranded DNA molecule are Adeno-associated virus and the M13 bacteriophage, which infects bacteria. Viruses do not possess nuclei. Bacterial DNA: The DNA is not enclosed inside a nucleus. It's free-floating as it is inside a virus. It's usually a single coil of DNA. In some bacteria there's additional DNA and this is located in structures known as plasmids. The DNA here is not essential to the survival of the bacterium. DNA in Space: Well, every time an astronaut blasts off. Digitized versions of personal DNA sequences will soon be sent up as part of a publicity drive to promote the Archon X $10 million genome sequencing prize. Among those whose DNA will be digitized are physicist and best-selling author Prof Stephen Hawking and the comedian Stephen Colbert.
Nucleus
By which name is the drug acetylsalicylic acid better known?
What is DNA? - Genetics Home Reference What is DNA? What is DNA? DNA, or deoxyribonucleic acid, is the hereditary material in humans and almost all other organisms. Nearly every cell in a person’s body has the same DNA. Most DNA is located in the cell nucleus (where it is called nuclear DNA), but a small amount of DNA can also be found in the mitochondria (where it is called mitochondrial DNA or mtDNA). The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). Human DNA consists of about 3 billion bases, and more than 99 percent of those bases are the same in all people. The order, or sequence, of these bases determines the information available for building and maintaining an organism, similar to the way in which letters of the alphabet appear in a certain order to form words and sentences. DNA bases pair up with each other, A with T and C with G, to form units called base pairs. Each base is also attached to a sugar molecule and a phosphate molecule. Together, a base, sugar, and phosphate are called a nucleotide. Nucleotides are arranged in two long strands that form a spiral called a double helix. The structure of the double helix is somewhat like a ladder, with the base pairs forming the ladder’s rungs and the sugar and phosphate molecules forming the vertical sidepieces of the ladder. An important property of DNA is that it can replicate, or make copies of itself. Each strand of DNA in the double helix can serve as a pattern for duplicating the sequence of bases. This is critical when cells divide because each new cell needs to have an exact copy of the DNA present in the old cell. DNA is a double helix formed by base pairs attached to a sugar-phosphate backbone. Credit: U.S. National Library of Medicine For more information about DNA: The National Human Genome Research Institute fact sheet Deoxyribonucleic Acid (DNA) provides an introduction to this molecule. Information about the genetic code and the structure of the DNA double helix is available from GeneEd.
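The A-with-T and C-with-G pairing rule described above is easy to express in code. This is a small illustrative snippet, not part of the source passage.

```python
# Watson-Crick pairing from the passage: adenine-thymine and cytosine-guanine.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired (complementary) strand for a DNA sequence of A/T/C/G."""
    return "".join(PAIR[base] for base in strand.upper())

print(complement("ATCG"))  # -> TAGC
```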
i don't know
How many legs has an insect?
How many legs does an insect have? | Reference.com. Quick answer: An insect has six legs. Insects' legs are jointed, and the movement of these joints is controlled by a combination of partial musculature and passive biomechanical non-muscular structures. Some insects also have a clawlike structure on the last segments of their legs. Full answer: All insects also have three major body regions, which typically consist of a head, a thorax, and an abdomen. All insects also have bilateral symmetry. Insects begin their lives as eggs and undergo a metamorphosis before becoming adults. Winged insects have either one pair of wings (such as a housefly or a mosquito) or two pairs of wings (such as a bee or a dragonfly).
six
Which flower has the same name as a diaphragm in the eye?
General Facts About Insects and Bugs | Scholastic Back General Facts About Insects and Bugs When is an insect not a bug? Do all insects bite? Which one is most poisonous? Experts answer 20 common questions. Grades 3–5, 6–8 The following questions were answered by zoo biologist Ellen Dierenfeld and entomologists John VanDyk and Steve Kutcher.   Q: Is there a difference between an insect and a bug? A: Yes, there is a difference. A bug is a certain type of insect. Some examples you might be familiar with are the boxelder bug, milkweed bug, assassin bug, and stink bug. True bugs have a stylet (a mouth shaped like a straw) that they use to suck plant juices from plants. The assassin bugs use their stylets to suck blood from other insects. The front wings of true bugs are thickened and colored near where they are attached to the insect's body, and are clearer and thinner towards the hind end of the wing. The hind wings are usually clear and tucked underneath the front wings.   Q: What is the largest insect? A: In the book Beetles by Bernard Klaustnizer, there is a beetle called the South American longhorn beetle (Tytanus giganteus) that measures 25 cm! The heaviest insect is probably the African goliath beetle (Megasoma elephas), weighing up to 3.4 oz. And the longest insect is a huge stick insect (Pharnacia serritypes). The females can be over 36 cm in length!!   Q: Is there an insect that is worth money? A: There are many, many insects that are worth money. For example, the pollination work done for free by insects would cost billions of dollars every year. Think about how much honey costs! Those bees are worth a lot of money. And insects like the praying mantis or ladybird beetle happily take care of eating harmful insects, saving money that could be spent on pesticides. There are also silk moths that produce silk, insects that produce shellac, and some insects that are canned and eaten! Make sure you don't let the reputation of a few harmful insects prevent you from noticing all the good ones.   Q: How do insects grow? A: Insects have their skeletons on the outside, with their soft parts inside. That makes it hard for them to grow. Every time they want to become bigger, they have to break out of their skin and swell up to their new size before their new skin hardens. This is called molting. This means that once the insect is at its final size (adult form), it can't grow any bigger! So the butterflies and moths that you see flying around won't be any bigger tomorrow than they are today!   Q: What do insects eat? A: Just about anything! There are so many different insects and each one may eat something different. Lots of them eat plants. Some of them eat other insects. Some of them eat blood (like mosquitoes). Nectar from plants is also a popular food. And many insects (like cockroaches or ants) will be happy to polish off that cookie you dropped on the floor!   Q: What's the most poisonous insect? A: According to the University of Florida Book of Insect Records, the most poisonous insects are in the order Hymenoptera (wasp, bees, and ants) and the ones with the most toxic venom are certain harvester ants.   Q: What's the fastest insect? A: Sphinx moths, or hawk moths, have been measured at 53 km/h. However, a horsefly (Hybomitra hinei wrighti) was recently clocked at 145 km/h! More research needs to be done in order to determine the fastest insect.   Q: Who discovered insects and where did the word "insect" come from? 
A: I'm not sure anyone "discovered" insects, in the same way we think about discovering electricity or magnetic fields. But Plato was aware of insects, way back in the ancient Greek era. Insects are referred to in the Bible. Linnaeus started to catalog all the insects he could find. As for the name "insect," it is from Latin; the name was originally given to certain small animals, whose bodies appear cut in, or almost divided.   Q: What insect lives the longest? A: A queen termite has been known to live 50 years and there are, of course, the 17-year locusts. Most bugs live less than a year and are seasonal. However, some wood beetles can emerge from wood where they live after as long as 40 years!! In one recorded case, the beetles came out of wood that had long ago been cut down and made into a bookshelf!   Q: What is the smallest insect? A: I'm not sure what the smallest insect is (I had one here somewhere, but I can't seem to find it...) but the smallest insect eggs belong to a member of the family Tachinidae, a group of parasitic flies. These eggs are usually only 0.02 to 0.2 mm long.   Q: Do all insects bite? A: There are lots of insects that don't bite people but do bite plants or other insects! Insects have different kinds of mouthparts. There are mouthparts for biting/chewing, strawlike mouthparts for sucking, and razor-sharp mouthparts for biting people. The vast majority of insects, however, do not bite people. They are content to eat plants, or nectar, or other insects.   Q: How many insects are in the world? A: If you are talking about the number of different kinds of insects in the world, Erik J. van Nieukerken has made a scientific estimate that there are 1,017,018 species of insects in the world. Wow! That means you could spend your whole life looking at different kinds of insects and never see them all.   Q: Why do insects like light? A: No one really knows. Most scientists think that bright lights confuse the insects'guidance systems so they can't fly straight any more.   Q: Why do insects have six legs instead of five or seven? A: One can get around efficiently on six legs. It is harder if you use five, because that's an odd number. You would have one leg stuck in the air while the others are running, or going down all by itself. If you have a chance, watch an insect walking and pay attention to how it uses its legs. Put another way, think how much more difficult it would be for you to walk if you had three legs!   Q: Why do insects have three parts to their bodies? A: That's a difficult question to answer. Maybe we can turn it around and ask, why don't you have three parts to your body? Or why don't you have a hard shell instead of soft skin? The answer is, no one knows. That is the way things have happened. We call animals with certain characteristics, like three main body parts, antennae, spiracles, etc., "insects." If they had eight legs and two main body parts, we would call them "spiders."   Q: Do insects have blood and do they bleed when they are hurt? A: Insects have blood, but it's not like our blood. Our blood is red because it has hemoglobin, which is used to carry oxygen to where it is needed in the body. Insects get oxygen from a complex system of air tubes that connect to the outside through openings called spiracles. So instead of carrying oxygen, their blood carries nutrients from one part of the body to another. They do bleed when they are hurt, and their blood can clot so they can recover from minor wounds.   Q: Why do insects drown in water? 
A: Not all insects drown in water. In fact, quite a few live there for at least part of their lives. Insects breathe through holes in the sides of their bodies. If they can't get air in through the holes, they will suffocate. That's why insects that are not specialized for living in water will die in water. But dragonfly nymphs, mosquito larvae, and water beetles all live in water quite happily!   Q: How do insects eat? A: Insects eat by either chewing their food (like grasshoppers and caterpillars), or sucking it up (like aphids, stinkbugs and mosquitoes). Take a close look at the mouthparts of an insect sometime. There are lots of parts (I think I would get confused trying to eat with so many parts!).   Q: Which insects pinch? A: Many insects that have biting/chewing mouthparts will nip you if you pick them up. Others, such as lady beetles, don't mind being picked up and will just fly away if they want to.   Q: Which insects live on trees? A: There are so many different kinds of insects that live in, on, and under trees that there is a whole branch (no pun intended!) of entomology called forest entomology that deals with these insects. In many old-growth forests (and the rain forests) one tree is an entire ecosystem  —  like a separate world. ×
i don't know
Which animals are arthropods and have eight legs?
All About Arthropods © Contributed by Leanne Guenther. What is an arthropod? You live with them almost every day, even in the very cold winter months! They are everywhere and are the largest animal phylum -- about 85% of all known animals in the world are part of this phylum. There are far more species of arthropods than there are species in all the other phylums (phyla) combined. [Photos: mosquito and grasshopper, Corel Web Gallery] They are spiders, insects, centipedes, mites, ticks, lobsters, crabs, shrimp, crayfish, krill, barnacles, scorpions and many, many others. [Photos: "Can you see two segments?" and "Can you see three segments?", Corel Web Gallery] The easiest way to tell an arthropod from any other animal is to see if they have: 1) A segmented body. This means that they will have a body made up of more than one part. Spiders have two segments and flies have three segments. 2) Many jointed legs or limbs. Spiders have 8 legs; millipedes can have... hundreds! [Photo: centipede, Corel Web Gallery] 3) An exoskeleton. This is an external skeleton. Like armor, it protects the arthropod's body. When arthropods are born the exoskeleton is soft but hardens quickly, and it can be shed as the creature grows. Arthropods are invertebrates, which means that they do not have a backbone. 4) Cold blooded. Arthropods are cold blooded -- which means their body temperature depends on the temperature of the environment surrounding them. Arthropods are some of the most interesting animals in the world! They fly, they creep, and they crawl. They live on land, in ponds and in the ocean. From ants to bumblebees, crabs to crayfish, spiders to centipedes -- which are your favorites!? Scientific stuff: Arthropods include eleven animal classes. Subphylum Chelicerata: Class Merostomata (horseshoe crabs, eurypterids), Class Pycnogonida (sea spiders), Class Arachnida (spiders, ticks, mites). Subphylum Crustacea: Class Branchiopoda (fairy shrimp, water fleas), Class Maxillopoda (ostracods, copepods, barnacles), Class Malacostraca (isopods, amphipods, krill, crabs, shrimp). Subphylum Uniramia
Arachnid
Which is the modern scientific unit of work and energy?
KLRU: Backyard Bugs. One defining characteristic is how many legs an animal has. Do you know the difference between an arachnid and an insect? Have you ever wondered how many legs spiders have? For answers to these questions and more, keep reading. 6 Legs or 8 Legs: Arthropods are a group of invertebrates (animals with no backbone) that have jointed legs, segmented bodies, and hard, protective coverings called exoskeletons. Arthropods include such animals as insects, spiders, ticks, centipedes, millipedes, crayfish, lobsters, mites, and scorpions. One class of arthropods is the arachnids, which include spiders, scorpions, and mites. Spiders have two main body parts and eight legs. Insects make up another class of arthropods. Insects are made up of three main body parts. The head is the first of an insect's three main body parts. It contains the antennae (sensory organs used to smell, taste, feel, and sometimes hear), compound eyes, which are made up of many tiny units, and the mouthparts, including the mandibles, or jaws, of an insect. The thorax is the middle of an insect's three body parts. The six legs and wings are attached to it. The abdomen is the last of an insect's three main body parts. Insects go through metamorphosis, which is the change of an insect (or other animal) from one form to another as it develops into an adult. Some insects go through a three-stage life cycle called incomplete or gradual metamorphosis: egg-nymph-adult. Others go through a four-stage complete metamorphosis: egg-larva-pupa-adult.
i don't know
Chlorine, fluorine and bromine belong to which family of elements?
Bromine, Chemical Element. Pronunciation: BRO-meen. Nearly 90 percent of all bromine produced comes from the United States, Israel, or the United Kingdom. In 1996, about 450,000,000 kilograms (one billion pounds) of the element were produced worldwide. The largest single use of the element is in the manufacture of flame retardants. Flame retardants are chemicals added to materials to prevent burning or to keep them from burning out of control. Other major uses are in the manufacture of drilling fluids, pesticides, chemicals for the purification of water, photographic chemicals, and as an additive to rubber. Discovery and naming: Compounds of bromine had been known for hundreds of years before the element was discovered. One of the most famous of these compounds was Tyrian purple, also called royal purple. (Tyrian comes from the word Tyre, an ancient Phoenician city.) Only very rich people or royalty could afford to buy fabric dyed with Tyrian purple. It was obtained from a mollusk (shell fish) found on the shores of the Mediterranean Sea (a large body of water bordered by Europe, Asia, and Africa). In 1825, Löwig enrolled at the University of Heidelberg in Germany to study chemistry. He continued an experiment he had begun at home in which he added chlorine to spring water. The addition of ether to that mixture produced a beautiful red color. Löwig suspected he had discovered a new kind of substance. A professor encouraged him by suggesting he study the substance in more detail. As these studies progressed, Balard published a report in a chemical journal that announced the discovery of the new element bromine. The element had all the properties of Löwig's new substance. The two chemists had made the discovery at nearly the same time! Balard, however, is credited as the discoverer of bromine, because scientists acknowledge the first person to publish his or her findings. In Greek, the word bromos means "stench" (strong, offensive odor). Bromine lives up to the description. The odor is intense and highly irritating to the eyes and lungs. Chemists found that bromine belonged in the halogen family. They knew that it had properties similar to other halogens and placed it below fluorine and chlorine in the periodic table. Physical properties: Only two liquid elements exist—bromine and mercury. At room temperature, bromine is a deep reddish-brown liquid. It evaporates easily, giving off strong fumes that irritate the throat and lungs. Bromine boils at 58.8°C (137.8°F), and its density is 3.1023 grams per cubic centimeter. Bromine freezes at -7.3°C (18.9°F). [Photo: A laboratory vessel holds the solid, liquid, and gas states of bromine.] Bromine dissolves well in organic liquids—such as ether, alcohol, and carbon tetrachloride—but only slightly in water. Organic compounds contain the element carbon. Chemical properties: Bromine is a very reactive element. While it is less reactive than fluorine or chlorine, it is more reactive than iodine. It reacts with many metals, sometimes very vigorously. For instance, with potassium, it reacts explosively. Bromine even combines with relatively unreactive metals, such as platinum and palladium. Occurrence in nature: Bromine is too reactive to exist as a free element in nature. Instead, it occurs in compounds, the most common of which are sodium bromide (NaBr) and potassium bromide (KBr). These compounds are found in seawater and underground salt beds.
These salt beds were formed in regions where oceans once covered the land. When the oceans evaporated (dried up), salts were left behind—primarily sodium chloride (NaCl), potassium chloride (KCl), and sodium and potassium bromide. Later, movements of the Earth's crust buried the salt deposits. Now they are buried miles underground. The salts are brought to the surface in much the same way that coal is mined. Bromine is a moderately abundant element. Its abundance in the Earth's crust is estimated to be about 1.6 to 2.4 parts per million. It is far more abundant in seawater, where it is estimated at about 65 parts per million. In some regions, the abundance of bromine is even higher. For example, the Dead Sea (which borders Israel and Jordan) has a high level of dissolved salts. The abundance of bromine there is estimated to be 4,000 parts per million. The salinity, or salt content, is so high that nothing lives in the water. This is why it is called the Dead Sea. Isotopes: Two naturally occurring isotopes of bromine exist, bromine-79 and bromine-81. Isotopes are two or more forms of an element. Isotopes differ from each other according to their mass number. The number written to the right of the element's name is the mass number. The mass number represents the number of protons plus neutrons in the nucleus of an atom of the element. The number of protons determines the element, but the number of neutrons in the atom of any one element can vary. Each variation is an isotope. At least 16 radioactive isotopes of bromine are known also. A radioactive isotope is one that breaks apart and gives off some form of radiation. Radioactive isotopes are produced when very small particles are fired at atoms. These particles stick in the atoms and make them radioactive. No isotope of bromine has any important commercial use. Extraction: The method used by Löwig and Balard to collect bromine continues to be used today. Chlorine is added to seawater containing sodium bromide or potassium bromide. Chlorine is more active than bromine and replaces bromine in the reaction: Cl2 + 2NaBr → 2NaCl + Br2. Uses: The most important use of bromine today is in making flame retardant materials. Many materials used in making clothing, carpets, curtains, and drapes are flammable, and if a flame touches them, they burn very quickly. Chemists have learned how to make materials more resistant to fires by soaking them in a bromine compound. The compound coats the fibers of the material. The bromine compound can also be chemically incorporated into the material. The bromine compounds used in flame retardants are often complicated. One such compound is called tris(dibromopropyl)phosphate ((Br2C3H5O)3PO). However, this compound has been found to be a carcinogen (cancer-causing substance). Its use, therefore, has been severely restricted. About 20 percent of all bromine is used in drilling wells. Calcium bromide (CaBr2), sodium bromide (NaBr), or zinc bromide (ZnBr2) are added to the well to increase the efficiency of the drilling process. Bromine is also important in the manufacture of pesticides, chemicals used to kill pests. Methyl bromide (CH3Br) has been used for years to treat crop lands. Methyl bromide is sprayed on the surface or injected directly into the ground. Some methyl bromide always evaporates into the air, where it damages the ozone layer.
Ozone (O3) gas filters out a portion of the ultraviolet (UV) radiation from the sun. UV radiation causes skin cancer, sunburn, and damage to plants and fragile organisms. Worldwide production of methyl bromide will end in 2001 because of its effect on the ozone layer. The United States plans to stop production of the compound even earlier. Farmers believe nothing works as well as methyl bromide in eliminating certain pests. They are concerned that crop production will suffer if methyl bromide is banned. Ethylene dibromide (C2H4Br2) is a bromide compound added to leaded gasoline. The lead in "leaded gasoline" is tetraethyl lead (Pb(C2H5)4). It helps fuels burn more cleanly and keeps car engines from "knocking." "Knocking" is a repetitive metallic banging sound that occurs when there are ignition problems with a car's engine. "Knocking" reduces the efficiency of a car engine. But leaded gasoline gives off free lead as it burns. Free lead is a very toxic element that causes damage to the nervous system. Ethylene dibromide is added to react with free lead and convert it to a safe compound. Ethylene dibromide does not completely solve the problem. Some free lead still escapes into the atmosphere. Leaded gasoline has been banned in the United States for many years but is still used in other countries. The most popular element for purifying public water supplies and swimming pools used to be chlorine. Bromine compounds have become more popular for their superior bacteria-killing power. Health effects: Bromine is toxic if inhaled or swallowed. It can damage the respiratory system and the digestive system, and can even cause death. It can also cause damage if spilled on the skin.
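A worked example of the mass-number relationship described in the isotope passage above; the atomic number of bromine, Z = 35, is a standard fact not stated in the source.

```latex
% Neutron count N is the mass number A minus the atomic number Z.
\[
^{79}\mathrm{Br}: \; N = 79 - 35 = 44 \text{ neutrons}, \qquad
^{81}\mathrm{Br}: \; N = 81 - 35 = 46 \text{ neutrons}.
\]
```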
Halogen
Which was the first antibiotic to be discovered?
Metals and Halogens Reactions Essay (related essay excerpts):
Halogen Reactions Essay: Purpose: One of the properties explored in this experiment will be electronegativity, by the use of halides and halogens. Electronegativity is the ability of bonded atoms to attract electrons toward themselves. Other concepts that will be illustrated during this experiment are the reactions of halogens, the polarity of bonds and of molecules, and solubility. A molecule's solubility is dependent on its polarity. Thus, the concept of electronegativity allows...
The Halogens Essay: The halogens can be found on the left-hand side of the noble gases. These five toxic, non-metallic elements make up Group 17 of the periodic table and consist of fluorine (F), chlorine (Cl), bromine (Br), iodine (I), and astatine (At). Although astatine is radioactive and only has short-lived isotopes, it behaves similarly to iodine and is often included in the halogen group. Since the halogen elements have seven valence electrons,...
Essay about Metals and Non-metals Characteristics: Metals: good conductors of heat and electricity; have shining luster; malleable (this means that they can be hammered or distorted); ductile (this means that they can be drawn into wires); most have high melting and boiling points; are sonorous (give out sound when beaten); usually solid at room temperature. An exception to this is mercury, which is liquid in nature. Examples: aluminum, gold, copper, silver,...
Formal Lab Report, Rates of Reaction of Alkali Metals and Alkaline Earth Metals Essay: Abstract: The rates of reaction of alkali metals and alkaline earth metals are compared in this lab. The pH of each of the resulting metal solutions is tested, and the products of the reaction between calcium and water are discovered. The tested elements are sodium, lithium, potassium and calcium, and each of them were placed in a beaker filled with water. The resulting solutions' pH levels were tested with litmus paper. There were more...
Metals Essay: Chemistry - Module 2 - Metal. 1. Metals have been extracted and used for many thousands of years. Outline and examine some uses of different metals through history, including contemporary uses, as uncombined metals or as alloys. Contemporary uses of common metals: iron and steel (an alloy with <2% carbon) have good tensile strength, are cheap, and rust (corrode); uses include railways, bridges,...
Metals Essay: Physical properties of metals versus non-metals (electrical conductivity, heat conductivity, melting and boiling points, malleability and ductility, lustre): metals are good conductors of electricity and heat, have high melting and boiling points, are malleable and ductile, and are shiny; non-metals are poor conductors, have low melting and boiling points, are brittle, and are dull. Chemical properties of metals versus non-...
Reaction Essay: Economy - overview: Philippine GDP grew 7.6% in 2010, spurred by consumer demand, a rebound in exports and investments, and election-related spending, before cooling to 3.7% in 2011. The economy weathered the 2008-09 global recession better than its regional peers due to minimal exposure to troubled international securities, lower dependence on exports, relatively resilient domestic consumption, large remittances from four- to five-million overseas Filipino workers, and a growing business...
Reaction Essay: Communist Party of the Philippines (CPP), and the Muslim separatist movement of the Moro National Liberation Front (MNLF). One of his first actions was to arrest opposition politicians in Congress and the Constitutional Convention. Initial public reaction to martial law was mostly favourable except in Muslim areas of the south, where a separatist rebellion, led by the MNLF, broke out in 1973. Despite half-hearted attempts to negotiate a cease-fire, the rebellion continued to...
i don't know
What is the boiling point of water?
What Is the Boiling Point of Water? The boiling point of water is 100 degrees Celsius or 212 degrees Fahrenheit at 1 atmosphere of pressure (sea level). Updated July 21, 2016. Question: At what temperature does water boil? What determines the boiling point of water? Here's the answer to this common question. Answer: The boiling point of water is 100°C or 212°F at 1 atmosphere of pressure (sea level). However, the value is not a constant. The boiling point of water depends on the atmospheric pressure, which changes according to elevation. Water boils at a lower temperature as you gain altitude (e.g., on a mountain) and boils at a higher temperature if you increase atmospheric pressure (e.g., living below sea level). The boiling point of water also depends on the purity of the water. Water which contains impurities (such as salted water) boils at a higher temperature than pure water. This phenomenon is called boiling point elevation, which is one of the colligative properties of matter.
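To make the altitude dependence concrete, here is a rough estimate using the integrated Clausius-Clapeyron relation. This is a back-of-the-envelope sketch, not from the source: the 0.83 atm figure is an assumed pressure roughly corresponding to Denver's elevation, and the heat of vaporization is treated as a constant 40.7 kJ/mol.

```latex
% Integrated Clausius-Clapeyron relation between two (P, T) points on the
% liquid-vapor coexistence curve, with the heat of vaporization held constant:
\[
\frac{1}{T_2} = \frac{1}{T_1} - \frac{R}{\Delta H_{\mathrm{vap}}}\,\ln\frac{P_2}{P_1}
= \frac{1}{373.15\ \mathrm{K}} - \frac{8.314}{40{,}700}\,\ln(0.83)
\approx 2.718\times 10^{-3}\ \mathrm{K}^{-1},
\]
\[
T_2 \approx 368\ \mathrm{K} \approx 95\,^{\circ}\mathrm{C},
\]
% i.e., water boils roughly five degrees Celsius lower at that reduced pressure.
```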
100 degrees celsius
Ascorbic acid is which vitamin?
Q & A: Boiling and Freezing Points of Pure and Salty Water | Department of Physics | University of Illinois at Urbana-Champaign. What is the boiling and freezing point of both fresh and saltwater? - Maria (age 13), Congress Middle, Lake Worth, FL. A: Hi Maria, For pure water, the boiling point is 100 degrees Celsius (212 Fahrenheit) at one atmosphere of pressure, and the melting point is 0 degrees Celsius (32 degrees Fahrenheit) at one atmosphere of pressure. At lower pressures (at high altitudes, for example, in Denver, Colorado), the boiling point will be perhaps a couple of degrees lower. For saltwater, the boiling point is raised, and the melting point is lowered. By how much depends on how much salt there is. I'll assume the salt is sodium chloride, NaCl (table salt). The melting point is lowered by 1.85 degrees Celsius if 29.2 grams of salt are dissolved in each kg of water (called a "0.5 molal solution" of salt; the Na and Cl dissociate right away when dissolved, and so for a 0.5 molal solution of salt, there is a 1.0 molal concentration of ions). The boiling point is raised by 0.5 degrees Celsius for water with 29.2 grams of salt dissolved in each kg of water. If your concentrations of salt are different, then you can scale the boiling point elevation and melting point depression predictions directly with the concentration. These numbers come from the CRC Handbook of Chemistry and Physics. - Tom (Naestved, Denmark). A: You're basically right. Either way you start out with plain water and salt at room temperature. The noodles go in when you have salty water at its boiling point. The energy difference between that and the starting point doesn't depend on when you add the salt. A little energy is lost to evaporation on the way, however. If you wait until the water is boiling, then add salt, then boil again, the water is spending a little more time near boiling than it would if you add the salt first. So you lose a little more energy. It's marginally more efficient to add the salt first. Mike W. - (Overland Park, KS, USA). A: I use that sort of humidifier. Adding salt does work. The heating comes from the electrical current flowing through the water. Tap water has a pretty low conductivity, so not much current flows. Adding salt raises the conductivity, since the ions are electrically charged. You actually have to be a bit careful not to add too much salt, since you don't want to blow a fuse. The effects on the boiling point are very minor compared to the effects on the conductivity. BTW, although these humidifiers are cheap they do have the nice advantage that since they output water vapor, not drops, you don't have to worry about bacteria etc. getting sprayed into the air. Mike W.
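The quoted shifts follow from the standard colligative-property formulas, with van 't Hoff factor i = 2 for fully dissociated NaCl and molality m = 0.5 mol/kg (29.2 g of NaCl per kg of water). The constants Kf = 1.86 °C·kg/mol and Kb = 0.512 °C·kg/mol for water come from standard tables, not from the passage itself.

```latex
% Freezing-point depression and boiling-point elevation for 0.5 molal NaCl in water.
\[
\Delta T_f = i\,K_f\,m = 2 \times 1.86 \times 0.5 \approx 1.86\,^{\circ}\mathrm{C}
\quad (\text{the answer above quotes } 1.85\,^{\circ}\mathrm{C}),
\]
\[
\Delta T_b = i\,K_b\,m = 2 \times 0.512 \times 0.5 \approx 0.51\,^{\circ}\mathrm{C}
\quad (\text{the answer above quotes } 0.5\,^{\circ}\mathrm{C}).
\]
```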
i don't know
What is the generic term for the mechanical, electrical and electronic components of a computer?
Electro-mechanical Technicians: Occupational Outlook Handbook, U.S. Bureau of Labor Statistics
What Electro-mechanical Technicians Do
Electro-mechanical technicians install, repair, upgrade, and test electronic and computer-controlled mechanical systems. They combine knowledge of mechanical technology with knowledge of electrical and electronic circuits, and they operate, test, and maintain unmanned, automated, robotic, or electromechanical equipment.
Duties: Electro-mechanical technicians typically do the following: read blueprints, schematics, and diagrams to determine the method and sequence of assembly of a part, machine, or piece of equipment; verify dimensions of parts, using precision measuring instruments, to ensure that specifications are met; operate metalworking machines to make housings, fittings, and fixtures; inspect parts for surface defects; repair and calibrate hydraulic and pneumatic assemblies; test the performance of electro-mechanical assemblies, using test instruments; install electronic parts and hardware, using soldering equipment and hand tools; operate, test, or maintain robotic equipment; and analyze and record test results and prepare written documentation.
Electro-mechanical technicians test and operate machines in factories and other worksites. They also analyze and record test results and prepare written documentation describing the tests they performed and the results. They install, maintain, and repair automated machinery and equipment in industrial settings. This kind of work requires knowledge and training in the application of photonics, the science of light; the technological aspects of the work involve generating, controlling, and detecting light waves so that automated processes can proceed as designed by the engineers. Electro-mechanical technicians also test, operate, or maintain robotic equipment at worksites. This equipment may include unmanned submarines, aircraft, or similar types of equipment used for oil drilling, deep-ocean exploration, or hazardous-waste removal.
Work Environment
Electro-mechanical technicians held about 14,700 jobs in 2014. The industries that employed the most electro-mechanical technicians were navigational, measuring, electromedical, and control instruments manufacturing (13%) and machinery manufacturing (7%). Electro-mechanical technicians work closely with electrical and mechanical engineers. They work in many industrial environments, including energy, plastics, computer, and communications equipment manufacturing, and aerospace, and they often work both at production sites and in offices. Because their job involves manual work with many machines and types of equipment, electro-mechanical technicians are sometimes exposed to hazards from equipment or toxic materials; however, incidents are rare as long as they follow proper safety procedures.
Work Schedules: Electro-mechanical technicians often work for larger companies in manufacturing or for engineering firms. Like others at these firms, these technicians tend to work regular shifts.
However, sometimes they must work longer hours to make repairs so that manufacturing operations can continue.
How to Become an Electro-mechanical Technician
Electro-mechanical technicians typically need either an associate's degree or a postsecondary certificate.
Education: Associate's degree programs and postsecondary certificates for electro-mechanical technicians are offered at vocational–technical schools and community colleges. Vocational–technical schools include postsecondary public institutions that serve local students and emphasize teaching the skills needed by local employers. Community colleges offer programs similar to those in technical institutes, but they may include more theory-based and liberal arts coursework. ABET accredits associate's and higher degree programs; most associate's degree programs accredited by ABET include at least college algebra and trigonometry, as well as basic science courses, and ABET-accredited programs offer training in engineering technology specialties. In community college programs, prospective electro-mechanical technicians can concentrate in fields such as electro-mechanics, computer-integrated manufacturing, and mechatronics. Earning an associate's degree in electronic or mechanical technology facilitates entry into bachelor's degree programs in electrical engineering and mechanical engineering. For more information, see the profiles on electrical and electronics engineers and mechanical engineers. Training in mechatronics provides an understanding of four key systems on which this occupation works: mechanical systems, electronic systems, control systems, and computer systems.
Important Qualities: Detail oriented. Electro-mechanical technicians must make and keep the precise, accurate measurements that mechanical engineers need. Dexterity. Electro-mechanical technicians must be able to use hand tools and soldering irons on small circuitry and electronic parts to create detailed electronic components by hand. Interpersonal skills. Electro-mechanical technicians must be able to take instruction and offer advice when needed; in addition, they often need to coordinate their work with that of others. Logical-thinking skills. To carry out engineers' designs, inspect designs for quality control, and assemble prototypes, electro-mechanical technicians must be able to read instructions and follow a logical sequence or a specific set of rules. Math skills. Electro-mechanical technicians use mathematics for analysis, design, and troubleshooting in their work. Mechanical skills. Electro-mechanical technicians must be able to apply the theory and instructions of engineers by creating or building new components for industrial machinery or equipment, and they must be adept at operating machinery, including drill presses, grinders, and engine lathes. Writing skills. Electro-mechanical technicians must write reports that cover onsite construction, the results of testing, or problems they find when carrying out designs; their writing must be clear and well organized so that the engineers they work with can understand the reports.
Licenses, Certifications, and Registrations: Electro-mechanical technicians can gain certification as a way to demonstrate professional competence. The International Society of Automation offers certification as a Certified Control Systems Technician.
This requires, at a minimum, 5 years of experience on the job, or 3 years of work experience if the technician has completed 2 years of postsecondary education.
Job Outlook: Employment of electro-mechanical technicians is projected to show little or no change from 2014 to 2024. Many of these technicians are employed in manufacturing industries that are projected to experience employment declines. Electro-mechanical technicians are generalists in technology, and their broad skill set will help sustain employment, especially as their skills in working with machines wired to computer control systems grow in importance in the manufacturing sector. There should be demand for electro-mechanical technicians as demand increases for engineers to design and build new equipment in various fields, and employers will likely seek out technicians with knowledge of photonics to help implement and maintain automated processes. Increasing adoption of renewable energies, such as solar power and wind turbines, may also contribute to increased demand for electro-mechanical technicians.
State and area data on employment and wages for this occupation are available from the BLS Occupational Employment Statistics (OES) program, from state employment projections at www.projectionscentral.com, and from America's Career InfoNet. (Source: U.S. Bureau of Labor Statistics, Employment Projections program.)
Hardware
Whose research on X-ray diffraction of DNA crystals helped Crick and Watson during the race to discover the structure of DNA?
Hardware | Define Hardware at Dictionary.com
hardware: 1. metalware, as tools, locks, hinges, or cutlery. 2. the mechanical equipment necessary for conducting an activity, usually distinguished from the theory and design that make the activity possible. 3. military weapons and combat equipment. 4. Slang. a weapon carried on one's person: The rougher types were asked to check their hardware at the door. 5. Computers. the mechanical, magnetic, electronic, and electrical devices comprising a computer system, as the CPU, disk drives, keyboard, or screen. Word origin: 1505–15; 1955–60 for def 5; hard + ware. (Dictionary.com Unabridged)
Examples from the web for hardware: "When the blade broke, instead of calling it a day, she drove to the hardware store and bought a new one." (Commercial Geography, Jacques W. Redway) "The hardware man seemed at that moment to Mr Twitter the hardest-ware man that ever confronted him." (Scattergood Baines, Clarence Budington Kelland)
British Dictionary definitions for hardware: 1. metal tools, implements, etc, esp cutlery or cooking utensils. 2. (computing) the physical equipment used in a computer system, such as the central processing unit, peripheral devices, and memory; compare software. 3. heavy military equipment, such as tanks and missiles or their parts. 5. (informal) a gun or guns collectively. (Collins English Dictionary, Complete & Unabridged 2012 Digital Edition)
Word origin and history for hardware: mid-15c., "small metal goods," from hard + ware (n.). In the sense of "physical components of a computer" it dates from 1947. "Hardware store" attested by 1789. (Online Etymology Dictionary, Douglas Harper)
hardware (härd'wâr'): A computer, its components, and its related equipment. Hardware includes disk drives, integrated circuits, display screens, cables, modems, speakers, and printers. Compare software. (The American Heritage Science Dictionary)
hardware definition: The physical machinery and devices that make up a computer system. It is contrasted to software, the programs and instructions used to run the system. (The American Heritage New Dictionary of Cultural Literacy, Third Edition)
Slang definitions and phrases for hardware: Weapons and other war matériel: military "hardware," tanks, planes, guns, rockets, weapons (1865+). Military insignia or medals worn on a uniform (WWII armed forces). Badges and other identification jewelry (1930s+). (The Dictionary of American Slang, Fourth Edition, Barbara Ann Kipfer and Robert L. Chapman)
hardware: The physical, touchable, material parts of a computer or other system. The term is used to distinguish these fixed parts of a system from the more changeable software or data components which it executes, stores, or carries. Computer hardware typically consists chiefly of electronic devices (CPU, memory, display) with some electromechanical parts (keyboard, printer, disk drives, tape drives, loudspeakers) for input, output, and storage, though completely non-electronic (mechanical, electromechanical, hydraulic, biological) computers have also been conceived of and built.
i don't know
Heisenberg is most associated with which branch of physics?
The Uncertainty Principle (Stanford Encyclopedia of Philosophy). First published Mon Oct 8, 2001; substantive revision Tue Jul 12, 2016. Quantum mechanics is generally regarded as the physical theory that is our best candidate for a fundamental and universal description of the physical world. The conceptual framework employed by this theory differs drastically from that of classical physics. Indeed, the transition from classical to quantum physics marks a genuine revolution in our understanding of the physical world. One striking aspect of the difference between classical and quantum physics is that whereas classical mechanics presupposes that exact simultaneous values can be assigned to all physical quantities, quantum mechanics denies this possibility, the prime example being the position and momentum of a particle. According to quantum mechanics, the more precisely the position (momentum) of a particle is given, the less precisely can one say what its momentum (position) is. This is (a simplistic and preliminary formulation of) the quantum mechanical uncertainty principle for position and momentum. The uncertainty principle played an important role in many discussions on the philosophical implications of quantum mechanics, in particular in discussions on the consistency of the so-called Copenhagen interpretation, the interpretation endorsed by the founding fathers Heisenberg and Bohr. This should not suggest that the uncertainty principle is the only aspect of the conceptual difference between classical and quantum physics: the implications of quantum mechanics for notions such as (non)-locality, entanglement and identity play no less havoc with classical intuitions. 1. Introduction The uncertainty principle is certainly one of the most famous aspects of quantum mechanics. It has often been regarded as the most distinctive feature in which quantum mechanics differs from classical theories of the physical world. Roughly speaking, the uncertainty principle (for position and momentum) states that one cannot assign exact simultaneous values to the position and momentum of a physical system. Rather, these quantities can only be determined with some characteristic “uncertainties” that cannot become arbitrarily small simultaneously. But what is the exact meaning of this principle, and indeed, is it really a principle of quantum mechanics? (In his original work, Heisenberg only speaks of uncertainty relations.) And, in particular, what does it mean to say that a quantity is determined only up to some uncertainty? These are the main questions we will explore in the following, focusing on the views of Heisenberg and Bohr. The notion of “uncertainty” occurs in several different meanings in the physical literature. It may refer to a lack of knowledge of a quantity by an observer, or to the experimental inaccuracy with which a quantity is measured, or to some ambiguity in the definition of a quantity, or to a statistical spread in an ensemble of similarly prepared systems. Also, several different names are used for such uncertainties: inaccuracy, spread, imprecision, indefiniteness, indeterminateness, indeterminacy, latitude, etc. As we shall see, even Heisenberg and Bohr did not decide on a single terminology for quantum mechanical uncertainties.
Forestalling a discussion about which name is the most appropriate one in quantum mechanics, we use the name “uncertainty principle” simply because it is the most common one in the literature. 2. Heisenberg 2.1 Heisenberg’s road to the uncertainty relations Heisenberg introduced his famous relations in an article of 1927, entitled Ueber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. A (partial) translation of this title is: “On the anschaulich content of quantum theoretical kinematics and mechanics”. Here, the term anschaulich is particularly notable. Apparently, it is one of those German words that defy an unambiguous translation into other languages. Heisenberg’s title is translated as “On the physical content …” by Wheeler and Zurek (1983). His collected works (Heisenberg 1984) translate it as “On the perceptible content …”, while Cassidy’s biography of Heisenberg (Cassidy 1992), refers to the paper as “On the perceptual content …”. Literally, the closest translation of the term anschaulich is “visualizable”. But, as in most languages, words that make reference to vision are not always intended literally. Seeing is widely used as a metaphor for understanding, especially for immediate understanding. Hence, anschaulich also means “intelligible” or “intuitive”.[ 1 ] Why was this issue of the Anschaulichkeit of quantum mechanics such a prominent concern to Heisenberg? This question has already been considered by a number of commentators (Jammer 1974; Miller 1982; de Regt 1997; Beller 1999). For the answer, it turns out, we must go back a little in time. In 1925 Heisenberg had developed the first coherent mathematical formalism for quantum theory (Heisenberg 1925). His leading idea was that only those quantities that are in principle observable should play a role in the theory, and that all attempts to form a picture of what goes on inside the atom should be avoided. In atomic physics the observational data were obtained from spectroscopy and associated with atomic transitions. Thus, Heisenberg was led to consider the “transition quantities” as the basic ingredients of the theory. Max Born, later that year, realized that the transition quantities obeyed the rules of matrix calculus, a branch of mathematics that was not so well-known then as it is now. In a famous series of papers Heisenberg, Born and Jordan developed this idea into the matrix mechanics version of quantum theory. Formally, matrix mechanics remains close to classical mechanics. The central idea is that all physical quantities must be represented by infinite self-adjoint matrices (later identified with operators on a Hilbert space). It is postulated that the matrices \(\bQ\) and \(\bP\) representing the canonical position and momentum variables of a particle satisfy the so-called canonical commutation rule \[\tag{1} \bQ\bP - \bP\bQ = i\hslash\] where \(\hslash = h/2\pi\), \(h\) denotes Planck’s constant, and boldface type is used to represent matrices (or operators). The new theory scored spectacular empirical success by encompassing nearly all spectroscopic data known at the time, especially after the concept of the electron spin was included in the theoretical framework. It came as a big surprise, therefore, when one year later, Erwin Schrödinger presented an alternative theory, that became known as wave mechanics. Schrödinger assumed that an electron in an atom could be represented as an oscillating charge cloud, evolving continuously in space and time according to a wave equation. 
The discrete frequencies in the atomic spectra were not due to discontinuous transitions (quantum jumps) as in matrix mechanics, but to a resonance phenomenon. Schrödinger also showed that the two theories were equivalent.[ 2 ] Even so, the two approaches differed greatly in interpretation and spirit. Whereas Heisenberg eschewed the use of visualizable pictures, and accepted discontinuous transitions as a primitive notion, Schrödinger claimed as an advantage of his theory that it was anschaulich. In Schrödinger’s vocabulary, this meant that the theory represented the observational data by means of continuously evolving causal processes in space and time. He considered this condition of Anschaulichkeit to be an essential requirement on any acceptable physical theory. Schrödinger was not alone in appreciating this aspect of his theory. Many other leading physicists were attracted to wave mechanics for the same reason. For a while, in 1926, before it emerged that wave mechanics had serious problems of its own, Schrödinger’s approach seemed to gather more support in the physics community than matrix mechanics. Understandably, Heisenberg was unhappy about this development. In a letter of 8 June 1926 to Pauli he confessed that “The more I think about the physical part of Schrödinger’s theory, the more disgusting I find it”, and: “What Schrödinger writes about the Anschaulichkeit of his theory, … I consider Mist” (Pauli 1979: 328). Again, this last German term is translated differently by various commentators: as “junk” (Miller 1982) “rubbish” (Beller 1999) “crap” (Cassidy 1992), “poppycock” (Bacciagaluppi & Valentini 2009) and perhaps more literally, as “bullshit” (Moore 1989; de Regt 1997). Nevertheless, in published writings, Heisenberg voiced a more balanced opinion. In a paper in Die Naturwissenschaften (1926) he summarized the peculiar situation that the simultaneous development of two competing theories had brought about. Although he argued that Schrödinger’s interpretation was untenable, he admitted that matrix mechanics did not provide the Anschaulichkeit which made wave mechanics so attractive. He concluded: to obtain a contradiction-free anschaulich interpretation, we still lack some essential feature in our image of the structure of matter. The purpose of his 1927 paper was to provide exactly this lacking feature. 2.2 Heisenberg’s argument Let us now look at the argument that led Heisenberg to his uncertainty relations. He started by redefining the notion of Anschaulichkeit. Whereas Schrödinger associated this term with the provision of a causal space-time picture of the phenomena, Heisenberg, by contrast, declared: We believe we have gained anschaulich understanding of a physical theory, if in all simple cases, we can grasp the experimental consequences qualitatively and see that the theory does not lead to any contradictions. Heisenberg 1927: 172) His goal was, of course, to show that, in this new sense of the word, matrix mechanics could lay the same claim to Anschaulichkeit as wave mechanics. To do this, he adopted an operational assumption: terms like “the position of a particle” have meaning only if one specifies a suitable experiment by which “the position of a particle” can be measured. We will call this assumption the “measurement=meaning principle”. In general, there is no lack of such experiments, even in the domain of atomic physics. However, experiments are never completely accurate. 
We should be prepared to accept, therefore, that in general the meaning of these quantities is also determined only up to some characteristic inaccuracy. As an example, he considered the measurement of the position of an electron by a microscope. The accuracy of such a measurement is limited by the wave length of the light illuminating the electron. Thus, it is possible, in principle, to make such a position measurement as accurate as one wishes, by using light of a very short wave length, e.g., \(\gamma\)-rays. But for \(\gamma\)-rays, the Compton effect cannot be ignored: the interaction of the electron and the illuminating light should then be considered as a collision of at least one photon with the electron. In such a collision, the electron suffers a recoil which disturbs its momentum. Moreover, the shorter the wave length, the larger is this change in momentum. Thus, at the moment when the position of the particle is accurately known, Heisenberg argued, its momentum cannot be accurately known: At the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed, i.e., the more exact the determination of the position. At the instant at which the position of the electron is known, its momentum therefore can be known only up to magnitudes which correspond to that discontinuous change; thus, the more precisely the position is determined, the less precisely the momentum is known, and conversely. (Heisenberg 1927: 174–5) This is the first formulation of the uncertainty principle. In its present form it is an epistemological principle, since it limits what we can know about the electron. From “elementary formulae of the Compton effect” Heisenberg estimated the “imprecisions” to be of the order \[\tag{2} \delta p\delta q \sim h\] He continued: “In this circumstance we see the direct anschaulich content of the relation \(\boldsymbol{QP} - \boldsymbol{PQ} = i\hslash\).” He went on to consider other experiments, designed to measure other physical quantities and obtained analogous relations for time and energy: \[\tag{3} \delta t \delta E \sim h\] and action \(J\) and angle \(w\) \[\tag{4} \delta w \delta J \sim h\] which he saw as corresponding to the “well-known” relations \[\tag{5} \boldsymbol{tE} - \boldsymbol{Et} = i\hslash \text{ or } \boldsymbol{wJ} - \boldsymbol{Jw} = i\hslash\] However, these generalisations are not as straightforward as Heisenberg suggested. In particular, the status of the time variable in his several illustrations of relation (3) is not at all clear (Hilgevoord 2005; see also Section 2.5 ). Heisenberg summarized his findings in a general conclusion: all concepts used in classical mechanics are also well-defined in the realm of atomic processes. But, as a pure fact of experience (rein erfahrungsgemäß), experiments that serve to provide such a definition for one quantity are subject to particular indeterminacies, obeying relations (2) – (4) which prohibit them from providing a simultaneous definition of two canonically conjugate quantities. Note that in this formulation the emphasis has slightly shifted: he now speaks of a limit on the definition of concepts, i.e., not merely on what we can know, but what we can meaningfully say about a particle. 
Of course, this stronger formulation follows by application of the above measurement=meaning principle: if there are, as Heisenberg claims, no experiments that allow a simultaneous precise measurement of two conjugate quantities, then these quantities are also not simultaneously well-defined. Heisenberg’s paper has an interesting “Addition in proof” mentioning critical remarks by Bohr, who saw the paper only after it had been sent to the publisher. Among other things, Bohr pointed out that in the microscope experiment it is not the change of the momentum of the electron that is important, but rather the circumstance that this change cannot be precisely determined in the same experiment. An improved version of the argument, responding to this objection, is given in Heisenberg’s Chicago lectures of 1930. Here (Heisenberg 1930: 16), it is assumed that the electron is illuminated by light of wavelength \(\lambda\) and that the scattered light enters a microscope with aperture angle \(\varepsilon\). According to the laws of classical optics, the accuracy of the microscope depends on both the wave length and the aperture angle; Abbe’s criterium for its “resolving power”, i.e., the size of the smallest discernable details, gives \[\tag{6} \delta q \sim \frac{\lambda}{\sin \varepsilon}.\] On the other hand, the direction of a scattered photon, when it enters the microscope, is unknown within the angle \(\varepsilon\), rendering the momentum change of the electron uncertain by an amount \[\tag{7} \delta p \sim \frac{h \sin \varepsilon}{\lambda}\] leading again to the result (2) . Let us now analyse Heisenberg’s argument in more detail. Note that, even in this improved version, Heisenberg’s argument is incomplete. According to Heisenberg’s “measurement=meaning principle”, one must also specify, in the given context, what the meaning is of the phrase “momentum of the electron”, in order to make sense of the claim that this momentum is changed by the position measurement. A solution to this problem can again be found in the Chicago lectures (Heisenberg 1930: 15). Here, he assumes that initially the momentum of the electron is precisely known, e.g., it has been measured in a previous experiment with an inaccuracy \(\delta p_{i}\), which may be arbitrarily small. Then, its position is measured with inaccuracy \(\delta q\), and after this, its final momentum is measured with an inaccuracy \(\delta p_{f}\). All three measurements can be performed with arbitrary precision. Thus, the three quantities \(\delta p_{i}, \delta q\), and \(\delta p_{f}\) can be made as small as one wishes. If we assume further that the initial momentum has not changed until the position measurement, we can speak of a definite momentum until the time of the position measurement. Moreover we can give operational meaning to the idea that the momentum is changed during the position measurement: the outcome of the second momentum measurement (say \(p_{f}\) will generally differ from the initial value \(p_{i}\). In fact, one can also show that this change is discontinuous, by varying the time between the three measurements. Let us try to see, adopting this more elaborate set-up, if we can complete Heisenberg’s argument. We have now been able to give empirical meaning to the “change of momentum” of the electron, \(p_{f} - p_{i}\). 
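As an illustrative aside (not part of the encyclopedia entry), relations (6) and (7) can be tried out numerically: however one chooses the wavelength and aperture, sharpening the resolving power δq coarsens the momentum disturbance δp, and their product stays of order h. The wavelength and aperture values below are arbitrary choices for illustration.

```python
import math

H_PLANCK = 6.626e-34  # J*s

def microscope_uncertainties(wavelength_m, aperture_deg):
    """Order-of-magnitude estimates from relations (6) and (7) above."""
    eps = math.radians(aperture_deg)
    dq = wavelength_m / math.sin(eps)             # Abbe resolving power
    dp = H_PLANCK * math.sin(eps) / wavelength_m  # momentum-kick uncertainty
    return dq, dp

# Gamma-ray illumination (~1 pm) versus visible light (~500 nm), both with a
# 30-degree aperture: dq shrinks as dp grows, but dq*dp remains of order h.
for lam in (1e-12, 5e-7):
    dq, dp = microscope_uncertainties(lam, 30.0)
    print(f"lambda = {lam:.0e} m: dq ~ {dq:.1e} m, dp ~ {dp:.1e} kg m/s, "
          f"dq*dp ~ {dq*dp:.1e} J*s")
```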
Heisenberg’s argument claims that the order of magnitude of this change is at least inversely proportional to the inaccuracy of the position measurement: \[\tag{8} \abs{p_{f} - p_{i}} \delta q \sim h\] However, can we now draw the conclusion that the momentum is only imprecisely defined? Certainly not. Before the position measurement, its value was \(p_{i}\), after the measurement it is \(p_{f}\). One might, perhaps, claim that the value at the very instant of the position measurement is not yet defined, but we could simply settle this by a convention, e.g., we might assign the mean value \((p_{i} + p_{f})/2\) to the momentum at this instant. But then, the momentum is precisely determined at all instants, and Heisenberg’s formulation of the uncertainty principle no longer follows. The above attempt of completing Heisenberg’s argument thus overshoots its mark. A solution to this problem can again be found in the Chicago Lectures. Heisenberg admits that position and momentum can be known exactly. He writes: If the velocity of the electron is at first known, and the position then exactly measured, the position of the electron for times previous to the position measurement may be calculated. For these past times, \(\delta p\delta q\) is smaller than the usual bound. (Heisenberg 1930: 15) Indeed, Heisenberg says: “the uncertainty relation does not hold for the past”. Apparently, when Heisenberg refers to the uncertainty or imprecision of a quantity, he means that the value of this quantity cannot be given beforehand. In the sequence of measurements we have considered above, the uncertainty in the momentum after the measurement of position has occurred, refers to the idea that the value of the momentum is not fixed just before the final momentum measurement takes place. Once this measurement is performed, and reveals a value \(p_{f}\), the uncertainty relation no longer holds; these values then belong to the past. Clearly, then, Heisenberg is concerned with unpredictability: the point is not that the momentum of a particle changes, due to a position measurement, but rather that it changes by an unpredictable amount. It is, however always possible to measure, and hence define, the size of this change in a subsequent measurement of the final momentum with arbitrary precision. Although Heisenberg admits that we can consistently attribute values of momentum and position to an electron in the past, he sees little merit in such talk. He points out that these values can never be used as initial conditions in a prediction about the future behavior of the electron, or subjected to experimental verification. Whether or not we grant them physical reality is, as he puts it, a matter of personal taste. Heisenberg’s own taste is, of course, to deny their physical reality. For example, he writes, I believe that one can formulate the emergence of the classical “path” of a particle succinctly as follows: the “path” comes into being only because we observe it. (Heisenberg 1927: 185) Apparently, in his view, a measurement does not only serve to give meaning to a quantity, it creates a particular value for this quantity. This may be called the “measurement=creation” principle. It is an ontological principle, for it states what is physically real. This then leads to the following picture. First we measure the momentum of the electron very accurately. By “measurement= meaning”, this entails that the term “the momentum of the particle” is now well-defined. 
Moreover, by the “measurement=creation” principle, we may say that this momentum is physically real. Next, the position is measured with inaccuracy \(\delta q\). At this instant, the position of the particle becomes well-defined and, again, one can regard this as a physically real attribute of the particle. However, the momentum has now changed by an amount that is unpredictable by an order of magnitude \(\abs{p_{f} - p_{i}} \sim h/\delta q\). The meaning and validity of this claim can be verified by a subsequent momentum measurement. The question is then what status we shall assign to the momentum of the electron just before its final measurement. Is it real? According to Heisenberg it is not. Before the final measurement, the best we can attribute to the electron is some unsharp, or fuzzy momentum. These terms are meant here in an ontological sense, characterizing a real attribute of the electron. 2.3 The interpretation of Heisenberg’s uncertainty relations Heisenberg’s relations were soon considered to be a cornerstone of the Copenhagen interpretation of quantum mechanics. Just a few months later, Kennard (1927) already called them the “essential core” of the new theory. Taken together with Heisenberg’s contention that they provide the intuitive content of the theory and their prominent role in later discussions on the Copenhagen interpretation, a dominant view emerged in which the uncertainty relations were regarded as a fundamental principle of the theory. The interpretation of these relations has often been debated. Do Heisenberg’s relations express restrictions on the experiments we can perform on quantum systems, and, therefore, restrictions on the information we can gather about such systems; or do they express restrictions on the meaning of the concepts we use to describe quantum systems? Or else, are they restrictions of an ontological nature, i.e., do they assert that a quantum system simply does not possess a definite value for its position and momentum at the same time? The difference between these interpretations is partly reflected in the various names by which the relations are known, e.g., as “inaccuracy relations”, or: “uncertainty”, “indeterminacy” or “unsharpness relations”. The debate between these views has been addressed by many authors, but it has never been settled completely. Let it suffice here to make only two general observations. First, it is clear that in Heisenberg’s own view all the above questions stand or fall together. Indeed, we have seen that he adopted an operational “measurement=meaning” principle according to which the meaningfulness of a physical quantity was equivalent to the existence of an experiment purporting to measure that quantity. Similarly, his “measurement=creation” principle allowed him to attribute physical reality to such quantities. Hence, Heisenberg’s discussions moved rather freely and quickly from talk about experimental inaccuracies to epistemological or ontological issues and back again. However, ontological questions seemed to be of somewhat less interest to him. For example, there is a passage (Heisenberg 1927: 197), where he discusses the idea that, behind our observational data, there might still exist a hidden reality in which quantum systems have definite values for position and momentum, unaffected by the uncertainty relations. He emphatically dismisses this conception as an unfruitful and meaningless speculation, because, as he says, the aim of physics is only to describe observable data. 
Similarly, in the Chicago Lectures, he warns against the fact that the human language permits the utterance of statements which have no empirical content, but nevertheless produce a picture in our imagination. He notes, One should be especially careful in using the words “reality”, “actually”, etc., since these words very often lead to statements of the type just mentioned. (Heisenberg 1930: 11) So, Heisenberg also endorsed an interpretation of his relations as rejecting a reality in which particles have simultaneous definite values for position and momentum. The second observation is that although for Heisenberg experimental, informational, epistemological and ontological formulations of his relations were, so to say, just different sides of the same coin, this is not so for those who do not share his operational principles or his view on the task of physics. Alternative points of view, in which e.g., the ontological reading of the uncertainty relations is denied, are therefore still viable. The statement, often found in the literature of the thirties, that Heisenberg had proved the impossibility of associating a definite position and momentum to a particle is certainly wrong. But the precise meaning one can coherently attach to Heisenberg’s relations depends rather heavily on the interpretation one favors for quantum mechanics as a whole. And because no agreement has been reached on this latter issue, one cannot expect agreement on the meaning of the uncertainty relations either. 2.4 Uncertainty relations or uncertainty principle? Let us now move to another question about Heisenberg’s relations: do they express a principle of quantum theory? Probably the first influential author to call these relations a “principle” was Eddington, who, in his Gifford Lectures of 1928 referred to them as the “Principle of Indeterminacy”. In the English literature the name uncertainty principle became most common. It is used both by Condon and Robertson in 1929, and also in the English version of Heisenberg’s Chicago Lectures (Heisenberg 1930), although, remarkably, nowhere in the original German version of the same book (see also Cassidy 1998). Indeed, Heisenberg never seems to have endorsed the name “principle” for his relations. His favourite terminology was “inaccuracy relations” (Ungenauigkeitsrelationen) or “indeterminacy relations” (Unbestimmtheitsrelationen). We know only one passage, in Heisenberg’s own Gifford lectures, delivered in 1955–56 (Heisenberg 1958: 43), where he mentioned that his relations “are usually called relations of uncertainty or principle of indeterminacy”. But this can well be read as his yielding to common practice rather than his own preference. But does the relation (2) qualify as a principle of quantum mechanics? Several authors, foremost Karl Popper (1967), have contested this view. Popper argued that the uncertainty relations cannot be granted the status of a principle on the grounds that they are derivable from the theory, whereas one cannot obtain the theory from the uncertainty relations. (The argument being that one can never derive any equation, say, the Schrödinger equation, or the commutation relation (1) , from an inequality.) Popper’s argument is, of course, correct but we think it misses the point. There are many statements in physical theories which are called principles even though they are in fact derivable from other statements in the theory in question. 
A more appropriate departing point for this issue is not the question of logical priority but rather Einstein’s distinction between “constructive theories” and “principle theories”. Einstein proposed this famous classification in Einstein 1919. Constructive theories are theories which postulate the existence of simple entities behind the phenomena. They endeavour to reconstruct the phenomena by framing hypotheses about these entities. Principle theories, on the other hand, start from empirical principles, i.e., general statements of empirical regularities, employing no or only a bare minimum of theoretical terms. The purpose is to build up the theory from such principles. That is, one aims to show how these empirical principles provide sufficient conditions for the introduction of further theoretical concepts and structure. The prime example of a theory of principle is thermodynamics. Here the role of the empirical principles is played by the statements of the impossibility of various kinds of perpetual motion machines. These are regarded as expressions of brute empirical fact, providing the appropriate conditions for the introduction of the concepts of energy and entropy and their properties. (There is a lot to be said about the tenability of this view, but that is not our topic here.) Now obviously, once the formal thermodynamic theory is built, one can also derive the impossibility of the various kinds of perpetual motion. (They would violate the laws of energy conservation and entropy increase.) But this derivation should not misguide one into thinking that they were no principles of the theory after all. The point is just that empirical principles are statements that do not rely on the theoretical concepts (in this case entropy and energy) for their meaning. They are interpretable independently of these concepts and, further, their validity on the empirical level still provides the physical content of the theory. A similar example is provided by special relativity, another theory of principle, which Einstein deliberately designed after the ideal of thermodynamics. Here, the empirical principles are the light postulate and the relativity principle. Again, once we have built up the modern theoretical formalism of the theory (Minkowski space-time), it is straightforward to prove the validity of these principles. But again this does not count as an argument for claiming that they were no principles after all. So the question whether the term “principle” is justified for Heisenberg’s relations, should, in our view, be understood as the question whether they are conceived of as empirical principles. One can easily show that this idea was never far from Heisenberg’s intentions. We have already seen that Heisenberg presented the relations as the result of a “pure fact of experience”. A few months after his 1927 paper, he wrote a popular paper “Über die Grundprincipien der Quantenmechanik” (“On the fundamental principles of quantum mechanics”) where he made the point even more clearly. Here Heisenberg described his recent break-through in the interpretation of the theory as follows: “It seems to be a general law of nature that we cannot determine position and velocity simultaneously with arbitrary accuracy”. Now actually, and in spite of its title, the paper does not identify or discuss any “fundamental principle” of quantum mechanics. 
So, it must have seemed obvious to his readers that he intended to claim that the uncertainty relation was a fundamental principle, forced upon us as an empirical law of nature, rather than a result derived from the formalism of the theory. This reading of Heisenberg’s intentions is corroborated by the fact that, even in his 1927 paper, applications of his relation frequently present the conclusion as a matter of principle. For example, he says “In a stationary state of an atom its phase is in principle indeterminate” (Heisenberg 1927: 177, [emphasis added]). Similarly, in a paper of 1928, he described the content of his relations as: It has turned out that it is in principle impossible to know, to measure the position and velocity of a piece of matter with arbitrary accuracy. (Heisenberg 1984: 26, [emphasis added]) So, although Heisenberg did not originate the tradition of calling his relations a principle, it is not implausible to attribute the view to him that the uncertainty relations represent an empirical principle that could serve as a foundation of quantum mechanics. In fact, his 1927 paper expressed this desire explicitly: Surely, one would like to be able to deduce the quantitative laws of quantum mechanics directly from their anschaulich foundations, that is, essentially, relation [ (2) ]. (ibid: 196) This is not to say that Heisenberg was successful in reaching this goal, or that he did not express other opinions on other occasions. Let us conclude this section with three remarks. First, if the uncertainty relation is to serve as an empirical principle, one might well ask what its direct empirical support is. In Heisenberg’s analysis, no such support is mentioned. His arguments concerned thought experiments in which the validity of the theory, at least at a rudimentary level, is implicitly taken for granted. Jammer (1974: 82) conducted a literature search for high precision experiments that could seriously test the uncertainty relations and concluded they were still scarce in 1974. Real experimental support for the uncertainty relations in experiments in which the inaccuracies are close to the quantum limit have come about only more recently (see Kaiser, Werner, and George 1983; Uffink 1985; Nairz, Andt, and Zeilinger 2002). A second point is the question whether the theoretical structure or the quantitative laws of quantum theory can indeed be derived on the basis of the uncertainty principle, as Heisenberg wished. Serious attempts to build up quantum theory as a full-fledged Theory of Principle on the basis of the uncertainty principle have never been carried out. Indeed, the most Heisenberg could and did claim in this respect was that the uncertainty relations created “room” (Heisenberg 1927: 180) or “freedom” (Heisenberg 1931: 43) for the introduction of some non-classical mode of description of experimental data, not that they uniquely lead to the formalism of quantum mechanics. A serious proposal to approach quantum mechanics as a theory of principle was provided more recently by Bub (2000) and Chiribella & Spekkens (2016). But, remarkably, this proposal does not use the uncertainty relation as one of its fundamental principles. Third, it is remarkable that in his later years Heisenberg put a somewhat different gloss on his relations. 
In his autobiography Der Teil und das Ganze of 1969 he described how he had found his relations inspired by a remark by Einstein that “it is the theory which decides what one can observe”—thus giving precedence to theory above experience, rather than the other way around. Some years later he even admitted that his famous discussions of thought experiments were actually trivial since … if the process of observation itself is subject to the laws of quantum theory, it must be possible to represent its result in the mathematical scheme of this theory. (Heisenberg 1975: 6) 2.5 Mathematical elaboration When Heisenberg introduced his relation, his argument was based only on qualitative examples. He did not provide a general, exact derivation of his relations.[ 3 ] Indeed, he did not even give a definition of the uncertainties \(\delta q\), etc., occurring in these relations. Of course, this was consistent with the announced goal of that paper, i.e., to provide some qualitative understanding of quantum mechanics for simple experiments. The first mathematically exact formulation of the uncertainty relations is due to Kennard. He proved in 1927 the theorem that for all normalized state vectors \(\ket{\psi}\) the following inequality holds: \[\tag{9} \Delta_{\psi}\bP \Delta_{\psi}\bQ \ge \hslash/2 \] Here, \(\Delta_{\psi}\bP\) and \(\Delta_{\psi}\bQ\) are standard deviations of position and momentum in the state vector \(\ket{\psi}\), i.e., \[\tag{10} \begin{align*} (\Delta_{\psi}\bP)^2 &= \expval{\bP^2}_{\psi} - \expval{\bP}_{\psi}^2 \\ (\Delta_{\psi}\bQ)^2 &= \expval{\bQ^2}_{\psi} - \expval{\bQ}_{\psi}^2 \end{align*}\] where \(\expval{\cdot}_{\psi} = \expvalexp{\cdot}{\psi}\) denotes the expectation value in state \(\ket{\psi}\). Equivalently we can use the wave function \(\psi(q)\) and its Fourier transform: \[\begin{align*} \tag{11} \psi(q) &= \braket{q}{\psi} \\ \notag \tilde{\psi}(p) & = \braket{p}{\psi} =\frac{1}{\sqrt{2\pi \hbar} }\int \! \! dq\, e^{-ipq/\hbar} \psi(q) \end{align*}\] to write \[\begin{align*} (\Delta_\psi {\bQ})^2 & = \! \int\!\! dq\, \abs{\psi(q)}^2 q^2 - \left(\int \!\!dq \, \abs{\psi(q)}^2 q \right)^2 \\ (\Delta_\psi {\bP})^2 & = \! \int \!\!dp \, \abs{\tilde{\psi}(p)}^2 p^2 - \left(\int\!\!dp \, \abs{\tilde{\psi}(p)}^2 p \right)^2 \end{align*}\] The inequality (9) was generalized by Robertson (1929) who proved that for all observables (self-adjoint operators) \(\bA\) and \(\bB\): \[\tag{12} \Delta _{\psi}\bA \Delta_{\psi}\bB \ge \frac{1}{2} \abs{\expval{[\bA,\bB]}_{\psi}} \] where \([\bA,\bB] := \bA\bB - \bB\bA\) denotes the commutator. Since the above inequalities (9) and (12) have the virtue of being exact, in contrast to Heisenberg’s original semi-quantitative formulation, it is tempting to regard them as the exact counterpart of Heisenberg’s relations (2) – (4) . Indeed, such was Heisenberg’s own view. In his Chicago Lectures (Heisenberg 1930: 15–19), he presented Kennard’s derivation of relation (9) and claimed that “this proof does not differ at all in mathematical content” from his semi-quantitative argument, the only difference being that now “the proof is carried through exactly”. But it may be useful to point out that both in status and intended role there is a difference between Kennard’s inequality and Heisenberg’s previous formulation (2) . The inequalities discussed here are not statements of empirical fact, but theorems of the quantum mechanical formalism. 
As such, they presuppose the validity of this formalism, and in particular the commutation relation (1) , rather than elucidating its intuitive content or to create “room” or “freedom” for the validity of this formalism. At best, one should see the above inequalities as showing that the formalism is consistent with Heisenberg’s empirical principle. This situation is similar to that arising in other theories of principle where, as noted in Section 2.4 , one often finds that, next to an empirical principle, the formalism also provides a corresponding theorem. And similarly, this situation should not, by itself, cast doubt on the question whether Heisenberg’s relation can be regarded as a principle of quantum mechanics. There is a second notable difference between (2) and (9) . Heisenberg did not give a general definition for the “uncertainties” \(\delta p\) and \(\delta q\). The most definite remark he made about them was that they could be taken as “something like the mean error”. In the discussions of thought experiments, he and Bohr would always quantify uncertainties on a case-to-case basis by choosing some parameters which happened to be relevant to the experiment at hand. By contrast, the inequalities (9) and (12) employ a single specific expression as a measure for “uncertainty”: the standard deviation. At the time, this choice was not unnatural, given that this expression is well-known and widely used in error theory and the description of statistical fluctuations. However, there was very little or no discussion of whether this choice was appropriate for a general formulation of the uncertainty relations. A standard deviation reflects the spread or expected fluctuations in a series of measurements of an observable in a given state. It is not at all easy to connect this idea with the concept of the “inaccuracy” of a measurement, such as the resolving power of a microscope. In fact, even though Heisenberg had taken Kennard’s inequality as the precise formulation of the uncertainty relation, he and Bohr never relied on standard deviations in their many discussions of thought experiments, and indeed, it has been shown (Uffink and Hilgevoord 1985; Hilgevoord and Uffink 1988) that these discussions cannot be framed in terms of standard deviations. Another problem with the above elaboration is that the “well-known” relations (5) are actually false if energy \(\boldsymbol{E}\) and action \(\boldsymbol{J}\) are to be positive operators (Jordan 1927). In that case, self-adjoint operators \(\boldsymbol{t}\) and \(\boldsymbol{w}\) do not exist and inequalities analogous to (9) cannot be derived. Also, these inequalities do not hold for angle and angular momentum (Uffink 1990). These obstacles have led to a quite extensive literature on time-energy and angle-action uncertainty relations (Busch 1990; Hilgevoord 1996, 1998, 2005; Muga et al. 2002; Hilgevoord and Atkinson 2011; Pashby 2015). 3. Bohr In spite of the fact that Heisenberg’s and Bohr’s views on quantum mechanics are often lumped together as (part of) “the Copenhagen interpretation”, there is considerable difference between their views on the uncertainty relations. 3.1 From wave-particle duality to complementarity Long before the development of modern quantum mechanics, Bohr had been particularly concerned with the problem of particle-wave duality, i.e., the problem that experimental evidence on the behaviour of both light and matter seemed to demand a wave picture in some cases, and a particle picture in others. 
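A minimal numerical sketch (not from the encyclopedia entry) of Kennard's inequality (9): for a Gaussian wave packet, the minimum-uncertainty state, the product of the standard deviations of position and momentum should come out very close to ħ/2, i.e., 0.5 in units where ħ = 1. The grid, width, and offsets below are arbitrary illustrative choices.

```python
import numpy as np

# Check Kennard's inequality (9) for a Gaussian wave packet, with hbar = 1.
hbar = 1.0
N = 2048
q = np.linspace(-40.0, 40.0, N)
dq_grid = q[1] - q[0]

sigma, q0, p0 = 1.7, 3.0, 2.0   # width, position offset, momentum offset
psi = np.exp(-(q - q0) ** 2 / (4 * sigma ** 2)) * np.exp(1j * p0 * q / hbar)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dq_grid)   # normalize on the grid

# Position statistics from |psi(q)|^2.
prob_q = np.abs(psi) ** 2 * dq_grid
mean_q = np.sum(q * prob_q)
delta_q = np.sqrt(np.sum((q - mean_q) ** 2 * prob_q))

# Momentum statistics from the discrete Fourier transform; normalizing the
# discrete spectrum to a unit-sum distribution over p makes the constant
# prefactor of the continuous transform irrelevant for the moments.
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dq_grid)
psi_p = np.fft.fft(psi)
prob_p = np.abs(psi_p) ** 2
prob_p /= prob_p.sum()
mean_p = np.sum(p * prob_p)
delta_p = np.sqrt(np.sum((p - mean_p) ** 2 * prob_p))

print(delta_q, delta_p, delta_q * delta_p)   # expect ~1.7, ~0.29, ~0.5
```

For non-Gaussian states the same computation gives a product strictly larger than 0.5, in line with (9).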
Yet these pictures are mutually exclusive. Whereas a particle is always localized, the very definition of the notions of wavelength and frequency requires an extension in space and in time. Moreover, the classical particle picture is incompatible with the characteristic phenomenon of interference. His long struggle with wave-particle duality had prepared him for a radical step when the dispute between matrix and wave mechanics broke out in 1926–27. For the main contestants, Heisenberg and Schrödinger, the issue at stake was which view could claim to provide a single coherent and universal framework for the description of the observational data. The choice was, essentially between a description in terms of continuously evolving waves, or else one of particles undergoing discontinuous quantum jumps. By contrast, Bohr insisted that elements from both views were equally valid and equally needed for an exhaustive description of the data. His way out of the contradiction was to renounce the idea that the pictures refer, in a literal one-to-one correspondence, to physical reality. Instead, the applicability of these pictures was to become dependent on the experimental context. This is the gist of the viewpoint he called “complementarity”. Bohr first conceived the general outline of his complementarity argument in early 1927, during a skiing holiday in Norway, at the same time when Heisenberg wrote his uncertainty paper. When he returned to Copenhagen and found Heisenberg’s manuscript, they got into an intense discussion. On the one hand, Bohr was quite enthusiastic about Heisenberg’s ideas which seemed to fit wonderfully with his own thinking. Indeed, in his subsequent work, Bohr always presented the uncertainty relations as the symbolic expression of his complementarity viewpoint. On the other hand, he criticized Heisenberg severely for his suggestion that these relations were due to discontinuous changes occurring during a measurement process. Rather, Bohr argued, their proper derivation should start from the indispensability of both particle and wave concepts. He pointed out that the uncertainties in the experiment did not exclusively arise from the discontinuities but also from the fact that in the experiment we need to take into account both the particle theory and the wave theory. It is not so much the unknown disturbance which renders the momentum of the electron uncertain but rather the fact that the position and the momentum of the electron cannot be simultaneously defined in this experiment (see the “Addition in Proof” to Heisenberg’s paper). We shall not go too deeply into the matter of Bohr’s interpretation of quantum mechanics since we are mostly interested in Bohr’s view on the uncertainty principle. For a more detailed discussion of the former we refer to Scheibe (1973), Folse (1985), Honner (1987) and Murdoch (1987). It may be useful, however, to sketch some of the main points. Central in Bohr’s considerations is the language we use in physics. No matter how abstract and subtle the concepts of modern physics may be, they are essentially an extension of our ordinary language and a means to communicate the results of our experiments. These results, obtained under well-defined experimental circumstances, are what Bohr calls the “phenomena”. A phenomenon is “the comprehension of the effects observed under given experimental conditions” (Bohr 1939: 24), it is the resultant of a physical object, a measuring apparatus and the interaction between them in a concrete experimental situation. 
The essential difference between classical and quantum physics is that in quantum physics the interaction between the object and the apparatus cannot be made arbitrarily small; the interaction must at least comprise one quantum. This is expressed by Bohr’s quantum postulate: [… the] essence [of the formulation of the quantum theory] may be expressed in the so-called quantum postulate, which attributes to any atomic process an essential discontinuity or rather individuality, completely foreign to classical theories and symbolized by Planck’s quantum of action. (Bohr 1928: 580) A phenomenon, therefore, is an indivisible whole and the result of a measurement cannot be considered as an autonomous manifestation of the object itself independently of the measurement context. The quantum postulate forces upon us a new way of describing physical phenomena: In this situation, we are faced with the necessity of a radical revision of the foundation for the description and explanation of physical phenomena. Here, it must above all be recognized that, however far quantum effects transcend the scope of classical physical analysis, the account of the experimental arrangement and the record of the observations must always be expressed in common language supplemented with the terminology of classical physics. (Bohr 1948: 313) This is what Scheibe (1973) has called the “buffer postulate” because it prevents the quantum from penetrating into the classical description: A phenomenon must always be described in classical terms; Planck’s constant does not occur in this description. Together, the two postulates induce the following reasoning. In every phenomenon the interaction between the object and the apparatus comprises at least one quantum. But the description of the phenomenon must use classical notions in which the quantum of action does not occur. Hence, the interaction cannot be analysed in this description. On the other hand, the classical character of the description allows us to speak in terms of the object itself. Instead of saying: “the interaction between a particle and a photographic plate has resulted in a black spot in a certain place on the plate”, we are allowed to forgo mentioning the apparatus and say: “the particle has been found in this place”. The experimental context, rather than changing or disturbing pre-existing properties of the object, defines what can meaningfully be said about the object. Because the interaction between object and apparatus is left out in our description of the phenomenon, we do not get the whole picture. Yet, any attempt to extend our description by performing the measurement of a different observable quantity of the object, or indeed, on the measurement apparatus, produces a new phenomenon and we are again confronted with the same situation. Because of the unanalyzable interaction in both measurements, the two descriptions cannot, generally, be united into a single picture. They are what Bohr calls complementary descriptions: [the quantum of action]…forces us to adopt a new mode of description designated as complementary in the sense that any given application of classical concepts precludes the simultaneous use of other classical concepts which in a different connection are equally necessary for the elucidation of the phenomena. (Bohr 1929: 10) The most important example of complementary descriptions is provided by the measurements of the position and momentum of an object. 
If one wants to measure the position of the object relative to a given spatial frame of reference, the measuring instrument must be rigidly fixed to the bodies which define the frame of reference. But this implies the impossibility of investigating the exchange of momentum between the object and the instrument, and we are cut off from obtaining any information about the momentum of the object. If, on the other hand, one wants to measure the momentum of an object, the measuring instrument must be able to move relative to the spatial reference frame. Bohr here assumes that a momentum measurement involves the registration of the recoil of some movable part of the instrument and the use of the law of momentum conservation. The looseness of the part of the instrument with which the object interacts entails that the instrument cannot serve to accurately determine the position of the object. Since a measuring instrument cannot be rigidly fixed to the spatial reference frame and, at the same time, be movable relative to it, the experiments which serve to precisely determine the position and the momentum of an object are mutually exclusive. Of course, in itself, this is not at all typical for quantum mechanics. But, because the interaction between object and instrument during the measurement can neither be neglected nor determined, the two measurements cannot be combined. This means that in the description of the object one must choose between the assignment of a precise position or of a precise momentum. Similar considerations hold with respect to the measurement of time and energy. Just as the spatial coordinate system must be fixed by means of solid bodies, so must the time coordinate be fixed by means of unperturbed, synchronised clocks. But it is precisely this requirement which prevents one from taking into account the exchange of energy with the instrument if this is to serve its purpose. Conversely, any conclusion about the object based on the conservation of energy prevents following its development in time. The conclusion is that in quantum mechanics we are confronted with a complementarity between two descriptions which are united in the classical mode of description: the space-time description (or coordination) of a process and the description based on the applicability of the dynamical conservation laws. The quantum forces us to give up the classical mode of description (also called the “causal” mode of description by Bohr[ 4 ]): it is impossible to form a classical picture of what is going on when radiation interacts with matter as, e.g., in the Compton effect. Any arrangement suited to study the exchange of energy and momentum between the electron and the photon must involve a latitude in the space-time description sufficient for the definition of wave-number and frequency which enter in the relation [\(E = h\nu\) and \(p = h\sigma\)]. Conversely, any attempt of locating the collision between the photon and the electron more accurately would, on account of the unavoidable interaction with the fixed scales and clocks defining the space-time reference frame, exclude all closer account as regards the balance of momentum and energy. (Bohr 1949: 210) A causal description of the process cannot be attained; we have to content ourselves with complementary descriptions. “The viewpoint of complementarity may be regarded”, according to Bohr, “as a rational generalization of the very ideal of causality”.
In addition to complementary descriptions Bohr also talks about complementary phenomena and complementary quantities. Position and momentum, as well as time and energy, are complementary quantities.[ 5 ] We have seen that Bohr’s approach to quantum theory puts heavy emphasis on the language used to communicate experimental observations, which, in his opinion, must always remain classical. By comparison, he seemed to put little value on arguments starting from the mathematical formalism of quantum theory. This informal approach is typical of all of Bohr’s discussions on the meaning of quantum mechanics. One might say that for Bohr the conceptual clarification of the situation has primary importance while the formalism is only a symbolic representation of this situation. This is remarkable since, finally, it is the formalism which needs to be interpreted. This neglect of the formalism is one of the reasons why it is so difficult to get a clear understanding of Bohr’s interpretation of quantum mechanics and why it has aroused so much controversy. We close this section by citing from an article of 1948 to show how Bohr conceived the role of the formalism of quantum mechanics: The entire formalism is to be considered as a tool for deriving predictions, of definite or statistical character, as regards information obtainable under experimental conditions described in classical terms and specified by means of parameters entering into the algebraic or differential equations of which the matrices or the wave-functions, respectively, are solutions. These symbols themselves, as is indicated already by the use of imaginary numbers, are not susceptible to pictorial interpretation; and even derived real functions like densities and currents are only to be regarded as expressing the probabilities for the occurrence of individual events observable under well-defined experimental conditions. (Bohr 1948: 314) 3.2 Bohr’s view on the uncertainty relations In his Como lecture, published in 1928, Bohr gave his own version of a derivation of the uncertainty relations between position and momentum and between time and energy. He started from the relations \[\tag{13} E = h\nu \text{ and } p = h/\lambda\] which connect the notions of energy \(E\) and momentum \(p\) from the particle picture with those of frequency \(\nu\) and wavelength \(\lambda\) from the wave picture. He noticed that a wave packet of limited extension in space and time can only be built up by the superposition of a number of elementary waves with a large range of wave numbers and frequencies. Denoting the spatial and temporal extensions of the wave packet by \(\Delta x\) and \(\Delta t\), and the extensions in the wave number \(\sigma := 1/\lambda\) and frequency by \(\Delta \sigma\) and \(\Delta \nu\), it follows from Fourier analysis that in the most favorable case \(\Delta x \Delta \sigma \approx \Delta t \Delta \nu \approx 1\), and, using (13), one obtains the relations \[\tag{14} \Delta t \Delta E \approx \Delta x \Delta p \approx h\] Note that \(\Delta x, \Delta \sigma\), etc., are not standard deviations but unspecified measures of the size of a wave packet. (The original text has equality signs instead of approximate equality signs, but, since Bohr does not define the spreads exactly the use of approximate equality signs seems more in line with his intentions. Moreover, Bohr himself used approximate equality signs in later presentations.) 
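The reciprocity behind (13)–(14) is just the Fourier reciprocity of widths, which is easy to verify numerically. The sketch below is our own illustration, not part of Bohr's argument: it builds a Gaussian wave packet, computes the spread of \(\abs{\psi(x)}^2\) and of its wave-number distribution, and confirms that the product is of order one. Here the wave number is \(\sigma = 1/\lambda\) (cycles per unit length), which is the convention used by numpy's fftfreq; with standard deviations as the measure of spread, the Gaussian packet gives the smallest possible product, \(1/4\pi\). The grid size, packet width and carrier wave number are arbitrary choices.

```python
import numpy as np

# Numerical check (ours) of the Fourier reciprocity underlying relation (14).
N = 2**14
L = 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

width, sigma0 = 3.0, 0.5                       # packet width and carrier wave number
psi = np.exp(-x**2 / (2 * width**2)) * np.exp(2j * np.pi * sigma0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalise

# spread of the position distribution |psi(x)|^2
rho_x = np.abs(psi)**2
mean_x = np.sum(x * rho_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * rho_x) * dx)

# spread of the wave-number distribution |psi~(sigma)|^2
sigma = np.fft.fftfreq(N, d=dx)                # wave numbers sigma = 1/lambda
psi_t = np.fft.fft(psi) * dx
d_sigma = 1.0 / (N * dx)
rho_s = np.abs(psi_t)**2
rho_s /= np.sum(rho_s) * d_sigma
mean_s = np.sum(sigma * rho_s) * d_sigma
delta_s = np.sqrt(np.sum((sigma - mean_s)**2 * rho_s) * d_sigma)

print(f"Delta_x * Delta_sigma = {delta_x * delta_s:.4f}   (1/(4*pi) = {1/(4*np.pi):.4f})")
print(f"in terms of p = h*sigma: Delta_x * Delta_p = {2*np.pi*delta_x*delta_s:.3f} * hbar")
```

Bohr's \(\Delta\)'s are coarser, unspecified measures of overall size, for which the product is of order one rather than \(1/4\pi\); multiplying by \(h\) then reproduces (14).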
These equations determine, according to Bohr: the highest possible accuracy in the definition of the energy and momentum of the individuals associated with the wave field. (Bohr 1928: 571). He noted, This circumstance may be regarded as a simple symbolic expression of the complementary nature of the space-time description and the claims of causality. (ibid).[ 6 ] We note a few points about Bohr’s view on the uncertainty relations. First of all, Bohr does not refer to discontinuous changes in the relevant quantities during the measurement process. Rather, he emphasizes the possibility of defining these quantities. This view is markedly different from Heisenberg’s view. A draft version of the Como lecture is even more explicit on the difference between Bohr and Heisenberg: These reciprocal uncertainty relations were given in a recent paper of Heisenberg as the expression of the statistical element which, due to the feature of discontinuity implied in the quantum postulate, characterizes any interpretation of observations by means of classical concepts. It must be remembered, however, that the uncertainty in question is not simply a consequence of a discontinuous change of energy and momentum say during an interaction between radiation and material particles employed in measuring the space-time coordinates of the individuals. According to the above considerations the question is rather that of the impossibility of defining rigorously such a change when the space-time coordination of the individuals is also considered. (Bohr 1985: 93) Indeed, Bohr not only rejected Heisenberg’s argument that these relations are due to discontinuous disturbances implied by the act of measuring, but also his view that the measurement process creates a definite result: The unaccustomed features of the situation with which we are confronted in quantum theory necessitate the greatest caution as regard all questions of terminology. Speaking, as it is often done of disturbing a phenomenon by observation, or even of creating physical attributes to objects by measuring processes is liable to be confusing, since all such sentences imply a departure from conventions of basic language which even though it can be practical for the sake of brevity, can never be unambiguous. (Bohr 1939: 24) Nor did he approve of an epistemological formulation or one in terms of experimental inaccuracies: […] a sentence like “we cannot know both the momentum and the position of an atomic object” raises at once questions as to the physical reality of two such attributes of the object, which can be answered only by referring to the mutual exclusive conditions for an unambiguous use of space-time concepts, on the one hand, and dynamical conservation laws on the other hand. (Bohr 1948: 315; also Bohr 1949: 211) It would in particular not be out of place in this connection to warn against a misunderstanding likely to arise when one tries to express the content of Heisenberg’s well-known indeterminacy relation by such a statement as “the position and momentum of a particle cannot simultaneously be measured with arbitrary accuracy”. According to such a formulation it would appear as though we had to do with some arbitrary renunciation of the measurement of either the one or the other of two well-defined attributes of the object, which would not preclude the possibility of a future theory taking both attributes into account on the lines of the classical physics. 
(Bohr 1937: 292) Instead, Bohr always stressed that the uncertainty relations are first and foremost an expression of complementarity. This may seem odd since complementarity is a dichotomic relation between two types of description whereas the uncertainty relations allow for intermediate situations between two extremes. They “express” the dichotomy in the sense that if we take the energy and momentum to be perfectly well-defined, symbolically \(\Delta E = \Delta p\) = 0, the position and time variables are completely undefined, \(\Delta x = \Delta t = \infty\), and vice versa. But they also allow intermediate situations in which the mentioned uncertainties are all non-zero and finite. This more positive aspect of the uncertainty relation is mentioned in the Como lecture: At the same time, however, the general character of this relation makes it possible to a certain extent to reconcile the conservation laws with the space-time coordination of observations, the idea of a coincidence of well-defined events in space-time points being replaced by that of unsharply defined individuals within space-time regions. (Bohr 1928: 571) However, Bohr never followed up on this suggestion that we might be able to strike a compromise between the two mutually exclusive modes of description in terms of unsharply defined quantities. Indeed, an attempt to do so, would take the formalism of quantum theory more seriously than the concepts of classical language, and this step Bohr refused to take. Instead, in his later writings he would be content with stating that the uncertainty relations simply defy an unambiguous interpretation in classical terms: These so-called indeterminacy relations explicitly bear out the limitation of causal analysis, but it is important to recognize that no unambiguous interpretation of such a relation can be given in words suited to describe a situation in which physical attributes are objectified in a classical way. (Bohr 1948: 315) Finally, on a more formal level, we note that Bohr’s derivation does not rely on the commutation relations (1) and (5) , but on Fourier analysis. These two approaches are equivalent as far as the relationship between position and momentum is concerned, but this is not so for time and energy since most physical systems do not have a time operator. Indeed, in his discussion with Einstein (Bohr 1949), Bohr considered time as a simple classical variable. This even holds for his famous discussion of the “clock-in-the-box” thought-experiment where the time, as defined by the clock in the box, is treated from the point of view of classical general relativity. Thus, in an approach based on commutation relations, the position-momentum and time-energy uncertainty relations are not on equal footing, which is contrary to Bohr’s approach in terms of Fourier analysis. For more details see (Hilgevoord 1996 and 1998). 4. The Minimal Interpretation In the previous two sections we have seen how both Heisenberg and Bohr attributed a far-reaching status to the uncertainty relations. They both argued that these relations place fundamental limits on the applicability of the usual classical concepts. Moreover, they both believed that these limitations were inevitable and forced upon us. However, we have also seen that they reached such conclusions by starting from radical and controversial assumptions. This entails, of course, that their radical conclusions remain unconvincing for those who reject these assumptions. 
Indeed, the operationalist-positivist viewpoint adopted by these authors has long since lost its appeal among philosophers of physics. So the question may be asked what alternative views of the uncertainty relations are still viable. Of course, this problem is intimately connected with that of the interpretation of the wave function, and hence of quantum mechanics as a whole. Since there is no consensus about the latter, one cannot expect consensus about the interpretation of the uncertainty relations either. Here we only describe a point of view, which we call the “minimal interpretation”, that seems to be shared by both the adherents of the Copenhagen interpretation and of other views. In quantum mechanics a system is supposed to be described by its wave function, also called its quantum state or state vector. Given the state vector \(\ket{\psi}\), one can derive probability distributions for all the physical quantities pertaining to the system, usually called its observables, such as its position, momentum, angular momentum, energy, etc. The operational meaning of these probability distributions is that they correspond to the distribution of the values obtained for these quantities in a long series of repetitions of the measurement. More precisely, one imagines a great number of copies of the system under consideration, all prepared in the same way. On each copy the momentum, say, is measured. Generally, the outcomes of these measurements differ and a distribution of outcomes is obtained. The theoretical momentum distribution derived from the quantum state is supposed to coincide with the hypothetical distribution of outcomes obtained in an infinite series of repetitions of the momentum measurement. The same holds, mutatis mutandis, for all the other physical quantities pertaining to the system. Note that no simultaneous measurements of two or more quantities are required in defining the operational meaning of the probability distributions. The uncertainty relations discussed above can be considered as statements about the spreads of the probability distributions of the several physical quantities arising from the same state. For example, the uncertainty relation between the position and momentum of a system may be understood as the statement that the position and momentum distributions cannot both be arbitrarily narrow—in some sense of the word “narrow”—in any quantum state. Inequality (9) is an example of such a relation in which the standard deviation is employed as a measure of spread. From this characterization of uncertainty relations it follows that a more detailed interpretation of the quantum state than the one given in the previous paragraph is not required to study uncertainty relations as such. In particular, a further ontological or linguistic interpretation of the notion of uncertainty, as limits on the applicability of our concepts given by Heisenberg or Bohr, need not be supposed. Of course, this minimal interpretation leaves the question open whether it makes sense to attribute precise values of position and momentum to an individual system. Some interpretations of quantum mechanics, e.g., those of Heisenberg and Bohr, deny this, while others, e.g., the interpretation of de Broglie and Bohm, insist that each individual system has a definite position and momentum (see the entry on Bohmian mechanics).
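The ensemble reading just described is easy to simulate. The sketch below is our own illustration: it assumes a Gaussian wave packet, for which the position and momentum distributions are themselves Gaussian with spreads \(w/\sqrt{2}\) and \(\hbar/(w\sqrt{2})\), samples the two hypothetical measurement ensembles separately (no joint measurement is involved), and compares the product of the sample spreads with the bound of relation (9). The width parameter, sample size and units (\(\hbar = 1\)) are arbitrary choices.

```python
import numpy as np

# Toy simulation (ours) of the ensemble reading of relation (9).
rng = np.random.default_rng(0)
hbar = 1.0
w = 0.7                                # width parameter of the Gaussian packet
n_runs = 100_000                       # size of each measurement ensemble

sigma_q = w / np.sqrt(2)               # exact spread of the position distribution
sigma_p = hbar / (w * np.sqrt(2))      # exact spread of the momentum distribution

q_outcomes = rng.normal(0.0, sigma_q, n_runs)   # ensemble of position measurements
p_outcomes = rng.normal(0.0, sigma_p, n_runs)   # ensemble of momentum measurements,
                                                # made on different copies of the state
print(f"sample Delta_q * Delta_p = {q_outcomes.std() * p_outcomes.std():.4f}")
print(f"bound  hbar/2            = {hbar / 2:.4f}")
```

Since the Gaussian packet saturates (9), the printed product comes out at about 0.5, and no choice of state would drive it appreciably lower.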
The only requirement is that, as an empirical fact, it is not possible to prepare pure ensembles in which all systems have the same values for these quantities, or ensembles in which the spreads are smaller than allowed by quantum theory. Although interpretations of quantum mechanics in which each system has a definite value for its position and momentum are still viable, this is not to say that they are without strange features of their own; they do not imply a return to classical physics. We end with a few remarks on this minimal interpretation. First, it may be noted that the minimal interpretation of the uncertainty relations is little more than filling in the empirical meaning of inequality (9). As such, this view shares many of the limitations we have noted above about this inequality. Indeed, it is not straightforward to relate the spread in a statistical distribution of measurement results to the inaccuracy of this measurement (such as, e.g., the resolving power of a microscope), or to a disturbance of the system by the measurement. Moreover, the minimal interpretation does not address the question whether one can make simultaneous accurate measurements of position and momentum. As a matter of fact, one can show that the standard formalism of quantum mechanics does not allow such simultaneous measurements. But this is not a consequence of relation (9). Rather, it follows from the fact that this formalism simply does not contain any observable that would accomplish such a task. The extension of this formalism that allows observables to be represented by positive-operator-valued measures, or POVM’s, does allow the formal introduction of observables describing joint measurements (see also section 6.1). But even here, for the case of position and momentum, one finds that such measurements have to be “unsharp”, which entails that they cannot be regarded as simultaneous accurate measurements. If one feels that statements about inaccuracy of measurement, or the possibility of simultaneous measurements, belong to any satisfactory formulation of the uncertainty principle, one will need to look for other formulations of the uncertainty principle. Some candidates for such formulations will be discussed in Section 6. First, however, we will look at formulations of the uncertainty principle that stay firmly within the minimal interpretation, and differ from (9) only by using measures of uncertainty other than the standard deviation.

5. Alternative measures of uncertainty

While the standard deviation is the most well-known quantitative measure for uncertainty or the spread in the probability distribution, it is not the only one, and indeed it has distinctive drawbacks that other such measures may lack. For example, in the definition of the standard deviations (11) one can see that the probability density function \(\abs{\psi(q)}^2\) is weighted by a quadratic factor \(q^2\) that puts increasing emphasis on its tails. Therefore, the value of \(\Delta_\psi \bQ\) will depend predominantly on how this density behaves at the tails: if these fall off very quickly, e.g., like a Gaussian, it will be small, but if the tails drop off only slowly, the standard deviation may be very large, even when most of the probability is concentrated in a small interval.
The upshot of this objection is that having a lower bound on the product of the standard deviations of position and momentum, as the Heisenberg-Kennard uncertainty relation (9) gives, does not by itself rule out a state where both the probability densities for position and momentum are extremely concentrated, in the sense of having more than \((1- \epsilon)\) of their probability concentrated in a region of size smaller than \(\delta\), for any choice of \(\epsilon, \delta >0\). This means, in our view, that relation (9) actually fails to express what most physicists would take to be the very core idea of the uncertainty principle. One way to deal with this objection is to consider alternative measures to quantify the spread or uncertainty associated with a probability density. Here we discuss two such proposals.

5.1 Landau-Pollak uncertainty relations

The most straightforward alternative is to pick some value \(\alpha\) close to one, say \(\alpha = 0.9\), and ask for the width of the smallest interval that supports the fraction \(\alpha\) of the total probability distribution in position, and similarly for momentum: \[\begin{align*} \tag{15} W_{\alpha}(\bQ, \psi) &:= \inf_{I} \left\{ \abs{I} : \int_I {\abs{\psi(q)}}^2 dq \geq \alpha \right\} \\ \notag W_{\beta}(\bP,\psi) &:= \inf_{I} \left\{ \abs{I} : \int_I \abs{\tilde\psi(p)}^2 dp \geq \beta \right\} \end{align*}\] In a previous work (Uffink and Hilgevoord 1985) we called such measures bulk widths, because they indicate how concentrated the “bulk” (i.e., fraction \(\alpha\) or \(\beta\)) of the probability distribution is. Landau and Pollak (1961) obtained an uncertainty relation in terms of these bulk widths: \[\begin{align*} \tag{16} W_\alpha (\bQ, \psi) W_\beta (\bP, \psi) &\geq 2\pi \hbar \left( \alpha \beta - \sqrt{(1-\alpha)(1-\beta)} \right)^2 \\ \notag &\mbox{if } \alpha + \beta \geq 1 \end{align*}\] This Landau-Pollak inequality shows that if the choices of \(\alpha, \beta\) are not too low, there is a state-independent lower bound on the product of the bulk widths of the position and momentum distribution for any quantum state. Note that bulk widths are not so sensitive to the behavior of the tails of the distributions and, therefore, the Landau-Pollak inequality is immune to the objection above. Thus, this inequality expresses constraints on quantum mechanical states not contained in relation (9). Further, by the well-known Bienaymé-Chebyshev inequality, one has \[\begin{align*} \tag{17} W_\alpha (\bQ,\psi) &\leq \frac{2}{\sqrt {1- \alpha}} \Delta_\psi \bQ \\ \notag W_\beta (\bP, \psi) &\leq \frac{2}{\sqrt {1- \beta}} \Delta_\psi \bP \end{align*}\] so that inequality (16) implies (by choosing \(\alpha,\beta\) optimally) that \( \Delta_\psi \bQ \Delta_\psi \bP \geq 0.12 \hbar \). This, obviously, is not the best lower bound for the product of standard deviations, but the important point here is that the Landau-Pollak inequality (16) in terms of bulk widths implies the existence of a lower bound on the product of standard deviations, while conversely, the Heisenberg-Kennard inequality (9) does not imply any bound on the product of bulk widths. A generalization of this approach to non-commuting observables in a finite-dimensional Hilbert space is discussed in Uffink 1990.

5.2 Entropic uncertainty relations

Another approach to express the uncertainty principle is to use entropic measures of uncertainty.
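Before turning to those, the bulk widths (15) and the Landau-Pollak bound (16) just discussed can be illustrated numerically. The sketch below is our own construction: it discretises a Gaussian wave packet on a grid, finds the smallest window carrying at least a fraction \(\alpha\) of the position (respectively momentum) probability, and compares the product of the two widths with the bound. The grid, the width parameter \(w\) and the choice \(\alpha = \beta = 0.9\) are arbitrary; \(\hbar\) is set to 1.

```python
import numpy as np

# Numerical illustration (ours) of the bulk widths (15) and the bound (16).
hbar = 1.0
w = 1.0
x = np.linspace(-30.0, 30.0, 20001)
dx = x[1] - x[0]
rho_q = np.exp(-x**2 / w**2) / (w * np.sqrt(np.pi))                  # |psi(q)|^2
rho_p = (w / (hbar * np.sqrt(np.pi))) * np.exp(-(x * w / hbar)**2)   # |psi~(p)|^2

def bulk_width(grid, density, alpha):
    """Width of the smallest window of grid points carrying probability >= alpha,
    a discretised version of W_alpha in (15)."""
    step = grid[1] - grid[0]
    cdf = np.concatenate(([0.0], np.cumsum(density) * step))
    best = np.inf
    for i in range(len(grid)):
        j = np.searchsorted(cdf, cdf[i] + alpha)   # first index with enough mass
        if j < len(cdf):
            best = min(best, (j - i) * step)
    return best

alpha = beta = 0.9
Wq = bulk_width(x, rho_q, alpha)
Wp = bulk_width(x, rho_p, beta)
bound = 2 * np.pi * hbar * (alpha * beta - np.sqrt((1 - alpha) * (1 - beta)))**2
print(f"W_alpha(Q) * W_beta(P) = {Wq * Wp:.3f}")
print(f"Landau-Pollak bound    = {bound:.3f}")
```

For this (minimum-uncertainty) state the product comes out at roughly 5.4, comfortably above the bound of roughly 3.2.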
The foremost example of these is the Shannon entropy, which for the position and momentum distribution of a given state vector \(\ket{\psi}\) may be defined as: \[\begin{align*} \tag{18} H(\bQ, \psi) &:= -\int \abs{\psi(q)}^2 \ln \abs{\psi(q)}^2 dq \\ \notag H(\bP, \psi) &:= -\int \abs{\tilde{\psi}(p)}^2 \ln \abs{\tilde{\psi}(p)}^2 dp \end{align*}\] One can then show (see Beckner 1975; Białynicki-Birula and Mycielski 1975) that \[\tag{19} H(\bQ, \psi) + H(\bP,\psi) \geq \ln (e \pi \hbar) \] A nice feature of this entropic uncertainty relation is that it provides a strict improvement of the Heisenberg-Kennard relation. That is to say, one can show (independently of quantum theory) that for any probability density function \(p(x)\) \[\tag{20} -\int\! p(x) \ln p(x) dx \leq \ln (\sqrt{2 \pi e} \Delta x )\] Applying this to the inequality (19) we get: \[\tag{21} \frac{\hbar}{2} \leq (2\pi e)^{-1} \exp (H(\bQ, \psi) + H(\bP,\psi)) \leq \Delta_\psi \bQ \Delta_\psi \bP \] showing that the entropic uncertainty relation implies the Heisenberg-Kennard uncertainty relation. A drawback of this relation is that it does not completely evade the objection mentioned above (i.e., these entropic measures of uncertainty can become as large as one pleases while \(1-\epsilon\) of the probability in the distribution is concentrated on a very small interval), but the examples needed to show this are admittedly more far-fetched. For non-commuting observables in an \(n\)-dimensional Hilbert space, one can similarly define an entropic uncertainty in the probability distribution \(\abs{\braket{a_i}{\psi}}^2\) for a given state \(\ket{\psi}\) and a complete set of eigenstates \(\ket{a_i}\), \( (i= 1, \ldots n)\), of the observable \(\bA\): \[\tag{22} H(\bA ,\psi) := -\sum_{i=1}^n \abs{\braket{a_i}{\psi}}^2 \ln \abs{\braket{a_i}{\psi}}^2 \] and \(H(\bB,\psi)\) similarly in terms of the probability distribution \(\abs{\braket{b_j}{\psi}}^2\) for a complete set of eigenstates \(\ket{b_j}\), (\(j =1, \ldots, n\)) of observable \(\bB\). Then we obtain the uncertainty relation (Maassen and Uffink 1988): \[\tag{23} H(\bA, \psi) + H(\bB, \psi) \geq -2 \ln \max_{i,j} \abs{\braket{a_i}{b_j}}, \] which was further generalized and improved by Frank and Lieb (2012). The most important advantage of these relations is that, in contrast to Robertson’s inequality (12), the lower bound is a positive constant, independent of the state.

6. Uncertainty relations for inaccuracy and disturbance

Both the standard deviation and the alternative measures of uncertainty considered in the previous subsection (and many more that we have not mentioned!) are designed to indicate the width or spread of a single given probability distribution. Applied to quantum mechanics, where the probability distributions for position and momentum are obtained from a given quantum state vector, one can use them to formulate uncertainty relations that characterize the spread in those distributions for any given state. The resulting inequalities then express limitations on what state-preparations quantum mechanics allows. They are thus expressions of what may be called a preparation uncertainty principle: In quantum mechanics, it is impossible to prepare any system in a state \(\ket{\psi}\) such that its position and momentum are both precisely predictable, in the sense of having both the expected spread in a measurement of position and the expected spread in a momentum measurement arbitrarily small.
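Before examining measurement-oriented formulations, the preparation relations of the previous section can be checked in concrete cases. The following sketch is our own illustration, not part of the original discussion: part (a) evaluates the entropic relation (19) for a Gaussian wave packet, which attains the bound \(\ln(e\pi\hbar)\), and part (b) evaluates the finite-dimensional relation (23) for a qubit with \(\bA = \sigma_z\) and \(\bB = \sigma_x\), for which every overlap is \(1/\sqrt{2}\) and the bound is \(\ln 2\). All numerical choices (grid, width, random state) are arbitrary.

```python
import numpy as np

hbar = 1.0

# (a) continuous case: Gaussian packet on a grid, checking relation (19)
x = np.linspace(-40, 40, 40001)
dx = x[1] - x[0]
w = 1.3
rho_q = np.exp(-x**2 / w**2) / (w * np.sqrt(np.pi))                  # |psi(q)|^2
rho_p = (w / (hbar * np.sqrt(np.pi))) * np.exp(-(x * w / hbar)**2)   # |psi~(p)|^2

def diff_entropy(rho, step):
    rho = rho[rho > 0]                     # zero entries contribute nothing
    return -np.sum(rho * np.log(rho)) * step

H_Q = diff_entropy(rho_q, dx)
H_P = diff_entropy(rho_p, dx)
print(f"H_Q + H_P = {H_Q + H_P:.4f}, bound ln(e*pi*hbar) = {np.log(np.e * np.pi * hbar):.4f}")

# (b) qubit case: sigma_z and sigma_x eigenbases, checking relation (23)
def shannon(probs):
    probs = probs[probs > 1e-12]
    return -np.sum(probs * np.log(probs))

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
c = max(abs(np.vdot(a, b)) for a in z_basis for b in x_basis)

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)     # random pure qubit state
psi /= np.linalg.norm(psi)
H_A = shannon(np.array([abs(np.vdot(a, psi))**2 for a in z_basis]))
H_B = shannon(np.array([abs(np.vdot(b, psi))**2 for b in x_basis]))
print(f"H_A + H_B = {H_A + H_B:.4f}, bound -2 ln c = {-2 * np.log(c):.4f}")
```

In part (a) the sum of entropies reproduces the bound \(\ln(e\pi) \approx 2.14\) (in units \(\hbar = 1\)); in part (b) the sum stays above \(\ln 2 \approx 0.69\) for every state tried.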
The relations (9), (16) and (19) all belong to this category; the only difference being that they employ different measures of spread: viz. the standard deviation, the bulk width or the Shannon entropy. Note that in this formulation, there is no reference to simultaneous or joint measurements, nor to any notion of accuracy like the resolving power of the measurement instrument, nor to the issue of how much the system in the state that is being measured is disturbed by this measurement. This section is devoted to attempts that go beyond the mold of this preparation uncertainty principle.

6.1 The recent debate on error-disturbance relations

We have seen that in 1927 Heisenberg argued that the measurement of (say) position must necessarily disturb the conjugate variable (i.e., momentum) by an amount that is inversely proportional to the inaccuracy of measurement of the former. We have also seen that this idea was not maintained in Kennard's uncertainty relation (9), a relation that was embraced by Heisenberg (1930) and most textbooks. A rather natural question thus arises whether there are further inequalities in quantum mechanics that would address Heisenberg’s original thinking more directly, i.e., that do deal with how much one variable is disturbed by the accurate measurement of another. That is, we will look at attempts that would establish a claim which may be called a measurement uncertainty principle: In quantum mechanics, there is no measurement procedure by which one can accurately measure the position of a system without disturbing its momentum, in the sense that some measure of inaccuracy in position and some measure of the disturbance of momentum of the system by the measurement cannot both be arbitrarily small. This formulation of the uncertainty principle has always remained controversial. Uncertainty relations that would express this alleged principle are often called “error-disturbance” relations or “noise-disturbance” relations. We will look at two recent proposals to search for such relations: Ozawa (2003) and Busch, Lahti, and Werner (2013). In Ozawa’s approach, we assume that a system \(\cal S\) of interest in state \(\ket{\psi}\) is coupled to a measurement device \(\cal M\) in state \(\ket{\chi}\), and their interaction is governed by a unitary operator \(U\). On the Hilbert space of the joint system, the observable \(\bQ\) of the system \(\cal S\) we are interested in is represented by \[\tag{24} \bQ_{\rm in} = \bQ \otimes \mathbb{1}\] The measurement interaction will allow us to perform an (inaccurate) measurement of this quantity by reading off a pointer observable \(\boldsymbol{Q'}\) of the measurement device after the interaction. Hence this inaccurate observable may be represented as \[\tag{25} \bQ'_{\rm out} = U^\dagger( \mathbb{1} \otimes \bQ') U\] The measure of noise in the measurement of \(\bQ\) is then chosen as: \[\tag{26} \epsilon_\psi(\bQ) := \expval{(\bQ'_{\rm out} - \bQ_{\rm in})^2}_{\psi \otimes \chi}^{1/2}\] A comparison of the initial momentum \(\bP_{\rm in} = \bP \otimes \mathbb{1}\) and the final momentum after the measurement \(\bP_{\rm out} = U^\dagger (\bP \otimes \mathbb{1})U\) is made by choosing a measure of the disturbance of \(\bP\) by the measurement procedure: \[\tag{27} \eta_\psi(\bP):= \expval{(\bP_{\rm in} - \bP_{\rm out})^2}_{\psi\otimes\chi}^{1/2} \] Ozawa obtained an inequality involving those two measures, which, however, is more involved than previous uncertainty relations.
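To get a feel for what the definitions (24)–(27) amount to, here is a small finite-dimensional analogue of our own devising (not Ozawa's continuous-variable setting): a qubit system is read out by a qubit pointer through a CNOT interaction, with \(\sigma_z\) playing the role of \(\bQ\), \(\sigma_x\) the role of \(\bP\), and the pointer's \(\sigma_z\) as the pointer observable \(\bQ'\). The pointer then tracks \(\sigma_z\) perfectly, so the noise \(\epsilon\) vanishes while the disturbance \(\eta\) does not, and the product \(\epsilon\,\eta\) is zero for this scheme — a toy version of the point made in the next paragraph.

```python
import numpy as np

# Finite-dimensional toy instantiation (ours) of definitions (24)-(27).
I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

U = np.array([[1, 0, 0, 0],          # CNOT: the system qubit controls the pointer
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)

Q_in = np.kron(Z, I2)                          # eq. (24): "Q" = sigma_z of the system
P_in = np.kron(X, I2)                          # "P" = sigma_x of the system
Qp_out = U.conj().T @ np.kron(I2, Z) @ U       # eq. (25): pointer sigma_z read after U
P_out = U.conj().T @ np.kron(X, I2) @ U        # system sigma_x after the interaction

psi = np.array([0.6, 0.8], dtype=complex)      # system state
chi = np.array([1.0, 0.0], dtype=complex)      # pointer ready state |0>
state = np.kron(psi, chi)

def expval(op):
    return np.real(state.conj() @ op @ state)

eps = np.sqrt(expval((Qp_out - Q_in) @ (Qp_out - Q_in)))   # noise, eq. (26)
eta = np.sqrt(expval((P_in - P_out) @ (P_in - P_out)))     # disturbance, eq. (27)
print(f"epsilon(Q) = {eps:.3f}, eta(P) = {eta:.3f}, product = {eps * eta:.3f}")
# -> epsilon = 0 (the pointer tracks sigma_z exactly), eta = sqrt(2),
#    so the product epsilon * eta vanishes for this measurement scheme.
```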
For our purposes, however, the important point is that Ozawa showed that the product \(\epsilon_\psi (\bQ) \eta_\psi (\bP)\) has no positive lower bound. His conclusion from this was that Heisenberg’s noise-disturbance relation is violated. Yet, whether Ozawa’s result indeed succeeds in formulating Heisenberg’s qualitative discussion of disturbance and accuracy in the microscope example has come under dispute. See Busch, Lahti and Werner (2013, and 2014 (Other Internet Resources)), and Ozawa (2013, Other Internet Resources). An objection raised in this dispute is that a quantity like \(\expval{(\bQ'_{\rm out} - \bQ_{\rm in})^2}^{1/2}\) tells us very little about how well the observable \({\bQ'}_{\rm out}\) can stand in as an inaccurate measurement of \(\bQ_{\rm in}\). The main point to observe here is that these operators generally do not commute, and that measurements of \(\bQ'_{\rm out}\), of \(\bQ_{\rm in}\) and of their difference will require altogether three different measurement contexts. To require that \(\epsilon_\psi(\bQ)\) vanishes, for example, means only that the state prepared belongs to the linear subspace corresponding to the zero eigenvalue of the operator \(\bQ'_{\rm out} - {\bQ}_{\rm in}\), and therefore that \(\expval{\bQ'_{\rm out}}_\psi = \expval{\bQ_{\rm in}}_\psi\), but this does not preclude that the probability distribution of \(\bQ'_{\rm out}\) in state \(\psi\) might be wildly different from that of \(\bQ_{\rm in}\). But then no one would think of \(\bQ'_{\rm out}\) as an accurate measurement of \(\bQ_{\rm in}\), so that the definition of \(\epsilon_\psi(\bQ)\) does not express what it is supposed to express. A similar objection can also be raised against \(\eta_\psi (\bP)\). Another observation is that Ozawa’s conclusion that there is no lower bound for his error-disturbance product is not at all surprising. That is, even without probing the system by a measurement apparatus, one can show that such a lower bound does not exist. If the initial state of a system is prepared at time \(t=0\) as a Gaussian quasi-monochromatic wave packet with \(\expval{\bQ_0}_\psi =0\) and evolves freely, we can use a time-of-flight measurement to learn about its later position. Ehrenfest’s theorem tells us: \(\expval{\bQ_t}_\psi = \frac{t}{m} \expval{\bP}_\psi\). Hence, as an approximative measurement of the position \(\bQ_t\), one could propose the observable \(\bQ'_t = \frac{t}{m}\bP\). It is known that under the stated conditions (and with \(m\) and \(t\) large) this approximation holds very well, i.e., we not only have \(\expval{\bQ'_t -\bQ_t}_\psi =0\), but also \(\expval{(\bQ'_t -\bQ_t)^2}_\psi \approx 0\), as nearly as we please. But since \(\bQ'_t\) is just the momentum multiplied by a constant, its measurement will obviously not disturb the momentum of the system. In other words, for this example, one has \(\epsilon_\psi (\bQ)\) as small as we please with zero disturbance of the momentum. Therefore, any hopes that there could be a positive lower bound for the product \(\epsilon_\psi (\bQ) \eta_\psi (\bP)\) seem to be dashed, even with the simplest of measurement schemes, i.e., a free evolution. Ozawa’s results do not show that Heisenberg’s analysis of the microscope argument was wrong. Rather, they throw doubt on the appropriateness of the definitions Ozawa used to formalize Heisenberg’s informal argument. An entirely different analysis of the problem of substantiating a measurement uncertainty relation was offered by Busch, Lahti, and Werner (2013).
These authors consider a measurement device \(\cal M\) that makes a joint unsharp measurement of both position and momentum. To describe such joint unsharp measurements, they employ the extended modern formalism that characterizes observables not by self-adjoint operators but by positive-operator-valued measures (POVM’s). In the present case, this means that the measurement procedure is characterized by a collection of positive operators, \(M(p,q)\), where the pair \(p,q\) represent the outcome variables of the measurement, with \[\tag{28} M(p,q) \geq 0, \iint \! dp dq \, M(p,q) =\mathbb{1} .\] The two marginals of this POVM, \[\tag{29} \begin{align*} M_1(q) &= \int\! dp M(p,q)\\ M_2(p) &= \int\! dq M(p,q) \end{align*} \] are also POVM’s in their own right and represent the unsharp position \(Q'\) and unsharp momentum \(P'\) observables respectively. (Note that these do not refer to self-adjoint operators!) For a system prepared in a state \(\ket{\psi}\), the joint probability density of obtaining outcomes \((p,q)\) in the joint unsharp measurement (28) is then \[\tag{30} \rho(p,q) := \expvalexp{M(p,q)}{\psi},\] while the marginals of this joint probability distribution give the distributions for \(Q'\) and \(P'\): \[\begin{align*} \tag{31} \mu'(q) &:= \int \! dp \, \rho(p,q) = \expvalexp{M_1(q)}{\psi} \\ \notag \nu'(p) &:= \int \! dq \, \rho(p,q) = \expvalexp{M_2(p)}{\psi} \end{align*}\] Since a joint sharp measurement of position and momentum is impossible in quantum mechanics, these marginal distributions (31) obtained from \(M\) will differ from those of ideal measurements of \(\bQ\) and of \(\bP\) on the system of interest in state \(\ket{\psi}\). However, one can indicate how much these marginals deviate from separate exact position and momentum measurements on the state \(\ket{\psi}\) by a pairwise comparison of (31) to the exact distributions \[\begin{align*} \tag{32} \mu(q) &:= \abs{\braket{q}{\psi}}^2 \\ \notag \nu(p) &:= \abs{\braket{p}{\psi}}^2 \end{align*}\] In order to do so, Busch, Lahti, and Werner (henceforth BLW) propose a distance function \(D\) between probability distributions, such that \(D(\mu, \mu')\) tells us how close the marginal position distribution \(\mu'(q)\) for the unsharp position \(Q'\) is to the exact distribution \(\mu(q)\) in a sharp position measurement, and likewise, \(D(\nu ,\nu')\) tells us how close the marginal momentum distribution \(\nu'(p)\) for \(P'\) is to the exact momentum distribution \(\nu(p)\). The distance they chose is the Wasserstein-2 distance, a.k.a. (a variation on) the earth-mover's distance. Definition (Wasserstein-2 distance): Let \(\mu(x)\) and \(\mu'(y)\) be any two probability distributions on the real line, and \(\gamma(x,y)\) any joint probability distribution that has \(\mu'\) and \(\mu\) as its marginals. Then: \[\tag{33} D(\mu, \mu') := \inf_\gamma \left(\iint (x-y)^2 \gamma (x,y) dx dy \right)^{1/2}\] Applying this definition to the case at hand, i.e.
pairwise to the quantum mechanical distributions \(\mu'(q)\) and \(\mu(q)\) and to \(\nu'(p)\) and \(\nu(p)\) in (31) and (32), BLW’s final step is to take a supremum over all possible input states \(\ket{\psi}\) to obtain \[\tag{34} \begin{align*} \Delta(Q, Q') & = \sup_{\ket{\psi}} D(\mu, \mu') \\ \Delta(P, P') & = \sup_{\ket{\psi}} D(\nu, \nu') \end{align*} \] From these definitions, they obtain \[\tag{35}\Delta(Q, Q') \Delta (P,P') \geq \frac{\hbar}{2}\] Arguing that \(\Delta(Q, Q')\) provides a sensible measure for the inaccuracy or noise of the position measurement, and \(\Delta(P, P')\) for the disturbance of momentum by any such joint unsharp measurement, the authors conclude, in contrast to Ozawa’s analysis, that an error-disturbance uncertainty relation does hold, which they take as “a remarkable vindication of Heisenberg’s intuitions” in the microscope thought experiment. Comparing the two, there are a few positive remarks to make about the BLW approach. First of all, by focusing on the distance (33) this approach compares entire probability distributions rather than just the expectations of operator differences. When this distance is very small, one is justified in concluding that the distribution has changed very little under the measurement procedure. This brings us closer to the conclusion that the error or disturbance introduced is small. Secondly, by introducing a supremum over all states to obtain \(\Delta( Q, Q')\), it follows that when this latter expression is small, the measured distribution \(\mu'\) differs only little from the exact distribution \(\mu\) whatever the state of the system is. As the authors argue, this means that \(\Delta(Q,Q')\) can be seen as a figure-of-merit of the measurement device alone, and in this sense analogous to the resolving power of a microscope. But we also think there is an undesirable feature of the BLW approach. This is due to the supremum over states appearing twice, both in \(\Delta(Q,Q')\) and in \(\Delta(P,P')\). This feature, we argue, deprives their result of practical applicability. To elucidate: In concrete applications, one would prepare a system in some state (not exactly known) and perform a given joint measurement \(M\) of \(Q'\) and \(P'\). If it is given that, say, \(\Delta(Q,Q')\) is very small, one can safely infer that \(Q\) has been measured with small inaccuracy, since this guarantees that the measured position distribution differs very little from what an exact position measurement would give, regardless of the state of the system. Now, one would like to be able to infer that in this case the disturbance of the momentum (the deviation of \(P'\) from \(P\)) must be considerable for the state prepared. But the BLW relation only gives us: \[ \Delta(P, P') = \sup_{\ket{\psi}} D(\nu, \nu') \geq \frac{\hbar}{2 \Delta(Q, Q')} \] and this does not imply anything for the state in question! Thus, the BLW uncertainty relation does not rule out that for some states it might be possible to perform a joint measurement in which both \(D(\mu, \mu')\) and \(D(\nu, \nu')\) are very small, and in this sense have negligible error and disturbance. It seems premature to say that this vindicates Heisenberg’s intuitions. Summing up, we emphasize that there is no contradiction between the BLW analysis and the Ozawa analysis: where Ozawa claims that the product of two quantities might for some states be less than the usual limit, BLW show that the product of different quantities will satisfy this limit.
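For concreteness, the Wasserstein-2 distance (33) is easy to compute for distributions on the real line, since in one dimension the optimal coupling is the monotone one and \(D(\mu,\mu')^2 = \int_0^1 (F^{-1}(t)-G^{-1}(t))^2\,dt\), with \(F\) and \(G\) the two cumulative distribution functions. The sketch below is our own illustration (the test distributions, grid and resolution parameter are arbitrary): it compares an exact Gaussian position distribution with a broadened one, of the kind an unsharp position measurement with Gaussian resolution would produce; for two centred Gaussians the distance is just the difference of their standard deviations, which the numerics reproduce.

```python
import numpy as np

# Sketch (ours) of the Wasserstein-2 distance (33) for 1D distributions,
# computed from the quantile functions on a grid.
def w2_distance(grid, rho1, rho2, n_quantiles=20000):
    step = grid[1] - grid[0]
    cdf1 = np.cumsum(rho1) * step
    cdf2 = np.cumsum(rho2) * step
    cdf1, cdf2 = cdf1 / cdf1[-1], cdf2 / cdf2[-1]      # guard against discretisation error
    t = (np.arange(n_quantiles) + 0.5) / n_quantiles
    q1 = np.interp(t, cdf1, grid)                      # quantile function F^{-1}(t)
    q2 = np.interp(t, cdf2, grid)
    return np.sqrt(np.mean((q1 - q2)**2))

# exact position distribution mu and a broadened marginal mu', as produced by
# an unsharp measurement with Gaussian resolution s (both centred Gaussians)
grid = np.linspace(-20, 20, 8001)
var_exact = 0.5
s = 0.5
var_unsharp = var_exact + s**2
mu = np.exp(-grid**2 / (2 * var_exact)) / np.sqrt(2 * np.pi * var_exact)
mup = np.exp(-grid**2 / (2 * var_unsharp)) / np.sqrt(2 * np.pi * var_unsharp)

print(f"D(mu, mu')       = {w2_distance(grid, mu, mup):.3f}")
print(f"|sigma - sigma'| = {abs(np.sqrt(var_exact) - np.sqrt(var_unsharp)):.3f}")
```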
The dispute is not about mathematical validity, but about how well suited these quantities are to capturing Heisenberg’s qualitative considerations. The present authors feel that, in this dispute, Ozawa’s analysis fails to be convincing. On the other hand, we also think that the BLW uncertainty relation is not satisfactory. Also, we would like to remark that both protagonists employ measures that are akin to standard deviations in being very sensitive to the tail behavior of probability distributions, and thus face an objection similar to the one raised in section 5. The final word in this dispute on whether a measurement uncertainty principle holds has not been reached, in our view.

Bibliography

Bacciagaluppi, G. and A. Valentini, 2009, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge: Cambridge University Press.
Beller, M., 1999, Quantum Dialogue, Chicago: University of Chicago Press.
Beckner, W., 1975, “Inequalities in Fourier analysis”, Annals of Mathematics, 102: 159–182.
Białynicki-Birula, I. and J. Mycielski, 1975, “Uncertainty relations for information entropy in wave mechanics”, Communications in Mathematical Physics, 44: 129–132.
Bohr, N., 1928, “The Quantum postulate and the recent development of atomic theory”, Nature, (Supplement) 121: 580–590. Also in Bohr 1934, Wheeler and Zurek 1983, and Bohr 1985.
–––, 1929, “Introductory survey”, in Bohr 1934: 1–24.
–––, 1934, Atomic Theory and the Description of Nature, Cambridge: Cambridge University Press. Reissued in 1961. Appeared also as Volume I of The Philosophical Writings of Niels Bohr, Woodbridge, CT: Ox Bow Press, 1987.
–––, 1937, “Causality and complementarity”, Philosophy of Science, 4: 289–298.
–––, 1939, “The causality problem in atomic physics”, in New Theories in Physics, Paris: International Institute of Intellectual Co-operation. Also in Bohr 1996: 303–322.
–––, 1948, “On the notions of causality and complementarity”, Dialectica, 2: 312–319. Also in Bohr 1996: 330–337.
–––, 1949, “Discussion with Einstein on epistemological problems in atomic physics”, in Albert Einstein: Philosopher-Scientist (The Library of Living Philosophers, Vol. VII), P.A. Schilpp (ed.), La Salle: Open Court, pp. 201–241.
–––, 1985, Collected Works, Volume 6, J. Kalckar (ed.), Amsterdam: North-Holland.
–––, 1996, Collected Works, Volume 7, J. Kalckar (ed.), Amsterdam: North-Holland.
Bub, J., 2000, “Quantum mechanics as a principle theory”, Studies in History and Philosophy of Modern Physics, 31B: 75–94.
Busch, P., 1990, “On the energy-time uncertainty relation”, Foundations of Physics, 20: 1–32, 33–43.
Busch, P., P. Lahti, and R. Werner, 2013, “Proof of Heisenberg’s error-disturbance relation”, Physical Review Letters, 111: 160405. doi:10.1103/PhysRevLett.111.160405
Cassidy, D.C., 1992, Uncertainty: The Life and Science of Werner Heisenberg, New York: Freeman.
–––, 1998, “Answer to the question: When did the indeterminacy principle become the uncertainty principle?”, American Journal of Physics, 66: 278–279.
Chiribella, G. and R.W. Spekkens, 2016, Quantum Theory, Informational Foundations and Foils, Dordrecht: Springer.
Condon, E.U., 1929, “Remarks on uncertainty principles”, Science, 69: 573–574.
Eddington, A., 1928, The Nature of the Physical World, Cambridge: Cambridge University Press.
Einstein, A., 1919, “My Theory”, The Times (London), November 28, p. 13; reprinted as “What is the theory of relativity?”, in Ideas and Opinions, New York: Crown Publishers, 1954, pp. 227–232.
Folse, H.J., 1985, The Philosophy of Niels Bohr, Amsterdam: Elsevier.
Frank, R.L. and E.H. Lieb, 2012, “Entropy and the uncertainty principle”, Annales Henri Poincaré, 13: 1711–1717.
Heisenberg, W., 1925, “Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen”, Zeitschrift für Physik, 33: 879–893.
–––, 1926, “Quantenmechanik”, Die Naturwissenschaften, 14: 899–894.
–––, 1927, “Ueber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik”, Zeitschrift für Physik, 43: 172–198. English translation in Wheeler and Zurek 1983: 62–84.
–––, 1927, “Ueber die Grundprincipien der ‘Quantenmechanik’”, Forschungen und Fortschritte, 3: 83.
–––, 1928, “Erkenntnistheoretische Probleme der modernen Physik”, in Heisenberg 1984: 22–28.
–––, 1930, Die Physikalischen Prinzipien der Quantenmechanik, Leipzig: Hirzel. English translation The Physical Principles of Quantum Theory, Chicago: University of Chicago Press, 1930.
–––, 1931, “Die Rolle der Unbestimmtheitsrelationen in der modernen Physik”, Monatshefte für Mathematik und Physik, 38: 365–372.
–––, 1958, Physics and Philosophy, New York: Harper.
–––, 1969, Der Teil und das Ganze, München: Piper.
–––, 1975, “Bemerkungen über die Entstehung der Unbestimmtheitsrelation”, Physikalische Blätter, 31: 193–196. Translation in Price and Chissick 1977.
–––, 1984, Gesammelte Werke, Volume C1, W. Blum, H.-P. Dürr, and H. Rechenberg (eds), München: Piper.
Hilgevoord, J., 1996, “The uncertainty principle for energy and time I”, American Journal of Physics, 64: 1451–1456.
–––, 1998, “The uncertainty principle for energy and time II”, American Journal of Physics, 66: 396–402.
–––, 2002, “Time in quantum mechanics”, American Journal of Physics, 70: 301–306.
–––, 2005, “Time in quantum mechanics: a story of confusion”, Studies in History and Philosophy of Modern Physics, 36: 29–60.
Hilgevoord, J. and D. Atkinson, 2011, “Time in quantum mechanics”, in The Oxford Handbook of Philosophy of Time, C. Callender (ed.), Oxford: Oxford University Press, pp. 647–662.
Hilgevoord, J. and J. Uffink, 1988, “The mathematical expression of the uncertainty principle”, in Microphysical Reality and Quantum Description, A. van der Merwe et al. (eds.), Dordrecht: Kluwer, pp. 91–114.
–––, 1990, “A new view on the uncertainty principle”, in Sixty-Two Years of Uncertainty: Historical and Physical Inquiries into the Foundations of Quantum Mechanics, A.I. Miller (ed.), New York: Plenum, pp. 121–139.
–––, 1991, “Uncertainty in prediction and inference”, Foundations of Physics, 21: 323–341.
Honner, J., 1987, The Description of Nature: Niels Bohr and The Philosophy of Quantum Physics, Oxford: Clarendon Press.
Jammer, M., 1974, The Philosophy of Quantum Mechanics, New York: Wiley.
Jordan, P., 1927, “Über eine neue Begründung der Quantenmechanik II”, Zeitschrift für Physik, 44: 1–25.
Kaiser, H., S.A. Werner, and E.A. George, 1983, “Direct measurement of the longitudinal coherence length of a thermal neutron beam”, Physical Review Letters, 50: 560.
Kennard, E.H., 1927, “Zur Quantenmechanik einfacher Bewegungstypen”, Zeitschrift für Physik, 44: 326–352.
Landau, H.J. and H.O. Pollak, 1961, “Prolate spheroidal wave functions; Fourier analysis and uncertainty II”, Bell Systems Technical Journal, 40: 63–84.
Maassen, H. and J. Uffink, 1988, “Generalized entropic uncertainty relations”, Physical Review Letters, 60: 1103–1106.
Miller, A.I., 1982, “Redefining Anschaulichkeit”, in A. Shimony and H. Feshbach (eds), Physics as Natural Philosophy, Cambridge, MA: MIT Press.
Moore, W., 1989, Schrödinger: Life and Thought, Cambridge: Cambridge University Press, p. 221.
Muga, J.G., R. Sala Mayato, and I.L. Egusquiza (eds.), 2002, Time in Quantum Mechanics, Berlin: Springer.
Muller, F.A., 1997, “The equivalence myth of quantum mechanics”, Studies in History and Philosophy of Modern Physics, 28: 35–61, 219–247; ibid. 30 (1999): 543–545.
Murdoch, D., 1987, Niels Bohr’s Philosophy of Physics, Cambridge: Cambridge University Press.
Nairz, O., M. Arndt, and A. Zeilinger, 2002, “Experimental verification of the Heisenberg uncertainty principle for fullerene molecules”, Physical Review A, 65: 032109. doi:10.1103/PhysRevA.65.032109
Ozawa, M., 2003, “Universally valid formulation of the Heisenberg uncertainty relation on noise and disturbance in measurement”, Physical Review A, 67: 042105.
Pashby, T., 2015, “Time and quantum theory: A history and a prospectus”, Studies in History and Philosophy of Modern Physics, 52: 24–38.
Pauli, W., 1979, Wissenschaftlicher Briefwechsel mit Bohr, Einstein, Heisenberg u.a., Volume 1 (1919–1929), A. Hermann, K. von Meyenn and V.F. Weisskopf (eds), Berlin: Springer.
Popper, K., 1967, “Quantum mechanics without ‘the observer’”, in Quantum Theory and Reality, M. Bunge (ed.), Berlin: Springer.
Price, W.C. and S.S. Chissick (eds), 1977, The Uncertainty Principle and the Foundations of Quantum Mechanics, New York: Wiley.
Regt, H. de, 1997, “Erwin Schrödinger, Anschaulichkeit, and quantum theory”, Studies in History and Philosophy of Modern Physics, 28: 461–481.
Robertson, H.P., 1929, “The uncertainty principle”, Physical Review, 34: 573–574; reprinted in Wheeler and Zurek 1983: 127–128.
Scheibe, E., 1973, The Logical Analysis of Quantum Mechanics, Oxford: Pergamon Press.
Uffink, J., 1985, “Verification of the uncertainty principle in neutron interferometry”, Physics Letters, 108A: 59–62.
–––, 1990, Measures of Uncertainty and the Uncertainty Principle, Ph.D. thesis, University of Utrecht; available online with online errata.
–––, 1993, “The rate of evolution of a quantum state”, American Journal of Physics, 61: 935–936.
–––, 1994, “The joint measurement problem”, International Journal of Theoretical Physics, 33: 199–212.
Uffink, J. and J. Hilgevoord, 1985, “Uncertainty principle and uncertainty relations”, Foundations of Physics, 15: 925–944.
von Neumann, J., 1932, Mathematische Grundlagen der Quantenmechanik, Berlin: J. Springer.
Wheeler, J.A. and W.H. Zurek (eds), 1983, Quantum Theory and Measurement, Princeton, NJ: Princeton University Press.
Quantum mechanics
What did Heike Kamerlingh-Onnes discover?
The-History-of-the-Atom - Werner Heisenberg. Werner Heisenberg was born on December 5, 1901 in Würzburg, Germany. Unfortunately, he passed away on February 1, 1976 of cancer. He attended the University of Munich, in Germany, to study physics. Using his knowledge, he created matrix mechanics, the first version of quantum mechanics, in 1925. After leaving the University of Munich in 1923, he ventured to Gottingen with Max Born to study, then to the Institute of Theoretical Physics in Copenhagen with Niels Bohr. He mainly studied physics, and was appointed Professor of Physics at the University of Munich in 1958. Heisenberg soon became interested in plasma physics, atomic physics, and thermonuclear processes. One of his most memorable discoveries is the Uncertainty Principle. He said this means that electrons do NOT travel in neat orbits. Also, when electrons interact with photons, their momentum changes. Heisenberg's contribution to the atomic theory was that he calculated the behavior of electrons and of the subatomic particles that make up an atom. Instead of focusing mainly on scientific terms, this idea brought mathematics more into understanding the patterns of an atom's electrons. His discovery helped clarify the modern view of the atom because scientists can compare atoms by the movements of their electrons and by how many electrons an atom contains. Surrounding the outside of an atomic nucleus is an electron cloud, which is a name given to the electrons that are widely spread out and moving around. In conclusion, Werner Heisenberg contributed to the atomic theory by including quantum mechanics, the branch of mechanics, based on quantum theory, used for interpreting the behavior of elementary particles and atoms. This model shows a less complex version of what an atom looks like. Heisenberg noticed behaviors in the electrons that make them alike, and also looked at the path in which they orbit the atomic nucleus. For more information, visit: "Quantum Mechanics | Define Quantum Mechanics at Dictionary.com", Dictionary.com, Web, 30 Nov. 2010, http://dictionary.reference.com/browse/quantum mechanics?&qsrc= ; "Atomic Magic: Werner Heisengerg", Thinkquest.org, Grolier, Web, http://library.thinkquest.org/15567/bio/heisenberg.html .
i don't know
What science is the study of missiles in motion?
ballistics (bəlĭsˈtĭks), science of projectiles. Interior ballistics deals with the propulsion and the motion of a projectile within a gun or firing device. Its problems include the ignition and burning of the propellant powder, the pressure produced by the expanding gases, the movement of the projectile through the bore, and the designing of the barrel to resist resulting stresses and strains. Exterior ballistics is concerned with the motion of a projectile while in flight and includes the study not only of the flight path of bullets but also of bombs, rockets, and missiles. All projectiles traveling through the air are affected by wind, air resistance, and the force of gravity. These forces induce a curved path known as a trajectory. The trajectory varies with the weight and shape of the projectile, with its initial velocity, and with the angle at which it is fired. The general shape of a trajectory is that of a parabola. The total distance traveled by a projectile is known as its range. A ballistic missile in the first stage of its flight is powered and guided by rocket engines. After the engines burn out, the warhead travels in a fixed arc as does an artillery shell. In criminology the term ballistics is applied to the identification of the weapon from which a bullet was fired. Microscopic imperfections in a gun barrel make characteristic scratches and grooves on bullets fired through it, but use causes the marks a particular gun makes to change over time. See E. D. Lowry, Interior Ballistics (1968); R. C. Labile, Ballistic Materials and Penetration Mechanics (1980); A. J. Pejsa, Modern Practical Ballistics (1989); M. Denny, Their Arrows Will Darken the Sun: The Evolution and Science of Ballistics (2011).
Ballistics
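The ballistics entry above notes that, leaving aside wind and air resistance, a projectile's trajectory is roughly a parabola whose range depends on its initial velocity and launch angle. As a minimal illustrative sketch (not part of the encyclopedia entry; it uses the standard vacuum-range formula R = v² sin(2θ) / g and ignores drag entirely), the range over flat ground can be computed like this:

```python
import math

def vacuum_range(speed_m_s: float, launch_angle_deg: float, g: float = 9.81) -> float:
    """Range of a projectile over flat ground with no air resistance:
    R = v^2 * sin(2*theta) / g."""
    theta = math.radians(launch_angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / g

# A 45-degree launch gives the longest range in a vacuum.
print(round(vacuum_range(100.0, 45.0), 1))  # ~1019.4 m
print(round(vacuum_range(100.0, 30.0), 1))  # ~882.8 m
```

Real exterior ballistics must of course add the drag and wind effects the entry describes, which is why actual trajectories fall short of this idealized parabola.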
What kind of elements are found in a pure state in nature?
i don't know
"What are classified by their measurement in degrees as ""right"", ""reflex"", ""obtuse"", or ""acute""?"
Angles - Acute, Obtuse, Straight and Right Angles. An angle measures the amount of turn. Names of Angles: as the angle increases, the name changes. An acute angle is less than 90°, a right angle is exactly 90°, an obtuse angle is greater than 90° but less than 180°, a straight angle is exactly 180°, and a reflex angle is greater than 180°. To remember the order, note that Acute, Obtuse and Reflex are in alphabetical order; also, the letter "A" has an acute angle. Be careful what you measure: an obtuse angle and a reflex angle can sit on the same pair of lines, so when naming the angles make sure that you know which angle is being asked for! Positive and Negative Angles: when measuring from a line, a positive angle goes counterclockwise (the opposite direction that clocks go) and a negative angle goes clockwise, for example −67°. The corner point of an angle is called the vertex, and the two straight sides are called arms; the angle is the amount of turn between the arms. How to Label Angles: there are two main ways to label angles: 1. give the angle a name, usually a lower-case letter like a or b, or sometimes a Greek letter like α (alpha) or θ (theta); 2. or use the three letters on the shape that define the angle, with the middle letter being where the angle actually is (its vertex). Example: angle "a" is "BAC", and angle "θ" is "BCD".
Angles
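The excerpt above classifies angles purely by their size in degrees. A small sketch of that classification (the cutoffs follow the standard convention; the function itself is not from the quoted page):

```python
def classify_angle(degrees: float) -> str:
    """Name an angle by its measurement in degrees (0 < degrees <= 360)."""
    if not 0 < degrees <= 360:
        raise ValueError("expected an angle in the range (0, 360] degrees")
    if degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    if degrees < 360:
        return "reflex"
    return "full rotation"

print([classify_angle(a) for a in (45, 90, 120, 180, 300, 360)])
# ['acute', 'right', 'obtuse', 'straight', 'reflex', 'full rotation']
```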
What is the name given to the longest side of a right-angled triangle?
Angles (Year 7 Interactive Maths - Second Edition). Angles are used in daily life. Engineers and architects use angles for designs, roads, buildings and sporting facilities. Athletes use angles to enhance their performance. Carpenters use angles to make chairs, tables and sofas. Artists use their knowledge of angles to sketch portraits and paintings. If two lines meet (or intersect) at a point, then an angle is formed. The point of intersection of the lines is called the vertex. Lines AB and AC meet at the point A to form an angle. The point A is the vertex of the angle, and the lines that meet to make the angle are called the arms of the angle. Size of an Angle: the amount of turn from one arm of the angle to the other is said to be the size of an angle. The size of an angle is measured in degrees, and the symbol used to represent a degree is °. There are 360° in a full turn (or circle). Note: a degree is defined such that the angle of one full turn (or circle) is 360 degrees. Measuring Angles: a protractor is used to measure angles. In this section, we will consider the use of a protractor that has the shape of a semi-circle and two scales marked from 0° to 180°. The two scales make it easy for us to measure angles facing different ways. To measure the size of angle ABC, place the protractor over the angle so that the centre of the protractor is directly over the angle's vertex, B, and the base line of the protractor is along the arm, BA, of the angle. We use the inner scale to measure the angle ABC, as the arm AB passes through the zero of the inner scale. Following the inner scale around the protractor, we find that the other arm, BC, passes through the inner scale at 60°. So, the size of angle ABC is 60 degrees, written ∠ABC = 60°. To measure the size of angle PQR, place the protractor over the angle so that the centre of the protractor is directly over the angle's vertex, Q, and the base line of the protractor is along the arm, PQ, of the angle. We use the outer scale to measure the angle PQR, as the arm PQ passes through the zero of the outer scale. Following the outer scale around the protractor, we find that the other arm, QR, passes through the outer scale at 120°. So, the size of angle PQR is 120 degrees, written ∠PQR = 120°.
i don't know
Which Swedish scientist had a temperature scale named after him?
Anders Celsius - Centigrade Scale and Thermometer. By Mary Bellis. In 1742, Swedish astronomer, Anders Celsius invented the Celsius temperature scale, which was named after the inventor. Celsius Temperature Scale: The Celsius temperature scale is also referred to as the centigrade scale. Centigrade means "consisting of or divided into 100 degrees". The Celsius scale, invented by Swedish Astronomer Anders Celsius (1701-1744), has 100 degrees between the freezing point (0 °C) and boiling point (100 °C) of pure water at sea level air pressure. The term "Celsius" was adopted in 1948 by an international conference on weights and measures. Anders Celsius was born in Uppsala, Sweden in 1701, where he succeeded his father as professor of astronomy in 1730. It was there that he built Sweden's first observatory in 1741, the Uppsala Observatory, where he was appointed director. He devised the centigrade scale or "Celsius scale" of temperature in 1742. He was also noted for his promotion of the Gregorian calendar, and his observations of the aurora borealis. In 1733, his collection of 316 observations of the aurora borealis was published and in 1737 he took part in the French expedition sent to measure one degree of meridian in the polar regions. In 1741, he directed the building of Sweden's first observatory. One of the major questions of that time was the shape of the Earth. Isaac Newton had proposed that the Earth was not completely spherical, but rather flattened at the poles. Cartographic measuring in France suggested that it was the other way around - the Earth was elongated at the poles. In 1735, one expedition sailed to Ecuador in South America, and another expedition traveled to Northern Sweden. Celsius was the only professional astronomer on that expedition. Their measurements seemed to indicate that the Earth actually was flattened at the poles. Anders Celsius was not only an inventor and astronomer, but also a physicist. He and an assistant discovered that the aurora borealis had an influence on compass needles. However, the thing that made him famous is his temperature scale, which he based on the boiling and melting points of water. This scale, an inverted form of Celsius' original design, was adopted as the standard and is used in almost all scientific work. Anders Celsius died in 1744, at the age of 42. He had started many other research projects, but finished few of them. Among his papers was a draft of a science fiction novel, situated partly on the star Sirius.
Anders Celsius
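The article above defines the Celsius scale by the freezing point (0 °C) and boiling point (100 °C) of water. As a small aside (these are the standard conversions, not something stated in the article), Celsius readings relate to the Fahrenheit and Kelvin scales as follows:

```python
def celsius_to_fahrenheit(c: float) -> float:
    """F = C * 9/5 + 32, so water freezes at 32 F and boils at 212 F."""
    return c * 9.0 / 5.0 + 32.0

def celsius_to_kelvin(c: float) -> float:
    """K = C + 273.15: same size of degree, zero shifted to absolute zero."""
    return c + 273.15

print(celsius_to_fahrenheit(0.0), celsius_to_fahrenheit(100.0))  # 32.0 212.0
print(celsius_to_kelvin(0.0), celsius_to_kelvin(100.0))          # 273.15 373.15
```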
How many colors are there in the spectrum when white light is separated?
Daniel Gabriel Fahrenheit and the Measurement of Temperature (yovisto Blog). Daniel Gabriel Fahrenheit (1686 – 1736). On May 24, 1686, Dutch-German-Polish physicist, engineer, and glass blower Daniel Gabriel Fahrenheit was born. He is best known for his invention of the mercury-in-glass thermometer in 1714, and for developing a temperature scale that is now named after him. During his apprenticeship in Amsterdam, Daniel Fahrenheit began building instruments and traveled through Europe, meeting and exchanging knowledge with contemporary instrument makers. By 1714, he built his first thermometers containing alcohol, which he later changed to mercury, and already made use of a new scale standard even though it did not catch on in the scientific community yet. Subsequently, Daniel Fahrenheit began experimenting with the different properties of water. Based on previous works of Ole Rømer and his scale, he investigated the boiling point of water while changing the pressure. Also, he managed to discover the ability of supercooling water, meaning that water can be cooled below its freezing point without actually freezing. With these new findings, Fahrenheit began to question the general reliability of freezing and boiling points of fluids and developed his temperature scale, ranging from 0 to 212. He noted that the zero point on his scale was the temperature of ice melting in a salt water solution, and 32 degrees marked the temperature of ice melting in clear water. Fahrenheit began building thermometers that became more and more popular. He decided that a cylinder-shaped bulb would be more efficient and made further improvements, which he kept secret for almost 18 years. Despite Daniel Fahrenheit's success with building and distributing thermometers as well as his research on the Fahrenheit scale, the Celsius scale named after the Swedish scientist Anders Celsius slowly replaced Fahrenheit's scale during the metrication process. It is still used in the U.S., some territories of Puerto Rico, and Belize in everyday life, while scientists throughout the world mostly use Celsius or Kelvin.
i don't know
The discovery of which law provoked the surprised cry 'Eureka!'?
PHYSICS FOLIO (authorSTREAM presentation). ENTRY NO. 1: Physics A Natural Philosophy. Physics as natural philosophy: physics is present in our environment at all times. The gravitational theory is the perfect explanation for this natural philosophy. Gravitation, or gravity, is a natural phenomenon by which physical bodies attract with a force proportional to their mass. In everyday life, gravitation is most familiar as the agent that gives weight to objects with mass and causes them to fall to the ground when dropped. ENTRY NO. 2: Physics Really Works. Physics really works: a branch of physics, electromagnetism, can be seen in modern inventions. Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. This is the set-up in a microphone: sound waves enter the microphone and cause it to vibrate back and forth. This subsequently causes the wire loops to oscillate, and the magnetic field through the plane of the loops changes, inducing a current in the wires. In this way sound or wave energy is converted into electrical energy. ENTRY NO. 3: Testing Time - Physics Trivia. Q: What did scientists build in a squash court under a football stadium at the University of Chicago in 1942? A: A nuclear reactor. Q: Which Swedish scientist had a temperature scale named after him? A: Anders Celsius. Q: What is the term used to denote the tendency of an object to remain in a state of rest until acted upon by an external force? A: Inertia. Q: The discovery of which law provoked the surprised cry 'Eureka!'? A: Archimedes' Principle. Q: Which electronic device magnifies the strength of a signal? A: Amplifier. Q: What is an unchanging position in which forces cancel each other out? A: Equilibrium. Q: What was the name of the unit of heat now replaced by the joule? A: Calorie. Q: What is described as an ionized gas with approximately equal numbers of positive and negative charges? A: Plasma. ENTRY NO. 4: Eureka Experience. My Eureka Experience: I have learned a lot of things in my fourth year. One of those lessons was the importance of accuracy in measurement. Once we were asked to measure using the Vernier caliper. At first it was hard because I was not familiar with it, but Aldrin taught me, and now I know how to read its measurement. Eureka! ENTRY NO. 5: Media Hype. The advertisements for shampoos and conditioners are among the best examples of media hype. The commercials say that you will have smoother, healthier hair, just as if you had gone to the parlor. The models and endorsers have hair treatments before the shoot, and some shots are enhanced with camera tricks, so that the hair looks more beautiful and real in the commercial.
Commercials tell us that their products can make our clothes whiter by removing dirt after washing. They show their product alongside another product and compare the two. The t-shirt that was soaked in their product looks much whiter than the other, but it looks like a new shirt rather than the original one. From the old quill to the ordinary pen, and now the pen with USB: in the early days people wrote with quills, just like Jose Rizal, and now there are lots of hi-tech pens, with cameras, with USB, or sometimes made from gold. Even the electric fan evolved: first we had fans made from abaca, then someone invented the electric fan for convenience, and now, because of technology, air conditioners exist.
Archimedes' principle
What is the study and use of frequencies above 20 kHz?
i don't know
What is an unchanging position in which forces cancel each other out?
Equilibria - definition of equilibria by The Free Dictionary (http://www.thefreedictionary.com/equilibria). e·qui·lib·ri·um (ē′kwə-lĭb′rē-əm, ĕk′wə-) n. pl. e·qui·lib·ri·ums or e·qui·lib·ri·a (-rē-ə) 1. A condition in which all acting influences are canceled by others, resulting in a stable, balanced, or unchanging system. 2. Mental or emotional balance. 3. Physics The state of a body or physical system at rest or in unaccelerated motion in which the resultant of all forces acting on it is zero and the sum of all torques about any axis is zero. 4. Chemistry a. The state of a chemical reaction in which its forward and reverse reactions occur at equal rates so that the concentration of the reactants and products does not change with time. b. The state of a system in which more than one phase exists and exchange between phases occurs at equal rates so that there is no net change in the composition of the system. [Latin aequilībrium : aequi-, equi- + lībra, balance.] equilibrium n, pl -riums or -ria (-rɪə) 1. a stable condition in which forces cancel one another 2. a state or feeling of mental balance; composure 3. (General Physics) any unchanging condition or state of a body, system, etc, resulting from the balance or cancelling out of the influences or processes to which it is subjected. See thermodynamic equilibrium 4. (General Physics) physics a state of rest or uniform motion in which there is no resultant force on a body 5. (Chemistry) chem the condition existing when a chemical reaction and its reverse reaction take place at equal rates 6. (General Physics) physics the condition of a system that has its total energy distributed among its component parts in the statistically most probable manner 7. (Physiology) physiol a state of bodily balance, maintained primarily by special receptors in the inner ear 8. (Economics) the economic condition in which there is neither excess demand nor excess supply in a market [C17: from Latin aequilībrium, from aequi- equi- + lībra pound, balance] e•qui•lib•ri•um (ˌi kwəˈlɪb ri əm, ˌɛk wə-) n., pl. -ri•ums, -ri•a (-ri ə) 1. a state of rest or balance due to the equal action of opposing forces. 2. equal balance between any powers, influences, etc.; equality of effect. 3. mental or emotional balance; equanimity. 4. a state or sense of steadiness and proper orientation of the body. 5. the condition existing when a chemical reaction and its reverse reaction proceed at equal rates. [1600–10; < Latin aequilībrium=aequi- equi - + lībr(a) balance] e•quil′i•bra•to`ry (ɪˈkwɪl ə brəˌtɔr i, -ˌtoʊr i) adj. e·qui·lib·ri·um (ē′kwə-lĭb′rē-əm) 1. Physics The state of a body or physical system that is at rest or in constant and unchanging motion. The sum of all forces acting on a body that is in equilibrium is zero (because opposing forces balance each other). A system that is in equilibrium shows no tendency to alter over time. 2. Chemistry The state of a reversible chemical reaction in which its forward and reverse reactions occur at equal rates so that the concentration of the reactants and products remains the same. equilibrium The state of a reversible chemical reaction at which the forward and backward reactions take place at the same rate
Equilibrium
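The dictionary entry above covers both the mechanical sense of equilibrium (opposing forces cancel) and the chemical sense (forward and reverse reactions proceed at equal rates, so concentrations stop changing). A minimal numerical sketch of the chemical sense, assuming a made-up reversible reaction A <-> B with illustrative rate constants (none of this comes from the dictionary entry), shows the system relaxing to the point where the two rates cancel:

```python
# Reversible reaction A <-> B integrated with small explicit Euler steps.
kf, kr = 2.0, 1.0      # illustrative forward and reverse rate constants (1/s)
a, b = 1.0, 0.0        # initial concentrations (mol/L)
dt = 0.001             # time step (s)

for _ in range(20_000):  # 20 s of simulated time
    forward = kf * a     # rate of A -> B
    reverse = kr * b     # rate of B -> A
    a += (reverse - forward) * dt
    b += (forward - reverse) * dt

# At equilibrium forward == reverse, so b/a approaches kf/kr = 2
# and the concentrations settle near a = 1/3, b = 2/3.
print(round(a, 4), round(b, 4), round(b / a, 3))
```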
Which physicist's law states that equal volumes of all gases, measured at the same temperature and pressure, contain the same number of molecules?
i don't know
What is the ability of fluids to offer resistance to flow?
Characteristics of Fluids. The principal difference in the mechanical behavior of fluids compared to solids is that when a shear stress is applied to a fluid it experiences a continuing and permanent distortion. Fluids offer no permanent resistance to shearing, and they have elastic properties only under direct compression: in contrast to solids which have all three elastic moduli, fluids possess a bulk modulus only. Thus, a fluid can be defined unambiguously as a material that deforms continuously and permanently under the application of a shearing stress, no matter how small. This definition does not address the issue of how fast the deformation occurs and as we shall see later this rate is dependent on many factors including the properties of the fluid itself. The inability of fluids to resist shearing stress gives them their characteristic ability to change shape or to flow; their inability to support tension stress is an engineering assumption, but it is a well-justified assumption because such stresses, which depend on intermolecular cohesion, are usually extremely small. Because fluids cannot support shearing stresses, it does not follow that such stresses are nonexistent in fluids. During the flow of real fluids, the shearing stresses assume an important role, and their prediction is a vital part of engineering work. Without flow, however, shearing stresses cannot exist, and compression stress or pressure is the only stress to be considered (Elementary Fluid Mechanics, 7th edition, by R.L. Street, G.Z. Watters and J.K. Vennard, John Wiley & Sons, 1996). So we see that the most obvious property of fluids, their ability to flow and change their shape, is precisely a result of their inability to support shearing stresses. Flow is possible without a shear stress, since differences in pressure will cause a fluid lump to experience a resultant force and produce an acceleration, but when a fluid is deforming its shape, shearing stresses must be present. With this definition of a fluid, we can recognize that certain materials that look like solids are actually fluids. Tar, for example, is sold in barrel-sized chunks which appear at first sight to be the solid phase of the liquid which forms when the tar is heated. However, cold tar is also a fluid. If a brick is placed on top of an open barrel of tar, we will see it very slowly settle into the tar. It will continue to settle as time goes by --- the tar continues to deform under the applied load --- and eventually the brick will be engulfed by the tar. Even then it will continue to move downwards until it reaches the bottom of the barrel. Glass is another substance that appears to be solid, but is actually a fluid. The glass flows under the action of its own weight. If you measure the thickness of a very old glass pane you would find it to be larger at the bottom than at the top of the pane. This deformation happens very slowly because the glass has a very high viscosity, and the results can take centuries to become obvious. Another example: silly putty behaves like an elastic body when subject to rapid stress (it bounces like a ball) but it has fluid behavior under a slowly acting stress (it flows like a fluid under its own weight).
Viscosity
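The fluids excerpt above describes viscosity qualitatively: it is what lets a fluid resist shearing while it flows. As a standard quantitative illustration (Newton's law of viscosity for a simple layered flow; the formula and the example numbers are not from the excerpt), the shear stress is the dynamic viscosity times the velocity gradient:

```python
def shear_stress(viscosity_pa_s: float, velocity_gradient_per_s: float) -> float:
    """Newton's law of viscosity for simple shear flow: tau = mu * du/dy."""
    return viscosity_pa_s * velocity_gradient_per_s

# Water (~0.001 Pa*s) versus a thick oil (~1 Pa*s) sheared at the same
# velocity gradient of 100 1/s: the oil resists the flow 1000 times harder.
print(shear_stress(0.001, 100.0))  # 0.1 Pa
print(shear_stress(1.0, 100.0))    # 100.0 Pa
```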
What is described as an ionized gas with approximately equal numbers of positive and negative charges?
Viscosity Modifiers for Hydraulic Fluids - Lubrizol. Multi-viscosity or multi-grade hydraulic fluids are widely used in mobile equipment operating in all-season use because the ambient temperature varies outside the recommended range of a single-grade conventional fluid. Some examples are tracked excavators, wheel loaders, municipal waste trucks, snow plows, utility company lift trucks, and hydraulic cranes. Multi-grade hydraulic fluids contain a viscosity modifier (VM) and a pour point depressant (PPD) in addition to the hydraulic performance package. The VM raises the fluid's viscosity index (VI) and helps the fluid viscosity remain more constant over a wide temperature range, while the PPD controls wax crystallization, ensuring the lubricant continues to flow at cold temperatures. High-performance multi-grade fluids offer better wear protection, greater operating efficiency and smoother hydraulic system response, year-round, under the toughest conditions. Performance factors are important when choosing a VM for a multi-grade hydraulic fluid: shear stability (resistance to viscosity loss in service), viscosity index increase, water separation ability, and other performance common to high-quality hydraulic fluids.
i don't know
What name is given to the very serious chain of events which can follow the failure of the cooling system in a nuclear reactor?
THE FEARSOME REACTOR MELTDOWN ACCIDENT. Technologies are normally developed by entrepreneurs whose primary goal is making money. If the technology is successful, the entrepreneurs prosper as a new industry develops and thrives. In the process, the environmental impacts of this new technology are the least of their concerns. Only after the public revolts against the pollution inflicted upon it does the issue of the environment come into the picture. At that point an adversarial relationship may develop, with the government serving to protect the public at the expense of the industry. Coal-burning technologies have been an excellent example of this development process. With nuclear energy, everything was to be entirely different. It was conceived and brought into being by the world's greatest scientists. They banded together to obtain government support; the highly publicized letter from Albert Einstein to President Roosevelt in 1939 was a key element in that process. Their motivation was entirely idealistic. None of them thought about making money, and there was no mechanism for them to do so. Their first objective was to save the world from the hideous Hitler, and after World War II it was to protect freedom and democracy through military strength. But from the beginning of the project in the early 1940s, the scientists always felt strongly that this new technology, developed at government expense, would provide great benefits to mankind. Distinguished scientists like Henry Smyth and Glenn Seaborg held high positions of power all the way up until the early 1970s, and through them many of the greatest and most idealistic scientists, like Enrico Fermi, Eugene Wigner, and Hans Bethe, exerted great influence on the course of events. Directly or indirectly, hundreds of scientists were involved in guiding our national nuclear energy program. They set up national laboratories of unprecedented size in the New York (Brookhaven), Chicago (Argonne), and San Francisco (Berkeley, Livermore) areas and at the wartime development sites in Oak Ridge, Tennessee and Los Alamos, New Mexico. They arranged for an unprecedented level of financial support for research in universities where most of the scientists were based. Their objectives went far beyond development of nuclear technology, and included seeking a thorough understanding of the environmental effects. Their approach ran the gamut from the most basic research to the most practical applications. The government's side of this enterprise was run by the Atomic Energy Commission (AEC). The AEC was set up at the behest of scientists to remove nuclear energy development from military control. Prominent scientists served as commissioners, often as chairmen. Its General Advisory Committee, made up of some of the nation's most distinguished scientists, exerted very strong influence. The AEC was monitored by the Congressional Joint Committee on Atomic Energy, which included some of the most powerful senators and representatives. A spirit of close cooperation reigned throughout. The goal of all was to provide humankind with the blessings of nuclear energy as expeditiously as possible. There was a general understanding among all concerned that the scientists had paid their dues — they had given the government's military nuclear weapons, nuclear submarines, and a host of other goodies — and that their new technology was to serve humanity under the guidance of this research enterprise. 
Government also recognized that this enterprise, in the long run, would serve the public interest, and continues to support it to this day. A recent well-publicized element of that program is the multibillion dollar superconducting supercollider accelerator to be constructed in Texas to study the fundamental nature of matter, with no practical applications in sight. Use of nuclear energy to generate electricity was a very important part of this research and development program. In order to promote it, the AEC brought in commercial interests beginning in the mid 1950s, but it kept the national laboratories deeply involved. One of the highest priority activities it assigned to them was investigation of the environmental impacts of nuclear power. For the first time in history, environmental impacts were thoroughly investigated before an industry started. An important part of this effort was to try to "dream up" anything and everything that can possibly go wrong in a nuclear power plant, and investigate the consequences. This was a useful process in deciding on what safety systems to include. But if enough thought and "dreaming up" is devoted to any system, one can always devise a chain of events that can defeat all safety systems and do harm to the workers or the public. Though this is true for every technology, no other technology has ever been subjected to this degree of scrutiny. These efforts to evaluate the risks of nuclear power plant accidents have been a valuable and successful scientific and engineering undertaking. Researchers broke new ground in the science of risk analysis (their developments are now being applied in other technologies). Results of this research have suggested new areas for investigation of nuclear reactor safety questions, and looking into these areas has been productive. This wide-ranging research program developed a variety of accident scenarios, calculated their consequences, and estimated their probability for occurring. However, while these efforts were highly laudable, their effects proved to be disastrous. The public did not understand these risk analyses. Its attention became entirely focussed on excerpts stating that nuclear accidents can kill tens of thousands of people. They never seemed to notice that these reports estimated that such an accident can be expected only once in 10 million years. The public doesn't understand probabilities anyhow. Most people recognize little difference between a risk with a probability of once in 10 thousand years and once in 10 million years. The common impression was that a reactor meltdown accident killing tens of thousands of people would occur every few years. In 1979, a movie called "The China Syndrome," based on this sort of thinking and starring some of Hollywood's top performers, gained widespread popularity. When the Three Mile Island accident followed later that year, it became the news media story of the decade, complete with days of suspense during which the public was led to believe that a horrible disaster could occur at any moment. This combination of events led to very serious problems for the nuclear power industry. As a result of these developments, the word meltdown has become a household word. We will use it here, although it is no longer used by risk analysis scientists. In the mind of the public, it refers to an accident in which all of the fuel becomes so hot that it forms a molten mass which melts its way through the reactor vessel. Let's use the word in that sense. 
The media frequently referred to it as "the ultimate disaster," evoking images of stacks of dead bodies amid a devastated landscape, much like the aftermath of a nuclear bomb attack. On the other hand, the authors of the two principal reports on the Three Mile Island accident1, 2 agree that even if there had been a complete meltdown in that reactor, there very probably would have been essentially no harm to human health and no environmental damage. I know of no technical reports that have claimed otherwise. Moreover, all scientific studies agree that in the great majority of meltdown accidents there would be no detectable effects on human health, immediately or in later years. According to the government estimate, a meltdown would have to occur every week or so somewhere in the United States before nuclear power would be as dangerous as coal burning. Even the Chernobyl accident, which was worse in many ways than any meltdown that can be envisioned for an American reactor, caused no injuries outside the plant. That is not to say that it is impossible to have fatalities caused by a meltdown, but it is estimated that in no more than 1 in a 100 meltdowns could any be obviously related to the accident. Was the Three Mile island Accident a Near Miss to Disaster? One of the principal reasons for the discrepancy between the public's impressions and the technical analyses is that nuclear reactors are sealed inside a very powerfully built structure called the "containment." Under ordinary circumstances the containment would prevent the escape of radioactivity even if the reactor fuel were to melt completely and escape from the reactor vessel. A typical containment3,4,5 is constructed of 3-foot-thick concrete walls heavily reinforced by thick steel rods (see Fig. 1 ) welded into a tight net around which the concrete is poured. In fact, there is so much steel reinforcing that special techniques had to be developed to get the concrete to become distributed around it as it is poured. In addition, the inside of the containment is lined with thick steel plate welded to form a tight chamber which can withstand very high internal pressure, as high as 10 times normal atmospheric pressure. Fig. 1 — Construction worker on the steel-rod-reinforced containment structure of a Westinghouse reactor. Note the thickness and density of the reinforcing rods. The containment provides a broad range of protection for the reactor against external forces, such as a tornado hurling an automobile, a tree, or a house against it, an airplane flying into it, or a large charge of chemical explosive detonated against it. In a meltdown accident, however, the function of the containment is to hold the radioactive material inside. Actually, it need only do this for several hours, because there are systems inside the containment for removing the radioactivity from the atmosphere. One type blows the air through filters in an operation similar in principle to that of household vacuum cleaners. In another, water sprinklers remove the dust from the air. There are charcoal filter beds or chemical sprays for removing certain types of airborne radioactivity. Most radioactive materials, however, would simply get stuck to the walls of the building and the equipment inside, and thereby be removed from the air. Thus, if the containment holds even for several hours, the health consequences of a meltdown would be greatly mitigated. In the Three Mile Island accident, there was no threat to the containment. 
The investigations have therefore concluded that even if there had been a complete meltdown and the molten fuel had escaped from the reactor, the containment would very probably have prevented the escape of any large amount of radioactivity.1,2 In other words, even if the Three Mile Island accident was a "near miss" to a complete meltdown (a highly debatable point), it was definitely not a near miss to a health disaster. The Chernobyl reactor did not have a containment anything like those used in U.S. reactors. Analyses have shown, that if it had used one, virtually no radioactivity would have escaped, there would have been no threat to human health, and the world would probably have never heard about it. Roads to Meltdown Fig. 2 — Cutaway view of a Westinghouse pressurized water reactor. In order to understand the meltdown accident, we must go back to its origins. A nuclear power reactor is basically just a water heater, evolving heat from fission processes in the fuel. This heats the water surrounding the fuel (see Fig. 2 ), and the hot water is used to produce steam. The steam is then employed as in coal- or oil-fired power plants to drive a turbine which turns a generator (sometimes called a "dynamo") which produces electric power (see Fig. 3 ). There are two different types of reactors in widespread use in the United States, pressurized water reactors (PWRs)4 and boiling water reactors (BWRs)5. In the PWR, the heated water is pumped out of the reactor to separate units called "steam generators," where the heat in this water is used to produce steam. In the BWR, the steam is produced directly in the reactor so there is no need for a steam generator. There are features of the nuclear water heater that differentiate it from water heaters in our basements or the coal- or oil-fired boilers that produce steam for various purposes in industrial plants. First, the waste products from the burning do not go up a chimney or settle to the bottom as an ash, but rather are retained inside the fuel. Nuclear fuel does not crumble into ashes or get converted into a gas when burned, as do coal and oil fuels. Second, these waste products are radioactive, which means that they emit radiation. Third, because of their radioactivity, these wastes continue to heat the fuel even after the reactor is shut down6; it is therefore necessary to continue to provide some water to carry this heat away. If, for some reason, no water is available to remove this heat (called a loss-of-coolant accident, LOCA), the fuel will heat up and eventually melt.7 Fuel melting releases the radioactivity sealed inside. Some of this radioactivity would come off as airborne dust that has a potential for damaging public health if it is released into the environment. If there is some water in the reactor but not enough, the situation may be even worse, because steam reacts chemically with the fuel-casing material (an alloy of zirconium) at high temperature (2,700°F), releasing hydrogen, an inflammable and potentially explosive gas, and providing additional heat, thereby accelerating the fuel-melting process. In the Three Mile Island accident,8 the LOCA occurred as a result of a valve failing to close, while the operators were led to believe that it was closed; they had misinterpreted the information available to them from instrument readings. According to one estimate,2 a complete fuel meltdown might have occurred if the water had continued to escape through the open valve for another 30 to 60 minutes. 
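The passage above explains that radioactive decay keeps heating the fuel after the chain reaction stops, which is why cooling water must keep flowing. As a rough, hedged illustration only (the Way-Wigner rule of thumb, which is not given in the text and is accurate only to within a few tens of percent), the decay heat shortly after shutdown can be sketched like this; it comes out on the order of a few percent of full power at first and falls below one percent within a day, which is still megawatts in a large power reactor:

```python
def decay_heat_fraction(seconds_after_shutdown: float,
                        seconds_of_operation: float) -> float:
    """Way-Wigner rule of thumb: decay heat as a fraction of the
    pre-shutdown thermal power. A coarse approximation only."""
    t, t0 = seconds_after_shutdown, seconds_of_operation
    return 0.0622 * (t ** -0.2 - (t0 + t) ** -0.2)

ONE_YEAR = 3.15e7  # assumed seconds of prior full-power operation
for t in (60, 3600, 86400):  # 1 minute, 1 hour, 1 day after shutdown
    print(f"{t:>6} s: {100 * decay_heat_fraction(t, ONE_YEAR):.2f} % of full power")
```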
How close was Three Mile Island to such a complete meltdown? There were many unusual aspects to the instrument readings at the time. Clearly, something very strange was going on. A number of knowledgeable people were trying to figure out what to do. One rightfully suggested closing an auxiliary valve in the pipe through which water was escaping. Within less than a minute after it was closed, a telephone call came in from another expert working at home suggesting that this auxiliary valve be closed,2 so it cannot be claimed that a meltdown was prevented by the luck of one man's recognizing the right thing to do. It is difficult to prove that if neither of the two had thought of closing the valve someone else would have, but there were a lot of people involved in analyzing the information, and there would have been further clues developing before a meltdown would have occurred. Some analyses indicate that there would not have been a complete meltdown even if the valve had not been closed, as there was a small amount of water still being pumped in. In any case, the widely publicized statement that the Three Mile Island accident came within 30 to 60 minutes of a meltdown seemed to be sufficient to scare the public. I often wonder why this is so — when we drive on a high-speed highway, on every curve we are within a few seconds of being killed if nothing is done — that is, if the steering wheel is not turned at the proper time. And don't forget that even if a meltdown had occurred, there very probably would have been no health consequences, since the radioactivity would have been contained. As a result of the Three Mile Island accident9 great improvements have been made in instrumentation, information availability to the operators, and operator training. There is now a requirement that a graduate engineer be on hand at all times. There will probably never again be a LOCA arising from faulty interpretation of instrument readings. With that road to a meltdown now essentially blocked, let us consider what are believed to be the most probable roads now open. 1. LOCA arising from a break in the reactor coolant system. The cooling water system for transferring heat out of the reactor operates at very high temperature and pressure (600°F, and 2,200 pounds per square inch (psi) in a PWR, or 1,200 psi in a BWR). Therefore, if the system should break open, the water would come shooting out as steam in a process picturesquely called blowdown. Such a break could arise from a failure in the seal of the huge pump that brings the water into the reactor, or from a pressure relief valve opened by a brief pressure surge failing to close, but the most likely cause would be a pipe breaking off, especially at a welded joint. A series of safety measures is designed to protect the system from breaking open.10 The first of these is very elaborate quality control on materials and workmanship, far superior to that in any other industry. No effort or expense is spared in choosing the highest quality materials and equipment, nor in requiring the most demanding specifications for safety-related parts of the system. The second measure is a highly elaborate inspection program, including X-ray inspection of every weld, and other inspections with magnetic particle and ultrasonic techniques during construction, followed by periodic ultrasonic and visual inspections after the reactor has gone into operation. 
The visual inspection program, for example, includes removal of insulation from pipes to search for imperfections or signs of cracking. One problem originally discovered by these inspections, "corrosion cracking," is discussed in the last section of this chapter. A third measure is a variety of leak detection systems: ordinarily a large break starts out as a small crack which allows some of the water and the radioactivity it contains to leak out. Leaking water becomes steam as it emerges (its temperature is close to 600°F), increasing the humidity; there are instruments installed to detect this increased humidity. Much of the radioactive material emerging with the leaking water attaches to airborne dust, and there are instruments in place for detecting increased radioactivity in this dust. These systems for detecting increased humidity and increased radioactivity in dust act as sensitive indicators for leaks, therefore serving as early warnings of possible cracks in the system. If all of these measures should fail, a LOCA would occur. The remaining protection against a meltdown would then be the emergency core cooling system to be discussed below. 2. Loss of electric power (station blackout). If there should be no electric power to operate the pumps, the water in the reactor would stay there and get hotter and hotter, building up the pressure until relief valves open allowing the water to escape. In addition, the pump seals require cooling water and would fail if it were not supplied due to station blackout, leading to a LOCA as described in (1). To protect against station blackout, off-site power is normally brought into the plant from two different directions, and several diesel-driven generators are available, any one of which could provide the needed power. These engines are started at frequent intervals to assure their availability when needed, and statistics are kept on their failures to start. In addition, some safety system pumps are driven by steam from the reactor rather than by electric power; normally there would be plenty of this steam available. Control systems for operating pumps and valves are electrically operated, but batteries are available for this purpose. In some plants, at least, being without electric power for more than about 20 minutes would usually lead to a meltdown. 3. Transients with failure of reactor protection systems. While a reactor is operating, changes can and do occur which tend to increase the power level of a reactor. For example, the temperature or pressure of the water or its chemical content may change, causing it to absorb fewer neutrons, leaving more neutrons to strike uranium atoms and thus produce more energy. Reactors have "control rods," simple rods made of a material that strongly absorbs neutrons, which are moved in or out of the reactor core to control the power level. In the above example they would be moved in a short distance to absorb more neutrons and thus compensate for the fact that the water was absorbing fewer neutrons, restoring the power to its original level. An incident of this type is called a "transient." Small transients occur frequently in normal reactor operation, and control rods are frequently moved to adjust the power of the reactor. Occasionally, perhaps once or twice a year, an abnormally large transient occurs which cannot be accommodated by the normal control rods. 
For example, if the electric power demand should suddenly drop drastically as in the case of a transformer or transmission line failure, the reactor is suddenly in a condition where it is producing far too much heat. For such transients, anticipated to occur many times in a reactor's "lifetime," safety systems automatically insert emergency control rods all the way into the reactor at high speed, absorbing so many neutrons that the chain reaction is completely stopped. This process is called scram. It is possible that the scram system might fail when one of these anticipated large transients occur. This is called "anticipated transients without scram," or ATWS. An ATWS event would lead to rapid, intense overheating and loss of water — blown out through pressure relief valves. This loss of water would constitute a LOCA. It would also stop the chain reaction. The protection against occurrence of an ATWS accident is in the use of high-quality materials and components, and in a good program of inspections and tests. If an ATWS does occur, the emergency core cooling system, to be discussed below, would normally prevent a meltdown. 4. Earthquakes and fires. Earthquakes can cause any of the above failures and can cause failures in safety systems which would ordinarily mitigate the effects of these failures. Fires, especially in the switch gear, in the control room, or in cables, can lead to failure of various operating or protection systems. For these reasons, nuclear plants are constructed with several features, like bracing and special pipe supports, to minimize effects of earthquakes. In addition, great care is taken in siting plants to avoid proximity to potentially active geological faults. (Widely circulated stories about plants being built on faults are not true). Some of the best earthquake scientists in the nation are involved in this activity, and regulations and procedures are very elaborate. Any system can be destroyed by a sufficiently powerful earthquake, but in an earthquake strong enough to cause a nuclear reactor meltdown, effects of the meltdown would be a relatively minor addition to the consequences of that earthquake. All of these accident scenarios lead to loss of water. The chain reaction cannot go on without water, so it is shut down, but one must still worry about heat from radioactivity causing the fuel to melt. This can only be prevented if water cooling is very rapidly restored to the reactor core (where the fuel is located). Reactor designs provide this function through the "emergency core cooling system," ECCS. An ECCS consists of several independent systems for pumping water into the reactor, any one of which would provide sufficient water to save the reactor in most cases — in all cases two would do the job.11 More details are given on the ECCS in the Chapter 6 Appendix . Without water cooling, reactor fuel heats up very rapidly, and it would require perhaps 30 seconds before water from the ECCS would flood the reactor vessel to a level at which the core is covered. During this time, the fuel would reach temperatures in the range 1,000-2,500°F. When the water from the ECCS first reaches the hot fuel, it would flash into steam, and at one time there was some concern as to whether this might prevent further water from reaching and cooling the fuel. Some of the first tests of small mock-ups, performed in 1970-1971, indicated that this might be the case. 
The problem thus received very wide publicity.12 This was the situation that brought the opposition group, Union of Concerned Scientists (UCS), into prominence, as they asked for a halt to reactor licensing until the problem was resolved.13 The culmination was a series of hearings held in Washington extending over a year in 1972-1973. As a result of the hearings, changes were introduced in reactor operation as a temporary measure to reduce the performance required of the ECCS if a LOCA should occur, and a crash research program costing hundreds of millions of dollars was instigated to settle the unresolved questions. As more sophisticated experimental tests and computer analyses were developed, it became increasingly clear in the 1975-1978 time period that the ECCS would work. There were over 50 tests, far more realistic and sophisticated than the 1971 tests, and all came out favorably. The question was finally resolved in 1978 when a test reactor specifically designed to test the ECCS (called LOFT, for loss of fluid test) came into operation at the Idaho Nuclear Engineering Laboratory and was put through various types of LOCAs. In all cases, the ECCS performed better than had been estimated.14 For example, in the first LOFT test, the best estimate from the computer analysis was a maximum temperature of 1,376°F, the conservative calculation used for the safety analysis gave 2,018°F, but the highest measured temperature was only 960°F. In the second LOFT test, carried out under rather different conditions, these temperatures were 1,360, 2,205, and 1,185°F, respectively. These examples also demonstrate how conservative estimates rather than "best estimates" are generally used in safety analyses. This is good engineering practice, but it is not usually recognized by those who use such estimates to frighten the public. One type of LOCA in which the ECCS would not prevent a meltdown is a large crack in the bottom of the reactor vessel, since water injected by the ECCS would simply pour out through that crack. This would not occur with pipe breaks since all significant pipes enter the vessel near its top. This problem was intensively investigated by the British as part of their decision to convert from their own type to American-type reactors, and they concluded15 that, in view of the large thicknesses (see Fig. 2 ) and high quality of the materials used, the probability of a large crack in the reactor vessel is so small as to be negligible. There is also an elaborate inspection program to ensure that the high quality of the reactor vessel material is maintained. One potential problem in this regard, "pressurized thermal shock," has received widespread publicity. It is discussed later in this chapter. While every effort is being made to block the roads to meltdown, there is always a possibility of a road being opened by successive failures in the various lines of defense we have described. Or perhaps there is some obscure road to meltdown that no one has ever thought of in spite of the many years of technical effort on this problem. If nuclear power becomes a flourishing industry, there probably will be meltdowns somewhere someday. But if and when they occur, there is still one final line of defense — the containment — which should protect the public from harm in most cases. Let's now consider the reliability of that line of defense. How Secure Is The Containment? In all of the accident scenarios we have considered, water and steam from inside the reactor pours out into the containment building. 
When water is pumped in by the emergency core cooling system, some of it overflows, and when it surrounds the fuel it boils into steam, which goes out through the break into the containment. We thus expect the containment to be filled with steam, with a lot of excess water on the floor. This is true in nearly all potential loss-of-coolant accidents, even if the system does not break open, as was the case in the Three Mile Island accident. In addition, heat is being fed into this water and steam by the radioactivity in the fuel, by chemical reactions of steam with the fuel casing, and by burning of the hydrogen generated in those reactions. The most important threat to the security of the containment is that this heat will raise the pressure of the steam to the point where it will exceed the holding power of the containment walls, about 10 times normal atmospheric pressure. In order to counteract this threat, there are systems for cooling the containment atmosphere.4,16 One such system sprays cool water into the air, a very efficient way of condensing steam; when it exhausts its stored water supply, it picks up water from the containment floor, cools it, and then sprays it into the air inside the containment. Another type of system consists of fans blowing containment air over tubes through which cool water is circulating. There are typically five of these systems, but only one (or in rare cases, two) need be operable in order to assure that the containment is adequately cooled. In most cases, one of the systems is driven by a diesel engine so as to be available in the event of an electric power failure. A more quantitative treatment of the containment cooling problem is given in the Chapter 6 Appendix . Since they are safety related, these systems are subject to elaborate quality control during their manufacture and are frequently inspected and tested, so it seems reasonable to expect at least most of these systems to function properly if an accident should occur. All of them were functional during the Three Mile Island accident, and that is why it has been concluded2 that the containment would have prevented the escape of radioactivity even if there had been a meltdown there. Unfortunately, containment security is not always that favorable. Some of the accident scenarios outlined above also affect the containment heat removal systems. For example, electric power failure would prevent pumps from operating. But more important are accidents in which the containment is bypassed.17 For example, in Fig. 3 we see that the steam generator contains tubes that are directly connected to the reactor, but the steam it generates passes out of the containment to the turbine. If the tubes rupture, there is then a direct path for the radioactivity in the reactor to get into the pipe leading to the turbine. If the reactor is still at its very high operating pressure, this high pressure could be transmitted into that pipe and break it open, releasing radioactivity outside the containment. Two other mechanisms for breaking open the containment have been discussed. One of these is a steam explosion, which has received considerable research attention7 and was publicized in the fictional movie "The China Syndrome." 
The worst situation is in a meltdown where the molten fuel falls into a pool of water at the bottom of the reactor vessel, producing so much steam so suddenly that the top of the reactor vessel would be blown off and hurled upward with so much force that it would break open the top of the containment building. This is a highly unlikely scenario. In "The China Syndrome" it is implied that a sufficiently powerful steam explosion can occur when the molten fuel melts its way into the ground and comes into contact with groundwater. Actually, only a tiny fraction of the molten fuel would be coming into first contact with groundwater at any given time, so steam would be produced gradually rather than as a single explosive release. A fictional movie need not be realistic, of course, but it is important for the audience to recognize that point. The movie also makes an issue of groundwater contamination following a meltdown accident. In actuality, were molten fuel to suddenly come into contact with groundwater, the latter would flash into steam, which would build up a pressure to keep the rest of the groundwater away. There would thus be little contact until the molten fuel cooled and solidified many days later. It would then be in the form of a glassy mass that would be highly insoluble in water, so there would be relatively little groundwater contamination. If that were judged to be a problem, there would be plenty of time to construct barriers to permanently isolate the radioactivity from groundwater thereafter. It is difficult to imagine a situation in which there would be any adverse health effects from groundwater contamination. The other possible mechanism for breaking the containment, a hydrogen explosion, has received substantial research attention18 and achieved notoriety in the Three Mile Island accident. The consensus of the research seems to be that even if all the hydrogen that could be generated in an accident were to explode at once, the forces would not be powerful enough to break most containments, including the one at Three Mile Island.18 Moreover, in nearly all circumstances the hydrogen would be produced gradually, and there are many sources of sparks (e.g., electric motors) which would cause it to burn in a series of fires and/or small explosions not nearly large enough to threaten the containment. Three of the U.S. pressurized water reactor (PWR) containments store large volumes of ice inside to reduce steam pressure in an accident. Since the presence of ice is a failproof method for cooling the surroundings and thereby avoiding high steam pressure, it was not considered necessary to build the containment walls so powerfully or to make the containment volumes so large. These containments are more vulnerable to a hydrogen explosion,18 and therefore they are fitted with numerous gadgets for generating sparks to be extra certain that hydrogen ignites before large quantities can accumulate.19 The boiling water reactor (BWR) containments are much smaller in volume than those of PWRs; hence, they are more vulnerable to pressures generated by a hydrogen explosion. Some of these BWRs are operated with an inert gas filling the containment.20 Since there is no oxygen present, hydrogen cannot combine with oxygen to explode. 
Other BWRs, however, are at substantial risk of containment failure due to hydrogen explosions, although the failure mode is such that most of the radioactivity would not escape.17 It would exit through a pool of water which would dissolve and thus retain the radioactive materials. Of course, explosions inside the containment, even if they do not crack the walls, can damage equipment, and this can cause problems. For example, if explosions disabled all of the heat removal systems, the containment might be broken by steam pressure. However, the probability for disabling many separate systems would be very small. The systems we have described in this section and the previous one for averting a catastrophic accident constitute a "defense in depth," which is the guiding principle in designs for reactor safety. If the quality assurance fails, the inspections ordinarily provide safety. If the inspection programs fail, the leak detection saves the day. If that fails, the ECCS protects the system. And if the ECCS fails, the containment averts damage to the public. Moreover, each of these systems is itself a defense in depth; for example, if one of the ECCS water injection systems fails, another can do its job, and if both fail a third can provide sufficient water. One sometimes hears statements to the effect that reactors are safe if everything goes right, but if any piece of equipment fails or if an operator makes a mistake, disaster will result. This statement is completely WRONG. In reactor design it is assumed that all sorts of things will go wrong — pipes will break, valves will stick, motors will fail, operators will push the wrong button, and so on, but there is "defense in depth" to cover these malfunctions or series of successive malfunctions. Of course the depth of the defense is not infinite. If each line of defense would crumble, one after the other, there could be a disaster. But as the depth of the defense is increased, the probability for this to happen is rapidly decreased. For example, if each line of defense has a chance of failure equal to that of drawing the ace of spades out of a deck of shuffled cards — one chance in 52, the probability for five successive lines of defense to fail is like the chance of drawing the ace of spades successively out of five decks of well-shuffled cards — one chance in 52 x 52 x 52 x 52 x 52, or one chance in 380 million! There have been cases where one of the lines of defense has failed in nuclear power plants. Utilities have been heavily fined by the NRC for such things as leaving a valve closed and thereby compromising the effectiveness of one of the emergency systems. These incidents are often given publicity as failures that could lead to a meltdown. But the media coverage rarely bothers to point out that there are several lines of defense remaining unbreached between these events and a meltdown — not to mention that there is still a major line of defense, the containment, remaining even if a meltdown occurs. The Probabilities In considering the hazards of a reactor meltdown accident, once again we find ourselves involved in a game of chance governed by the laws of probability. By setting up additional lines of defense, or by improving the ones we now have, we can reduce the probability of a major accident, but we can never reduce it to zero. 
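To make that card-deck arithmetic concrete, here is a minimal sketch in Python (the 1-in-52 failure probability per line of defense is simply the analogy's number, not a real reliability figure, and the lines are assumed to fail independently, just as the analogy assumes):

# Chance that several independent lines of defense all fail,
# each assumed to fail with probability 1/52 (the ace-of-spades analogy).
p_fail = 1 / 52
for lines in range(1, 6):
    p_all_fail = p_fail ** lines
    print(f"{lines} line(s) of defense: 1 chance in {round(1 / p_all_fail):,}")
# With 5 lines the result is 1 chance in 380,204,032 -- about 1 in 380 million.
# The probability shrinks rapidly as lines are added, but never reaches zero.

Each added line of defense multiplies the overall failure probability by another small factor; that is the quantitative content of "defense in depth."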
This should not necessarily be discomforting since we already are engaged in innumerable other games of chance with disastrous consequences if we lose — natural phenomena like earthquakes and disease epidemics, and manmade threats like toxic chemical releases and dam failures, to name a few. In fact, participating in this new game of chance may save us from participating in others brought on by alternative actions, and it may therefore reduce our total risk: building a nuclear power plant may remove the need for a hydroelectric dam whose failure can cause a disaster, or for a coal-burning power plant whose air pollution might be disastrous. The important question is: what is the probability of a disastrous meltdown accident? Several studies have been undertaken to answer this question. The best known of these was sponsored by the NRC and directed by Dr. Norman Rasmussen, an MIT professor.7 It extended over several years, involved many dozens of scientists and engineers, and cost over $4 million before its final report was issued in 1975. The report bore document designation "WASH-1400" and was titled "Reactor Safety Study" (RSS). It was a probabilistic risk analysis (PRA) based on a method known as "fault tree analysis," which had been developed to evaluate safety problems in the aerospace industry. It is described briefly in the Chapter 6 Appendix . The history of the RSS does not stop in 1975. The Union of Concerned Scientists (UCS) published a critique of it21 in 1977 with its own probabilities, and we will quote some of its conclusions. An independent review panel chaired by Professor Harold Lewis of the University of California was commissioned by the NRC and reported22 in 1978. The principal finding of the Lewis panel was that the uncertainties in the probabilities given by the RSS were larger than originally stated, but that there is no reason to believe that the probabilities were either too large or too small. The Lewis panel also took exception to the 12-page Executive Summary issued with the RSS. The NRC accepted the Lewis panel report in 1979, so in our references to the RSS we will not use either the Executive Summary or the uncertainty estimates. In the late 1970s there were similar RSSs carried out in West Germany and in Sweden, using similar methodology and obtaining similar results. During this period, the RSS was pooh-poohed by opponents of nuclear power and was interpreted by the media and hence by the public as controversial. Nevertheless, the NRC continued to encourage development and improvement of PRA methodology, and PRAs were carried out for over a dozen U.S. reactors at a typical cost of $5 million each (the RSS had analyzed only two reactors, one PWR and one BWR). With all of this activity and effort, new ideas and procedures were developed, and older ones were shown to be wanting and were abandoned. During this period important new scientific and technical developments surfaced, and these were incorporated. For example, there were intensive studies of the chemical form of released radioactivity,23 which had only been guessed at in the RSS, and greatly improved estimates were obtained on the strength of containments. A new appreciation of the importance of earthquakes and fires arose,17 and extensive research was devoted to analysis of their impact. 
While the RSS was carried out by one team in 2 years at a time when PRA methodology was in its infancy and when there had been very little operating experience with nuclear reactors, these newer studies were done by many different teams over periods of more like 10 years, when PRA methodology was much more mature and there was many times as much operational experience, including several "accidents" of various magnitude, by far the worst of which was the one at Three Mile Island. One interesting new development has been abandonment of the word meltdown, largely replaced by core damage. In the early thinking about reactor accidents, the idea became prevalent that if any appreciable fuel melting would occur, the problem would continue to escalate until all of the fuel became a molten mass with an unstoppable internal heat source (the radioactivity). Hence it would melt its way through the reactor vessel and anything else that got in its way — down through the Earth and all the way to China was the picturesque exaggeration that led to the name "China Syndrome." More detailed studies showed that these ideas were grossly oversimplified, and the Three Mile Island accident was a clear counterexample — most of the fuel melted, but it did not even get out of the reactor vessel. It is even difficult to answer the question "Was the Three Mile Island accident a meltdown?" because that word is not clearly defined. "Core damage," on the other hand, allows discussion of the wide variety of circumstances that are now believed to be possible. It also allows consideration of the several "precursors" to core damage that have already been experienced in reactor operation. By noting what further failures could have caused these incidents to escalate into core damage and estimating the probabilities for these further failures, one can arrive at an independent estimate of the probability for a core damage accident. The results of the new PRAs are discussed in some detail in the Chapter 6 Appendix . There are many differences between these and the RSS, but when all is said and done, the bottom lines turn out to be quite similar.17 It is therefore not unreasonable to use the RSS results. There is a big advantage in doing so since the RSS gives many more details that are useful in the discussion. We therefore base the following discussion on the RSS. The RSS estimates that a reactor meltdown may be expected about once every 20,000 years of reactor operation; that is , if there were 100 reactors, there would be a meltdown once in 200 years.7 The report by the principal organization opposed to nuclear power, Union of Concerned Scientists (UCS),21 estimates one meltdown for every 2,000 years of reactor operation. In U.S.-type reactors, there have been over 2,000 years of commercial reactor operation worldwide plus almost 4,000 years of U.S. Navy reactor operation all without a meltdown (in the sense they are using the word). If the UCS estimate is correct, we should have expected three meltdowns by now, whereas according to the RSS, there is a 30% chance that we would have had one. We now turn to the consequences of a meltdown. Since it gives more detail, we will quote the results of the RSS here; the UCS viewpoint can be roughly interpreted as multiplying all consequences by a factor of 10. In most meltdowns the containment is expected to maintain its integrity for a long time, so the number of fatalities should be zero. 
In 1 out of 5 meltdowns there would be over 1,000 deaths, in 1 out of 100 there would be over 10,000 deaths, and in 1 out of 100,000 meltdowns, we would approach 50,000 deaths (the number we get each year from motor vehicle accidents). Considering all types, we expect an average of 400 fatalities per meltdown; the UCS estimate is 5,000. Since air pollution from coal burning is estimated to be causing 30,000 deaths each year in the United States (see Chapter 3 ), for nuclear power to be as dangerous as coal burning there would have to be 75 meltdowns per year (30,000 / 400 = 75), or 1 meltdown every 5 days somewhere in the United States, according to the RSS; according to UCS, there would have to be a meltdown every 2 months. Since there has never been a single meltdown, clearly we cannot expect one nearly that often. It is often argued that the deaths from air pollution are not very alarming because they are not detectable, and we cannot associate any particular deaths with coal burning. But the same is true of the vast majority of deaths from nuclear reactor accidents. They would materialize only as slight increases of the cancer rate in a large population. Even in the worst accident considered in the RSS, expected only once in 100,000 meltdowns, the 45,000 cancer deaths would occur among a population of about 10 million, with each individual's risk being increased by 0.5%. Typically, this would increase a person's risk of dying from cancer from 20.0% to 20.5%. This risk varies much more than that from state to state — 17.5% in Colorado and New Mexico, 19% in Kentucky, Tennessee, and Texas, 22% in New York, and 24% in Connecticut and Rhode Island — and these variations are rarely, if ever, noticed. It is thus reasonable to assume that the additional cancer risks, even to those involved in this most serious meltdown accident considered in the RSS, would never be noticed. If we are interested in detectable deaths that can be attributed to an accident, we must limit our consideration to acute radiation sickness, which can be induced by very high radiation doses, about a half million millirems in one day resulting in death within a month. This is a rather rare disease: there were three deaths due to it in the early years among workers in U.S. government nuclear programs, but there have been none for over 25 years now. According to the RSS, there would be no detectable deaths in 98 out of 100 meltdowns, there would be over 100 such deaths in one out of 500 meltdowns, over 1,000 in one out of 5,000 meltdowns, and in one out of 100,000 meltdowns there would be about 3,500 detectable fatalities. The largest number of detectable fatalities to date from an energy-related incident was an air pollution episode in London in 1952 in which 3,500 deaths directly attributable to the pollution occurred within a few days.24 Thus, with regard to detectable fatalities, the equivalent of the worst nuclear accident considered in the RSS — expected once in 100,000 meltdowns — has already occurred with coal burning. But the nuclear accidents we have been discussing are hypothetical, and if we want to consider hypothetical accidents, very high consequences are not difficult to find. For example there are at least two hydroelectric dams in the United States whose sudden rupture would kill over 200,000 people.7 There are hypothetical explosions of liquefied natural gas that can wipe out a whole city. 
If we get into possibilities of incubating or spreading germs, or of subtle chemical effects, we can easily imagine even more devastating scenarios arising due to air pollution from coal or oil burning plants. It is sometimes said that nuclear accidents may be extremely rare, but when they occur they are so devastating as to make the whole technology unacceptable. From the above comparisons it is clear that this argument holds no water. For another perspective, we embrace a technology that kills 50,000 Americans every year. Every one of these deaths is clearly detectable, and that technology seriously injures more than 10 times that many. I refer here to motor vehicles. Even if we had a meltdown every 10 years, a nuclear power accident would kill that many only once in a million years. The Worst Possible Accident One subject we have not discussed here is the "worst possible nuclear accident," because there is no such thing. In any field of endeavor, it is easy to concoct a possible accident scenario that is worse than anything that has been previously proposed, although it will be of lower probability. One can imagine a gasoline spill causing a fire that would wipe out a whole city, killing most of its inhabitants. It might require a lot of improbable circumstances combining together, like water lines being frozen to prevent effective fire fighting, a traffic jam aggravated by street construction or traffic accidents limiting access to fire fighters, some substandard gas lines which the heat from the fire caused to leak, a high wind frequently shifting to spread the fire in all directions, a strong atmospheric temperature inversion after the whole city has become engulfed in flame to keep the smoke close to the ground, a lot of bridges and tunnels closed for various reasons, eliminating escape routes, some errors in advising the public, and so forth. Each of these situations is improbable, so a combination of many of them occurring in sequence is highly improbable, but it is certainly not impossible. If anyone thinks that is the worst possible consequence of a gasoline spill, consider the possibility of the fire being spread by glowing embers to other cities which were left without protection because their firefighters were off assisting the first city; or of a disease epidemic spawned by unsanitary conditions left by the conflagration spreading over the country; or of communications foul-ups and misunderstandings caused by the fire leading to an exchange of nuclear weapon strikes. There is virtually no limit to the damage that is possible from a gasoline spill. But as the damage envisioned increases, the number of improbable circumstances required increases, so the probability for the eventuality becomes smaller and smaller. There is no such thing as the "worst possible accident," and any consideration of what terrible accidents are possible without simultaneously considering their low probability is a ridiculous exercise that can lead to completely deceptive conclusions. The same reasoning applies to nuclear reactor accidents. Situations causing any number of deaths are possible, but the greater the consequences, the lower is the probability. The worst accident the RSS considered would cause about 50,000 deaths, with a probability of one occurrence in a billion years of reactor operation. 
A person's risk of being a victim of such an accident is 20,000 times less than the risk of being killed by lightning, and 1,000 times less than the risk of death from an airplane crashing into his or her house.7 But this once-in-a-billion-year accident is practically the only nuclear reactor accident ever discussed in the media. When it is discussed, its probability is hardly ever mentioned, and many people, including Helen Caldicott, who wrote a book on the subject, imply that it's the consequence of an average meltdown rather than of 1 out of 100,000 meltdowns. I have frequently been told that the probability doesn't matter — the very fact that such an accident is possible makes nuclear power unacceptable. According to that way of thinking, we have shown that the use of gasoline is not acceptable, and almost any human activity can similarly be shown to be unacceptable. If probability didn't matter, we would all die tomorrow from any one of thousands of dangers we live with constantly. Land Contamination Another aspect of a reactor meltdown accident that has been widely publicized is land contamination. The most common media version is that it would contaminate an area the size of the state of Pennsylvania, 45,000 square miles. Of course this depends on one's definition of "contaminate." It could be said that the whole world is contaminated, because there is natural radioactivity everywhere; or that the state of Colorado is contaminated because the natural radiation there is twice as high as in most other states. However, the Federal Radiation Council in the United States and similar official agencies in other countries have adopted criteria for the upper level of contamination that is acceptable before people must be evacuated. This level corresponds roughly to doubling or tripling the average lifetime dose that would be received from natural radiation and medical X-rays, or 2 to 5 times as much extra radiation as would be received by the average American from moving to Colorado. It is still 4 to 10 times less than the natural radiation received by people living in some areas of India and Brazil. Studies of these people have given no evidence of health problems from their radiation exposure.25 With this definition, the worst meltdown accident considered7 in the RSS — about 1% of all meltdowns might be this bad — would contaminate an area of 3,000 square miles, the area of a circle with a 30-mile radius. About 90% of this area could be cleaned up by simply using fire hoses on built-up areas, and plowing the open ground, but people would have to be relocated from the remaining 10%, an area equal to that of a circle with a 10-mile radius. In assessing the impacts of this land contamination, I believe the appropriate measure is the monetary cost; the cost of decontaminating, relocating people, compensating for lost property and lost working time, buying up and destroying contaminated farm products, and so on. Some might argue that it is unfair to concentrate on money and ignore the human problems in relocation, but that is part of reality. Forced relocation is a common practice in building hydroelectric dams (which flood large land areas), highway construction, slum clearance projects, and so forth, and in these contexts the monetary cost and advantages to be gained are always the prime consideration in deciding on whether to undertake the project. 
In most meltdowns, the cost would be less than $50 million (all costs are in 1975 dollars); in 1 out of 10 meltdowns, it would exceed $300 million; in 1 out of 100 meltdowns, it would exceed $2 billion; and once in 10,000 meltdowns, it would be as much as $15 billion. Over all cases, the average cost would be about $100 million. Generating electricity by coal burning is estimated26 to do about $600 million per year in property damage, destroying clothing, eroding building materials, and so forth. Thus it would require six meltdowns per year — one every 2 months — for the monetary cost to the public from reactor accidents to equal that from coal burning. Clearly, health impacts are more important than property damage in determining the risks of generating electricity, but the comparison between nuclear power and coal comes out much the same by either measure. Public Misunderstanding In this chapter we have shown that there have been serious misunderstandings of reactor meltdown accidents in the public mind. In most such accidents there would be no harm to the public, and the average meltdown would cause 400 fatalities and do $100 million in off-site damage. Even in the worst 0.001% of accidents, the increased cancer risk to those involved is much less than that of moving from other parts of the country to New England. This is a far cry from the public image of many thousands of dead bodies lying around in a vast area of devastation, and it certainly is not "the ultimate disaster." Only a tiny fraction of the public recognizes that for nuclear accidents to be as dangerous as coal burning, we would have to experience a meltdown every 5 days. The consequences of the misunderstandings have been tragic. Surely no one believes that we will have a meltdown every five days, or even every few months. We have never even had a large-scale evacuation, which would be the first step if there were any apparent danger to the public. Mass evacuations following other types of accidents are quite common. Chemical spills lead to the evacuation of hundreds of people several times per year in the United States. In 1979, as a result of an accident involving a railroad tank car carrying a dangerous chemical, there was a mass evacuation from a suburb of Toronto, affecting over 100,000 people for several days. Nevertheless, because of the misunderstandings attending nuclear accidents, utilities have continued to build coal-fired rather than nuclear plants. Every time this is done, thousands of Americans are condemned to a premature death. Nonsafety Issues Any new technology is bound to encounter numerous technical problems that must be ironed out, and there has never been any reason to believe that nuclear technology should be an exception in this regard. However, contrary to the situation in other industries, technical problems in the nuclear industry often received widespread media exposure, causing them to be interpreted as safety issues. Nearly any technical problem can indeed become a safety issue if it is consistently ignored. If an automobile runs out of lubricating oil, it could stall on a railroad crossing, which is clearly a safety problem. But the oil level is easy to check, there is a warning light indicating loss of oil pressure, and if the oil did run out, ominous grinding noises would alert you before the car would stall. Loss of lube oil is therefore not ordinarily considered to be a safety problem.
It can be inconvenient, costly to fix, and may cause expensive damage to the engine, but it surely ranks far down on any list of safety hazards in automobiles. However, if the problem were not so familiar to a large segment of the population, the publicizing of one such case could easily scare people with stories about the possibility of automobiles stalling on railroad crossings or in other precarious situations due to loss of lube oil. Analogous situations have been reported as safety issues for the nuclear industry. Let us review a few of them here. Pressurized Thermal Shock27 The thick steel vessel housing the reactor is normally very hot because of the high temperature of the water inside (600°F). If, due to some malfunction, the inside is suddenly filled with cool water, the vessel experiences what is called "thermal shock." If it is then subjected to high pressure — producing pressurized thermal shock (PTS) — there is an increased tendency for the vessel to crack, rather than simply to stretch, if a small crack or imperfection already exists. The importance of PTS problems depends on quantitative details — how much of a thermal shock followed by how much pressure causes how much of an increased tendency to crack. Under ordinary conditions these quantitative details indicate that there is nothing to be concerned about. However, just as radiation can damage biological tissue, it can damage steel by knocking electrons and atoms out of their normal locations. This radiation damage to the reactor vessel aggravates its susceptibility to PTS. Scientists recognized this problem over 20 years ago and they found a simple remedy for it — reducing the quantity of copper in the steel alloy from which the vessel is fabricated. This remedy was implemented in 1971, and all reactor vessels fabricated since that time have had no problems with PTS. Reactor vessels fabricated before 1971 are kept under periodic observation to keep track of the problem. For many years, the NRC, burdened by other more urgent problems, put off considering PTS by adopting a very conservative screening criterion to indicate when further action on it would be undertaken. In 1981, time for action according to that criterion was only 1 or 2 years away in some reactors; hence, the NRC began to look into the problem in more detail by requesting information from various power plants. Misinterpreting these requests, a prominent newspaper ran a page-one story28 headlined "Steel Turned Brittle by Radiation Called a Peril at 13 Nuclear Plants," broadly implying that serious safety problems were immediately at issue. Opponents of nuclear power soon began trumpeting that message. They claimed that reactor vessels would crumble like glass under PTS, although no such behavior has ever been observed in the numerous laboratory tests of PTS. In 1981-1982, the NRC and the nuclear industry delved into the PTS problem rather deeply. In 1982, the NRC came up with new conclusions and regulations. When the radiation damage reaches the stage where action is required, several remedies are available, although not all are applicable in all situations. One way to postpone the problem is to redistribute the fuel in the reactor so as to reduce the radiation striking the walls of the vessel — this is now being done in several plants. One remedy for PTS is to keep the water storage tanks heated to reduce the thermal shock that would be caused by sudden water injections. 
Another option is to change operating procedures to reduce the suddenness with which this water can be introduced. The most complete remedy, which is also the most time consuming and expensive, is to heat the reactor vessel to a very high temperature (850°F) to anneal out the radiation damage; this would, in fact, make the vessel as good as new. The NRC standard is a conservative one. It is based on the assumption that there is a small crack or flaw in the vessel, although these vessels are very carefully inspected and no small cracks or flaws have been found. The vessel is typically 8 inches thick, so the outside is exposed to considerably less radiation and thermal shock than the inside; therefore, even if there should be cracking inside, it would probably not extend all the way through the thickness of the vessel and there would consequently be no danger from it. As long as the problem is recognized, is under constant surveillance, has remedies, and will not be allowed to reach the danger point, it seems fair to classify pressurized thermal shock as a technical problem rather than as a safety issue. It should therefore receive the attention of scientists and engineers, but there is no reason for the public to preoccupy itself with it. Stress Corrosion Cracking of Pipes29 There have been a number of situations in which pipes in boiling water reactors have been found to have cracks. Since a pipe cracking open is a widely heralded potential cause for a LOCA, this problem has received extensive media coverage as a potential threat, especially when the first such crack was discovered in 1975. However, researchers have established that this type of cracking develops very slowly and is easily detected by ultrasonic tests in its very initial stages. If not, it leads to slow leaks which are readily detected and repaired. Stress corrosion cracking is therefore not a safety issue. On the other hand, this problem has caused expensive shutdowns for repairs, and has therefore been an important problem for power plant owners. They have consequently invested tens of millions of dollars on research to overcome it. The first fruit of this research was to gain an understanding of the problem: welding stainless steel pipe joints caused some of the chromium that makes that material corrosion resistant to migrate away, reducing its local concentration from the normal 17% to below the 12% minimum for resistance to corrosion by excess oxygen in the water. Moreover, once this migration of chromium is started by the welding, the heat of the reactor water continues the process. A combination of this corrosion with stress on the material was found to cause the cracking. Once the problem was understood, researchers rapidly found solutions. A new alloy with less carbon and more nitrogen, called nuclear-grade stainless steel, was developed which virtually eliminates the problem in new pipe. Investigators found that in the old type pipe, the chromium migration could be reversed by heating the welded joint in a furnace to 1,950°F, or by putting a lining of weld metal inside the pipe before the outside is welded. In addition to avoiding the chromium migration, methods have been developed to relieve the stress by running cooling water inside the pipe while the joint is being welded, or by heating the outside of the pipe while cooling the inside after the welding is completed. This last method is applicable without removing installed pipes. All of these methods are now being applied in operating plants. 
Moreover, researchers are developing methods for reducing the free oxygen content in the water, the principal chemical agent responsible for the corrosion. All three factors, chromium migration, mechanical stress, and a corrosive chemical agent, are necessary to cause the cracking, and all three of them have been reduced by these measures. An automated ultrasonic testing system has been developed to predict which welds are most likely to fail and to estimate their remaining service life. All this progress has put stress corrosion cracking of pipes well under control. Steam Generator Tube Leaks30 Fig. 3 — Diagram (highly simplified) of pressurized water reactor power plant. Water is heated to 600°F by energy released in fission reactions in the reactor (it is prevented from boiling by maintaining high pressure), and pumped into the steam generator, where its heat is transferred to a secondary water system. The water in the latter is thereby boiled to make steam which drives the turbine. The steam is condensed in the condenser by cooling it with water brought in from some outside source; this keeps the pressure beyond the turbine very low, for otherwise there would be no tendency for the steam to rush through the turbine and thereby cause it to turn. The water formed by condensation is pumped back into the steam generator to be reused. A diagram of a pressurized water reactor (PWR) is shown in Fig. 3. The water in the reactor is kept under sufficiently high pressure that it does not boil and become steam. Rather it is pumped through the tubes of "steam generators" where it transfers its heat to the water from a separate "secondary" system, causing the latter to boil into steam. This has some advantages (and some disadvantages) over the simpler system of generating the steam by boiling the water in the reactor, as in the BWR. One of the advantages is that the water from the reactor, which contains radioactive contaminants, never gets into the other areas of the plant (turbine, condenser, etc.), so less attention to radioactivity control is needed in those areas. However, leaks in steam generator tubes do allow radioactivity to reach those areas, and since they have minimal radioactivity control, it can easily escape from there into the environment. A large fraction of American PWRs have experienced problems with steam generator tube leaks. There are many thousands of these tubes in a steam generator; therefore leaking tubes can simply be plugged up at both ends without affecting operation. However, when the number of plugged tubes exceeds about 20% of the total, as it has in some plants, the electrical generating capacity is significantly reduced. This represents a costly loss of revenue to the utility. In at least three cases, utilities have decided to replace their steam generators, a rather expensive alternative requiring many months of shutdown. From the safety viewpoint, the worst accident worthy of consideration in this area is a sudden complete rupture of a few tubes. Such an accident might be expected once every several years. This is what happened at the Ginna plant near Rochester, New York, in January 1982. That accident generated a great deal of publicity, but the maximum exposure at any off-site point was 0.5 mrem,31 less than the average American receives from natural sources every day. Since there were no people staying all day at such points, no member of the public received even that much exposure.
The total of the exposures to the whole population in the area was less than 100 mrem, which gives only 1 chance in 80,000 that there will ever be a single death resulting. On the other hand, it has been a costly problem for utilities, and a great deal of research has been devoted to solving it. Eight separate classes of failures have been identified — denting, erosion-corrosion, fatigue, fretting, intergranular attack, pitting, stress corrosion cracking, and wastage. Researchers have developed a number of different methods for reducing these problems and for avoiding them in new plants. They have also developed new methods for detecting, locating, evaluating, and repairing leaks. The NRC keeps a close watch on these problems to be certain that public safety is not compromised, in spite of the very small potential of steam generator leaks to cause radiation exposure to the public. It requires frequent testing for leaks, and has strict limits on the amount of leakage that can be tolerated before the reactor is shut down for repairs. It also maintains research programs to achieve improved understanding, evaluations, and predictability of future problems. The industry itself is also doing a great deal of research on the problem. Chapter 6 Appendix Probabilistic Risk Analysis Probabilistic risk analysis, widely known as PRA, is the science of estimating the probability that some event will occur. The type of PRA used in analyzing reactor accidents, called "fault tree analysis," begins with identifying all "routes" leading to a meltdown. Each route consists of a succession of failures, like pipes cracking, pumps breaking down, valves sticking, operators pushing the wrong button, and so on. Since a given route will not lead to meltdown unless each of these failures occurs in turn, the probability of meltdown by that route is obtained by multiplying the probabilities for each individual failure. For example, if one particular route to meltdown consists of a pipe cracking badly — expected once in 1,000 years of operation — followed by a pump failing to operate — expected once in 100 trials — followed by a valve sticking closed — expected once in 200 attempts to open it, the chance that each of these three failures will occur successively in a given year is 1/1,000 x 1/100 x 1/200, or about one chance in 20 million.
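A minimal sketch of that multiplication, in Python, using the three illustrative failure rates just quoted (they are textbook-style examples, not data for any particular plant):

# Fault tree analysis: the probability of a given route to core damage
# is the product of the probabilities of its successive failures.
p_pipe_cracks  = 1 / 1000   # bad pipe crack: expected once in 1,000 years of operation
p_pump_fails   = 1 / 100    # pump fails to operate: expected once in 100 trials
p_valve_sticks = 1 / 200    # valve sticks closed: expected once in 200 attempts

p_route = p_pipe_cracks * p_pump_fails * p_valve_sticks
print(f"This route: about 1 chance in {1 / p_route:,.0f} per year of operation")
# -> 1 chance in 20,000,000 per year for this single route.

The overall core-damage probability is then estimated by identifying all such routes and summing their contributions.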
Meltdown
Which electronic device magnifies the strength of a signal?
Fukushima Nuclear Accident – a simple and accurate explanation Along with reliable sources such as the IAEA and WNN updates, there is an incredible amount of misinformation and hyperbole flying around the internet and media right now about the Fukushima nuclear reactor situation. In the BNC post Discussion Thread – Japanese nuclear reactors and the 11 March 2011 earthquake (and in the many comments that attend the top post), a lot of technical detail is provided, as well as regular updates. But what about a layman’s summary? How do most people get a grasp on what is happening, why, and what the consequences will be? Below I reproduce a summary on the situation prepared by Dr Josef Oehmen, a research scientist at MIT, in Boston. He is a PhD Scientist, whose father has extensive experience in Germany’s nuclear industry. This was first posted by Jason Morgan earlier this evening, and he has kindly allowed me to reproduce it here. I think it is very important that this information be widely understood. Please also take the time to read this: An informed public is key to acceptance of nuclear energy — it was never more relevant than now. ——————————— NOTE: Content Updated 15 March, see: http://mitnse.com/ We will have to cover some fundamentals, before we get into what is going on. Construction of the Fukushima nuclear power plants The plants at Fukushima are Boiling Water Reactors (BWR for short). A BWR produces electricity by boiling water, and spinning a turbine with that steam. The nuclear fuel heats water, the water boils and creates steam, the steam then drives turbines that create the electricity, and the steam is then cooled and condensed back to water, and the water returns to be heated by the nuclear fuel. The reactor operates at about 285 °C. The nuclear fuel is uranium oxide. Uranium oxide is a ceramic with a very high melting point of about 2800 °C. The fuel is manufactured in pellets (cylinders that are about 1 cm tall and 1 cm in diameter). These pellets are then put into a long tube made of Zircaloy (an alloy of zirconium) with a failure temperature of 1200 °C (caused by the auto-catalytic oxidation of the zirconium by water), and sealed tight. This tube is called a fuel rod. These fuel rods are then put together to form assemblies, of which several hundred make up the reactor core. The solid fuel pellet (a ceramic oxide matrix) is the first barrier that retains many of the radioactive fission products produced by the fission process. The Zircaloy casing is the second barrier to release that separates the radioactive fuel from the rest of the reactor. The core is then placed in the pressure vessel. The pressure vessel is a thick steel vessel that operates at a pressure of about 7 MPa (~1000 psi), and is designed to withstand the high pressures that may occur during an accident. The pressure vessel is the third barrier to radioactive material release. The entire primary loop of the nuclear reactor – the pressure vessel, pipes, and pumps that contain the coolant (water) – is housed in the containment structure. This structure is the fourth barrier to radioactive material release. The containment structure is a hermetically (air tight) sealed, very thick structure made of steel and concrete. This structure is designed, built and tested for one single purpose: To contain, indefinitely, a complete core meltdown.
To aid in this purpose, a large, thick concrete structure is poured around the containment structure and is referred to as the secondary containment. Both the main containment structure and the secondary containment structure are housed in the reactor building. The reactor building is an outer shell that is supposed to keep the weather out, but nothing in. (this is the part that was damaged in the explosions, but more to that later). Fundamentals of nuclear reactions The uranium fuel generates heat by neutron-induced nuclear fission. Uranium atoms are split into lighter atoms (aka fission products). This process generates heat and more neutrons (one of the particles that forms an atom). When one of these neutrons hits another uranium atom, that atom can split, generating more neutrons and so on. That is called the nuclear chain reaction. During normal, full-power operation, the neutron population in a core is stable (remains the same) and the reactor is in a critical state. It is worth mentioning at this point that the nuclear fuel in a reactor can never cause a nuclear explosion like a nuclear bomb. At Chernobyl, the explosion was caused by excessive pressure buildup, hydrogen explosion and rupture of all structures, propelling molten core material into the environment.  Note that Chernobyl did not have a containment structure as a barrier to the environment. Why that did not and will not happen in Japan, is discussed further below. In order to control the nuclear chain reaction, the reactor operators use control rods. The control rods are made of boron which absorbs neutrons.  During normal operation in a BWR, the control rods are used to maintain the chain reaction at a critical state. The control rods are also used to shut the reactor down from 100% power to about 7% power (residual or decay heat). The residual heat is caused from the radioactive decay of fission products.  Radioactive decay is the process by which the fission products  stabilize themselves by emitting energy in the form of small particles (alpha, beta, gamma, neutron, etc.).  There is a multitude of fission products that are produced in a reactor, including cesium and iodine.  This residual heat decreases over time after the reactor is shutdown, and must be removed by cooling systems to prevent the fuel rod from overheating and failing as a barrier to radioactive release. Maintaining enough cooling to remove the decay heat in the reactor is the main challenge in the affected reactors in Japan right now. It is important to note that many of these fission products decay (produce heat) extremely quickly, and become harmless by the time you spell “R-A-D-I-O-N-U-C-L-I-D-E.”  Others decay more slowly, like some cesium, iodine, strontium, and argon. What happened at Fukushima (as of March 12, 2011) The following is a summary of the main facts. The earthquake that hit Japan was several times more powerful than the worst earthquake the nuclear power plant was built for (the Richter scale works logarithmically; for example the difference between an 8.2 and the 8.9 that happened is 5 times, not 0.7). When the earthquake hit, the nuclear reactors all automatically shutdown. Within seconds after the earthquake started, the control rods had been inserted into the core and the nuclear chain reaction stopped. At this point, the cooling system has to carry away the residual heat, about 7% of the full power heat load under normal operating conditions. The earthquake destroyed the external power supply of the nuclear reactor. 
This is a challenging accident for a nuclear power plant, and is referred to as a “loss of offsite power.” The reactor and its backup systems are designed to handle this type of accident by including backup power systems to keep the coolant pumps working. Furthermore, since the power plant had been shut down, it could not produce any electricity by itself. For the first hour, the first set of multiple emergency diesel power generators started and provided the electricity that was needed. However, when the tsunami arrived (a very rare and larger than anticipated tsunami), it flooded the diesel generators, causing them to fail. One of the fundamental tenets of nuclear power plant design is “Defense in Depth.” This approach leads engineers to design a plant that can withstand severe catastrophes, even when several systems fail. A large tsunami that disables all the diesel generators at once is such a scenario, but the tsunami of March 11th was beyond all expectations. To mitigate such an event, engineers designed an extra line of defense by putting everything into the containment structure (see above), which is designed to contain everything inside the structure. When the diesel generators failed after the tsunami, the reactor operators switched to emergency battery power. The batteries were designed as one of the backup systems to provide power for cooling the core for 8 hours. And they did. After 8 hours, the batteries ran out, and the residual heat could not be carried away any more. At this point the plant operators began to follow emergency procedures that are in place for a “loss of cooling event.” These are procedural steps following the “Defense in Depth” approach. All of this, however shocking it seems to us, is part of the day-to-day training you go through as an operator. At this time people started talking about the possibility of core meltdown, because if cooling cannot be restored, the core will eventually melt (after several days), and will likely be contained in the containment. Note that the term “meltdown” has a vague definition. “Fuel failure” is a better term to describe the failure of the fuel rod barrier (Zircaloy). This will occur before the fuel melts, and results from mechanical, chemical, or thermal failures (too much pressure, too much oxidation, or too hot). However, melting was a long way from happening, and at this time the primary goal was to manage the core while it was heating up, while ensuring that the fuel cladding remained intact and operational for as long as possible. Because cooling the core is a priority, the reactor has a number of independent and diverse cooling systems (the reactor water cleanup system, the decay heat removal system, the reactor core isolation cooling system, the standby liquid cooling system, and others that make up the emergency core cooling system). Which one(s) failed when or did not fail is not clear at this point in time. Since the operators lost most of their cooling capabilities due to the loss of power, they had to use whatever cooling system capacity they had to get rid of as much heat as possible. But as long as the heat production exceeds the heat removal capacity, the pressure starts increasing as more water boils into steam. The priority now is to maintain the integrity of the fuel rods by keeping the temperature below 1200°C, as well as keeping the pressure at a manageable level. In order to maintain the pressure of the system at a manageable level, steam (and other gases present in the reactor) have to be released from time to time.
This process is important during an accident so the pressure does not exceed what the components can handle, so the reactor pressure vessel and the containment structure are designed with several pressure relief valves. So to protect the integrity of the vessel and containment, the operators started venting steam from time to time to control the pressure. As mentioned previously, steam and other gases are vented.  Some of these gases are radioactive fission products, but they exist in small quantities. Therefore, when the operators started venting the system, some radioactive gases were released to the environment in a controlled manner (ie in small quantities through filters and scrubbers). While some of these gases are radioactive, they did not pose a significant risk to public safety to even the workers on site. This procedure is justified as its consequences are very low, especially when compared to the potential consequences of not venting and risking the containment structures’ integrity. During this time, mobile generators were transported to the site and some power was restored.  However, more water was boiling off and being vented than was being added to the reactor, thus decreasing the cooling ability of the remaining cooling systems. At some stage during this venting process, the water level may have dropped below the top of the fuel rods.  Regardless, the temperature of some of the fuel rod cladding exceeded 1200 °C, initiating a reaction between the Zircaloy and water. This oxidizing reaction produces hydrogen gas, which mixes with the gas-steam mixture being vented.  This is a known and anticipated process, but the amount of hydrogen gas produced was unknown because the operators didn’t know the exact temperature of the fuel rods or the water level. Since hydrogen gas is extremely combustible, when enough hydrogen gas is mixed with air, it reacts with oxygen. If there is enough hydrogen gas, it will react rapidly, producing an explosion. At some point during the venting process enough hydrogen gas built up inside the containment (there is no air in the containment), so when it was vented to the air an explosion occurred. The explosion took place outside of the containment, but inside and around the reactor building (which has no safety function).  Note that a subsequent and similar explosion occurred at the Unit 3 reactor. This explosion destroyed the top and some of the sides of the reactor building, but did not damage the containment structure or the pressure vessel. While this was not an anticipated event, it happened outside the containment and did not pose a risk to the plant’s safety structures. Since some of the fuel rod cladding exceeded 1200 °C, some fuel damage occurred. The nuclear material itself was still intact, but the surrounding Zircaloy shell had started failing. At this time, some of the radioactive fission products (cesium, iodine, etc.) started to mix with the water and steam. It was reported that a small amount of cesium and iodine was measured in the steam that was released into the atmosphere. Since the reactor’s cooling capability was limited, and the water inventory in the reactor was decreasing, engineers decided to inject sea water (mixed with boric acid – a neutron absorber) to ensure the rods remain covered with water.  Although the reactor had been shut down, boric acid is added as a conservative measure to ensure the reactor stays shut down.  
Boric acid is also capable of trapping some of the remaining iodine in the water so that it cannot escape, however this trapping is not the primary function of the boric acid. The water used in the cooling system is purified, demineralized water. The reason to use pure water is to limit the corrosion potential of the coolant water during normal operation. Injecting seawater will require more cleanup after the event, but provided cooling at the time. This process decreased the temperature of the fuel rods to a non-damaging level. Because the reactor had been shut down a long time ago, the decay heat had decreased to a significantly lower level, so the pressure in the plant stabilized, and venting was no longer required. ***UPDATE – 3/14 8:15 pm EST*** Units 1 and 3 are currently in a stable condition according to TEPCO press releases, but the extent of the fuel damage is unknown.  That said, radiation levels at the Fukushima plant have fallen to 231 micro sieverts (23.1 millirem) as of 2:30 pm March 14th (local time). ***UPDATE – 3/14 10:55 pm EST*** The details about what happened at the Unit 2 reactor are still being determined.  The post on what is happening at the Unit 2 reactor contains more up-to-date information.  Radiation levels have increased, but to what level remains unknown.
i don't know
What was the name of the unit of heat now replaced by the joule?
Units of Heat: BTU, Calorie and Joule. The most common units for heat are the BTU (Btu), or British Thermal Unit, also known as a "heat unit" in the United States; the calorie; and the joule. BTU (British Thermal Unit): The unit of heat in the imperial system, the BTU, is the amount of heat required to raise the temperature of one pound of water through 1°F (58.5°F–59.5°F) at sea level (30 inches of mercury). 1 Btu (British thermal unit) = 1055.06 J = 107.6 kp·m = 2.931 × 10⁻⁴ kWh = 0.252 kcal = 778.16 ft·lbf = 1.055 × 10¹⁰ ergs = 252 cal = 0.293 watt-hours. An item using one kilowatt-hour of electricity generates 3412 Btu. One hundred thousand (10⁵) Btu are called a therm. Calorie: A calorie is commonly defined as the amount of heat required to raise the temperature of one gram of water 1°C. The kilogram calorie (also called the large calorie, food calorie, or Calorie with a capital C, as opposed to the gram calorie with a lowercase c) is the amount of energy required to raise the temperature of one kilogram of water by one degree Celsius. 1 calorie (cal) = 1/860 international watt-hour (Wh). 1 kcal = 4186.8 J = 426.9 kp·m = 1.163 × 10⁻³ kWh = 3088 ft·lbf = 3.9683 Btu = 1000 cal. Be aware that alternative definitions exist; in short: the thermochemical calorie, the International Steam Table calorie (1929), the International Steam Table calorie (1956), and the IUNS calorie (Committee on Nomenclature of the International Union of Nutritional Sciences). The calorie is outdated and commonly replaced by the SI unit, the joule. Joule: The unit of heat in the SI system, the joule, is a unit of energy equal to the work done when a force of one newton acts through a distance of one meter. 4.184 joules of heat energy (or one calorie) are required to raise the temperature of a unit weight (1 g) of water from 0°C to 1°C, or from 32°F to 33.8°F. 1 J (joule) = 0.1020 kp·m = 2.778 × 10⁻⁷ kWh = 2.389 × 10⁻⁴ kcal = 0.7376 ft·lbf = 1 kg·m²/s² = 1 watt-second = 1 N·m = 9.478 × 10⁻⁴ Btu.
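The conversion factors quoted above are easy to fold into a small helper. This is a minimal sketch; the function and unit names are ours, but the numerical factors (1 Btu = 1055.06 J, 1 kcal = 4186.8 J, 1 kWh = 3.6 × 10⁶ J) are the ones listed in the text.

```python
# Minimal sketch of heat-unit conversions using the factors quoted above.

JOULES_PER = {
    "J": 1.0,
    "Btu": 1055.06,
    "kcal": 4186.8,
    "kWh": 3.6e6,
}

def convert(value, from_unit, to_unit):
    """Convert a heat/energy value between J, Btu, kcal and kWh."""
    return value * JOULES_PER[from_unit] / JOULES_PER[to_unit]

print(f"1 kWh  = {convert(1, 'kWh', 'Btu'):.0f} Btu")    # ~3412, as quoted
print(f"1 kcal = {convert(1, 'kcal', 'Btu'):.4f} Btu")   # ~3.9683, as quoted
print(f"1 Btu  = {convert(1, 'Btu', 'kcal'):.3f} kcal")  # ~0.252, as quoted
```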
Calorie
What does c represent in the equation E = mc²?
History of the Calorie in Nutrition. Department of Foods and Nutrition, University of Georgia, Athens, GA 30602. Abstract: The calorie was not a unit of heat in the original metric system. Some histories state that a defined Calorie (modern kcal) originated with Favre and Silbermann in 1852 or Mayer in 1848. However, Nicholas Clément introduced Calories in lectures on heat engines that were given in Paris between 1819 and 1824. The Calorie was already defined in Bescherelle's 1845 Dictionnaire National. In 1863, the word entered the English language through translation of Ganot's popular French physics text, which defined a Calorie as the heat needed to raise the temperature of 1 kg of water from 0 to 1°C. Berthelot distinguished between g- and kg-calories by 1879, and Raymond used the kcal in a discussion of human energy needs in an 1894 medical physiology text. The capitalized Calorie as used to indicate 1 kcal on U.S. food labels derives from Atwater's 1887 article on food energy in Century magazine and Farmers' Bulletin 23 in 1894. Formal recognition began in 1896 when the g-calorie was defined as a secondary unit of energy in the cm-g-s measurement system. The thermal calorie was not fully defined until the 20th century, by which time the nutritional Calorie was embedded in U.S. popular culture and nutritional policy. Introduction: The thermal units of g-calorie (4.184 J) and kg-calorie (kcal or capitalized Calorie; 4.184 kJ; Calorie is capitalized when the original text refers to a kg-calorie; the lower case word denotes a g-calorie) are so familiar in nutrition that one tends not to ask when they were first defined or how they entered common usage. The hypothesis of this paper is that the French word, calorie, had been coined and defined by 1824 and was originally used in the sense of a 0–1°C kcal ( 1 – 4 ). The Dictionnaire de l'Académie Française lists “CALORIE n. f. XIXe siècle.” If so, this contradicts the statement that “Lavoisier served on the committee that developed the cm-g-s measurement system (CGS) which defined the “calorie” used today” ( 5 ). There is no documentation that the original metric commission defined a unit of heat. If Lavoisier's work is too early, other attributions of 1848–53 ( 3 , 6 , 7 ) are clearly too late, because the Calorie had already been defined in the 1845 edition of Bescherelle's dictionary of the French language ( 8 ). Prior histories of the Calorie in nutrition do not discuss the origin of the calorie or kcal as units of heat ( 9 , 10 ). This article will show that use of the Calorie as a defined unit of heat developed concurrently with the metric system but not as a recognized metric unit and dates to work in chemistry or engineering no later than 1819–1824. A timeline of some key events is shown in Figure 1. [Figure 1: Timeline of 19th century events in the history of the metric system and calorie. BAAS, British Association for Advancement of Science.] In addition to its technical usage by scientists, the word Calorie entered the popular vocabulary across Europe and the United States by the late 19th century. For example, the Oxford English dictionary cites E. Atkinson's 1863 translation of Adolphe Ganot's French physics text ( 11 ) as the first occurrence of the Calorie in English.
However, this physics text is not the source for the Calorie of nutrition. This article will outline the early usage and spread of the calorie as a heat unit from France to other countries during the 19th century. Origin and usage of the calorie in France The word calorie as a unit of heat seems to have been coined sometime between 1787 ( 1 ) and 1824 ( 12 ). Lavoisier studied specific heats of water and other materials and conducted some of the earliest experiments involving direct and indirect calorimetry ( 13 ). He named the calorimeter (calorimètre) by 1789 ( 14 ). Although Lavoisier was credited with coining “oxygen” and many new chemical terms, he did not include the calorie on his list of new words. Lavoisier's papers refer to calorique (caloric) and chaleur (heat) but not to the calorie as a thermal unit. At that time, “caloric” was regarded as a substance rather than a unit of heat. Lavoisier served on the 1791 Commission on Weights and Measures of the French Academy of Science and helped define the kg ( 5 , 15 ). A strict definition of the calorie would require metric units and Lavoisier used a quantity called the livre (about a pound) rather than the kg ( 16 ). He was executed by French revolutionaries in 1794 before the metric system was officially adopted in 1799 ( 13 ). The original metric system of 1795–9 defined the base units for length, area, volume, capacity, weight (not mass), and money along with various prefixes ( 13 ). It was intended as a means of simplifying trade and did not include derived scientific units for energy, electricity, or magnetism. These secondary units were not defined until after the Metric Convention of 1875, when the Bureau International des Poids et Mesures (BIPM) was formed. Several histories attribute the first usage of the calorie to sources that are too late. For example, Taton ( 7 ) indicates that Favre and Silbermann coined the term in 1852 ( 17 ). Others ( 6 , 18 ) also credit Favre for defining the calorie. However, the original publication states that the calorie was a well-known unit of physics (digital copies of L.N. Bescherelle's Dictionnaire National, Adolphe Ganot's Traite Elementaire de Physiques, and the original text from Favre and Silbermann's article on thermochemistry are available at the Gallica internet site: http://gallica.bnf.fr/ark:/12148/bpt6k34775c.table .): Nous répétons que l'unité que nous avons adoptée est celle adoptée par tous les physiciens (physicists), c'est-à-dire la quantité de chaleur nécessaire pour élever 1 gramme d'eau de 1 degré, et que l'on appelle unité de chaleur ou calorie. It is hard to assign priority to workers who state, “We repeat that the unit that we adopted is that adopted by all the physicists.” This is confirmed in the 1855 version (4th edition) of Ganot's basic physics text. This text contains a modern definition ( 11 ): …c'est pourquoi on est convenu de prendre pour unité de chaleur, ou calorie, la quantité de chaleur nécessaire pour élever de zéro à 1 degré la temperature d'un kilogramme d'eau. Because Ganot defines the calorie (not capitalized) as a 0–1°C kcal in relation to the heating of water without providing a reference, it seems evident that the calorie was well known. A French etymological dictionary lists the first occurrence of calorie as the 1842–3 volumes of Bescherelle's Dictionnaire national ( 19 , 20 ). The listing in the 1845 edition defines the calorie as Phys. Quantité de chaleur nécessaire pour élever un kilogr. d'eau un degré du thermomètre centigrade. 
The definition is similar to Ganot's more precise usage. Furthermore, the Dictionnaire historique de la langue française states that the word calorie was coined about 1819–24 and usage had become widespread by 1845 ( 12 ). Thus, French sources do not indicate that Lavoisier defined the calorie. By 1824, the word calorie was being used as a unit of heat ( 1 , 3 , 4 , 12 ). In 1819, Nicholas Clément began giving lectures on the theory of heat and steam engines at the Conservatoire des Arts et Metiers in Paris ( 21 ). Clément was regarded as an industrial chemist or chemical engineer. Among the students was Sadi Carnot ( 3 ) and class notes are available that were taken by L.B. Francoeur and J.M. Baudot ( 2 ). Both Clément and Carnot subscribed to the caloric hypothesis, which stated that heat behaved as a material substance and that its total amount was always conserved ( 2 ). The notes show that Clément defined a large (grande) and small (petite) calorie by 1823–4. A definition is written in Baudot's notes of 23 December 1824: La petite Calorie est la quantité de chaleur qu'il faut pour élever d'un degré la température d'un K.me d'eau. Assuming that the mass abbreviation refers to a kg, this defines a modern kilocalorie. Clément sometimes referred to a “grande calorie” as the heat needed to melt a kg of ice, which is ∼334 kJ/kg. This is different usage from the modern kcal, but the passage indicates that the calorie was known to engineers by this time ( 12 ). Medard ( 4 ) suggests that Clément may have coined the word calorie around 1820 but agrees that this is not certain, because Clément did not publish the definition. Other than the hand-written course notes, the first published use of the calorie was probably in 1825 in an anonymous description of Clément's course in a local journal called Producteur ( 4 ). Carnot used Clément's definition of heat units but not the name calorie in Reflexions On The Motive Power of Fire ( 3 , 22 ). Other engineers began using the calorie by 1829 ( 4 ), but the word apparently did not enter physics texts until much later. It is true that some scientists used heat units that would now be called g-calories in the 1850s. However, it was not until 1877–9 that Marcellin Berthelot stated that the “large” calorie equalled 1000 “small” 0–1°C g-calories and distinguished between them by capitalizing the abbreviation for the large Calorie ( 4 , 23 ). Although Medard states that the kcal was not introduced until 1935 ( 4 ), it was used in the context of daily energy expenditure in a U.S. medical physiology text from 1894 ( 24 ). A 14.5°C kcal was defined in German law in 1924 ( 3 ). It is not certain who first named the kcal. The metric system In France From 1795–9, a committee of the Académie des Sciences sought to define the meter as a unit of length related to an arc of the earth's circumference. The calorie could not be strictly defined without knowing the mass of water contained in a liter, which was based on a 10-cm cube. It is germane that the law of 1795 defined the g as “the absolute weight of a volume of pure water contained in a cube one-hundredth of a meter on a side at the temperature of melting ice” ( 25 ). This is important because the g-calorie eventually became a practical unit of heat, and the specific heat and mass of a volume of water depend on its temperature ( 26 ). Because the kg, g, m, and cm were all defined from the start, it is ambiguous whether the original metric system should be called m-kg-s or cm-g-s. 
However, the kg was considered to be the base unit of weight in the 1795 metric system. This is because the charge to the commission was to define standards for weights and measures used in commerce, and scientific concerns were secondary. Note that the original definition specified a weight rather than a mass, meaning that the original kg was defined as kg-force, similarly to a pound. Not until 1904 was the newton defined as a unit of force so that the kg became a mass unit. Because the g was not considered to be a base unit in the metric system ( 27 ) when the Calorie was defined, there was no need to add the “kilo-” prefix. The definition of a Calorie as the heat needed to raise the temperature of 1 kg of water by 1°C was the original usage and the kcal as 103 g-calories was not introduced until sometime between 1877 and 1894 ( 23 , 24 ). Indeed, the thermal g-calorie was not defined as 4.184 joules until 1902 and it was a 17°C (not 15°C) unit ( 28 ). The original metric system was short-lived as a French national standard. In 1812, Napoleon Bonaparte issued a decree to establish a “usual system” that changed metric values to conform to earlier weights and measures. Not until 1840 was a law passed to reinstate a metric system based on scientific standards ( 25 ). During this same period, European interest in thermodynamics, fuels, and electricity was high. Indeed, one of the reasons that Joule began his experiments on heat was a desire to reduce the cost of running equipment in the family brewery ( 29 ). Metric advocates worldwide recognized the simplicity of basing units of measure on the decimal system, and the metric system was promoted at world expositions in London (1851) and Paris (1867). Also in 1867, the International Geodetic Association formed a committee to investigate the use of the metric system in scientific measurements. The system was made legal for commerce in Britain (1864) and the United States (1866) and became compulsory in Germany (1868) ( 15 ). The International Metric Convention of 1875 gave scientists impetus to define accurate standards in electricity, magnetism, and thermodynamics. This led to the introduction of the CGS in 1896 in which the erg, dyne, and joule were defined. In 1918, the newton was added as a unit of force to the m-kg-s system and the joule was defined as 1 N-m of work. The ability to define the joule unambiguously in base units (amp-s) led to its adoption as the SI (Système International) unit of energy ( 30 ). During the 1930s, the BIPM convened the Consultative Committee on Thermometry to clarify standards of heat and W.H. Keesom served as president. In addition to reviewing the history of the calorie, Keesom summarized a proposal that the calorie should equal 1/860 watt-hours or 3600/860 joules (4.186 J) ( 3 ). From then on, any secondary thermal unit was to be defined relative to the joule rather than to the heating of water at any temperature. The 1948 General Conference also recommended discarding the calorie, because it cannot be derived directly from basic units. In 1954, the SI base units were adopted, and in 1970, the Committee on Nomenclature of the American Institute of Nutrition advised that the kilocalorie should be replaced by the kilojoule (kJ) in scientific publications ( 30 , 31 ). The Calorie in German Physiology The Calorie probably entered U.S. English because W.O. Atwater learned the term during studies in Germany, and not because it was defined in a newly translated physics text. 
Justus Liebig did not mention the calorie as such in his 1842 book on animal chemistry ( 32 ) or his paper on energy production from foodstuffs ( 33 ). However, he published J.R. Mayer's first scientific paper ( 34 ), which defined a mechanical equivalent of heat. Mayer self-published two intriguing papers that dealt partly with the efficiency of energy metabolism, which he estimated to be 15–20% ( 35 , 36 ). It was in the context of relating physical work against gravity (Fallkraft or potential energy; Fig. 2 ) to the energy supplied by foods that Mayer defined a kg-cal in 1846–8 ( 3 , 4 , 36 ), in a passage translated by Lindsay ( 37 ). [The quotation itself is not reproduced in this extract.] Conclusions: This review raises several questions that should be answerable. First, Marcellin Berthelot defined a large calorie as 1000 g-cal ( 23 ) but did not call it a kilocalorie. A review of publications in engineering, chemistry, and physics between 1877 and 1894 should disclose a source for the name. Second, one does not know why Voit adopted the g-calorie in 1866 or why Atwater decided to use the Calorie (kcal) after visiting the Munich laboratory. It is likely that both heat units were being used in research publications and engineering handbooks. A review of French and German sources would clarify this question. Third, Nicholas Clément may have been first to coin and define the calorie, but it is possible that the term was already being used by other engineers and chemists. New evidence could falsify the hypothesis that Clément was first. In a larger context, the calorie made its way from France into international scientific literature during the 19th century, and important work that is not mentioned here was being done in many other countries. The Calorie began to enter popular American vocabulary after Atwater explained the unit in his 1887 article in Century magazine. The most important avenue was probably the USDA Farmers' Bulletins ( 61 , 62 ), which provided the first U.S. food databases to be used in dietetics. Then, as now, American audiences were interested in managing weight, and the Calorie was soon introduced in articles and books. For example, Dr. Lulu Hunt Peters' best-selling “Diet and Health with Key to the Calories” specifically cited Farmers' Bulletin 142 as a source of information ( 65 ). Eventually, the Calorie was adopted for the nutrition facts panels on U.S. food labels. At present, there does not seem to be a movement by policy makers in the US to replace the Calorie with the kJ on nutrition information panels. Acknowledgments: J.L.H. thanks Dr. Patrick Reidenbaugh (University of Georgia Libraries), the staff of the BIPM, the Bibliothèque Nationale de France, and the Library of Congress for bibliographic assistance. Thanks also to Pat Naughtin of Geelong, Australia, for correspondence about the history of the Calorie. Manuscript received: August 21, 2006.
i don't know
What is a cylindrical coil of wire in which a magnetic field is created when an electric current is passed though it?
electromagnetism - Why does electricity flowing through a copper coil generate a magnetic field? - Physics Stack Exchange. Question: Can someone please explain to me why electricity flowing through a copper coil generates a magnetic field, or where I could possibly find that information? Are there other materials that produce a magnetic field when a current is run through them in a different shape? Thanks! Accepted answer: An electric current (a flow of electric charge) has an associated magnetic field regardless of the material (or space) the flow occurs in. This is a fundamental part of electromagnetism, rooted in observation, and quantified in Ampere's Law. I wish to emphasize that this phenomenon is considered to be fundamental in nature, which means there cannot be a "more" fundamental explanation (for, if there were, electromagnetism would not be fundamental). Another answer: This is a very basic question; there is a lot more to digest in electromagnetism than just this, all because of Maxwell's equations. Electric and magnetic fields are interdependent: one field requires, or produces, the other. The phenomenon is called electromagnetism. For example, consider an electric charge at rest (static): it produces an electric field. But when the charge is in motion (a current), a magnetic field is produced perpendicular to its direction of propagation. If you pass current through a straight wire, a magnetic field forms around the wire in circular rings (it can deflect a compass or attract metal filings nearby). On the other hand, if you pass current through a circular, spring-like coil called a solenoid, a magnetic field is produced along its axis. Simply keep in mind that a magnetic field is produced by moving charges (a current); this is an observed phenomenon, described by Maxwell's equations. As for your last question: yes, there are a lot of materials (mostly metals) that produce a magnetic field when current flows through them, and the shape as such is not what matters; what matters is whether there is a change in the fields.
Solenoid
Whose 'unified field theory' tried to explain the four fundamental forces in terms of a single, unified force?
Magnetic Effect of Electric Current | CBSE 10th Grade Study Materials. Solenoid: A long cylindrical coil of insulated copper wire with a large number of circular turns is called a solenoid. When an electric current is passed through a solenoid, it produces a magnetic field around it; its magnetic field pattern is shown in the figure. When a current is passed through the solenoid, the current in each circular loop has the same direction, so their magnetic effects add up, producing a strong magnetic field. Inside the solenoid, the magnetic field is almost uniform and parallel to the axis of the solenoid. The magnetic field produced by a solenoid is very similar to that of a bar magnet: one end of the solenoid has N-polarity while the other end has S-polarity. The polarity of either end (face) of the coil can be determined by using the clock rule. For all practical purposes, the magnetic field of a solenoid and that of a bar magnet can be taken as identical. Factors on which the strength of the magnetic field produced by a current-carrying solenoid depends (illustrated numerically in the sketch below): Number of turns in the solenoid: the larger the number of turns, the stronger the magnetic field produced. Strength of current: the larger the current passed through the solenoid, the stronger the magnetic field produced. Nature of the core material: by winding the coil over a soft iron cylinder, called a core, the magnetic field can be increased several thousand times.
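The three factors listed above correspond directly to the ideal-solenoid formula B = μ0 · μr · (N/L) · I. That formula is a standard physics result rather than something quoted in the study material, and the turn count, length, current, and relative permeability below are assumed example values; a minimal sketch:

```python
# Minimal sketch: field inside a long solenoid, B = mu0 * mu_r * (N / L) * I.
# Standard ideal-solenoid formula; all numbers below are assumed examples.
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def solenoid_field_tesla(turns, length_m, current_a, relative_permeability=1.0):
    return relative_permeability * MU0 * (turns / length_m) * current_a

b_air = solenoid_field_tesla(500, 0.25, 2.0)                                  # air core
b_iron = solenoid_field_tesla(500, 0.25, 2.0, relative_permeability=1000.0)   # assumed mu_r
print(f"air core : {b_air * 1e3:.2f} mT")
print(f"iron core: {b_iron:.2f} T")
```

Doubling the turns or the current doubles B, and swapping in a soft-iron core multiplies the field by the core's relative permeability, which matches the qualitative behaviour described above.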
i don't know
What diverges rays of light, if it is concave?
Ray Diagrams - Concave Mirrors. The theme of this unit has been that we see an object because light from the object travels to our eyes as we sight along a line at the object. Similarly, we see an image of an object because light from the object reflects off a mirror and travels to our eyes as we sight at the image location of the object. From these two basic premises, we have defined the image location as the location in space from which light appears to diverge. Ray diagrams have been a valuable tool for determining the path taken by light from the object to the mirror to our eyes. In this section of Lesson 3, we will investigate the method for drawing ray diagrams for objects placed at various locations in front of a concave mirror. To draw these diagrams, we will have to recall the two rules of reflection for concave mirrors: Any incident ray traveling parallel to the principal axis on the way to the mirror will pass through the focal point upon reflection. Any incident ray passing through the focal point on the way to the mirror will travel parallel to the principal axis upon reflection. Earlier in this lesson, the following diagram was shown to illustrate the path of light from an object to a mirror to an eye. In this diagram five incident rays are drawn along with their corresponding reflected rays. Each ray intersects at the image location and then diverges to the eye of an observer. Every observer would observe the same image location, and every light ray would follow the law of reflection. Yet only two of these rays would be needed to determine the image location, since it only requires two rays to find the intersection point. Of the five incident rays drawn, two of them correspond to the incident rays described by our two rules of reflection for concave mirrors. Because they are the easiest and most predictable pair of rays to draw, these will be the two rays used through the remainder of this lesson. Step-by-Step Method for Drawing Ray Diagrams: The method for drawing ray diagrams for a concave mirror is described below. The method is applied to the task of drawing a ray diagram for an object located beyond the center of curvature (C) of a concave mirror. Yet the same method works for drawing a ray diagram for any object location. 1. Pick a point on the top of the object and draw two incident rays traveling towards the mirror. Using a straight edge, accurately draw one ray so that it passes exactly through the focal point on the way to the mirror. Draw the second ray such that it travels exactly parallel to the principal axis. Place arrowheads upon the rays to indicate their direction of travel. 2. Once these incident rays strike the mirror, reflect them according to the two rules of reflection for concave mirrors. The ray that passes through the focal point on the way to the mirror will reflect and travel parallel to the principal axis. Use a straight edge to accurately draw its path. The ray that traveled parallel to the principal axis on the way to the mirror will reflect and travel through the focal point. Place arrowheads upon the rays to indicate their direction of travel. Extend the rays past their point of intersection. 3. Mark the image of the top of the object. The image point of the top of the object is the point where the two reflected rays intersect.
If you were to draw a third pair of incident and reflected rays, then the third reflected ray would also pass through this point. This is merely the point where all light from the top of the object would intersect upon reflecting off the mirror. Of course, the rest of the object has an image as well, and it can be found by applying the same three steps to another chosen point. (See note below.) 4. Repeat the process for the bottom of the object. The goal of a ray diagram is to determine the location, size, orientation, and type of image that is formed by the concave mirror. Typically, this requires determining where the image of the upper and lower extreme of the object is located and then tracing the entire image. After completing the first three steps, only the image location of the top extreme of the object has been found. Thus, the process must be repeated for the point on the bottom of the object. If the bottom of the object lies upon the principal axis (as it does in this example), then the image of this point will also lie upon the principal axis and be the same distance from the mirror as the image of the top of the object. At this point the entire image can be filled in. Some students have difficulty understanding how the entire image of an object can be deduced once a single point on the image has been determined. If the object is a vertically aligned object (such as the arrow object used in the example below), then the process is easy. The image is merely a vertical line. In theory, it would be necessary to pick each point on the object and draw a separate ray diagram to determine the location of the image of that point. That would require a lot of ray diagrams, as illustrated below. Fortunately, a shortcut exists. If the object is a vertical line, then the image is also a vertical line. For our purposes, we will only deal with the simpler situations in which the object is a vertical line that has its bottom located upon the principal axis. For such simplified situations, the image is a vertical line with the lower extremity located upon the principal axis. The ray diagram above illustrates that when the object is located at a position beyond the center of curvature, the image is located at a position between the center of curvature and the focal point. Furthermore, the image is inverted, reduced in size (smaller than the object), and real. This is the type of information that we wish to obtain from a ray diagram. These characteristics of the image will be discussed in more detail in the next section of Lesson 3. Once the method of drawing ray diagrams is practiced a couple of times, it becomes as natural as breathing. Each diagram yields specific information about the image. The two diagrams below show how to determine image location, size, orientation and type for situations in which the object is located at the center of curvature and when the object is located between the center of curvature and the focal point. It should be noted that the process of constructing a ray diagram is the same regardless of where the object is located. While the result of the ray diagram (image location, size, orientation, and type) is different, the same two rays are always drawn. The two rules of reflection are applied in order to determine the location from which all reflected rays appear to diverge (which for real images, is also the location where the reflected rays intersect).
In the three cases described above - the case of the object being located beyond C, the case of the object being located at C, and the case of the object being located between C and F - light rays are converging to a point after reflecting off the mirror. In such cases, a real image is formed. As discussed previously, a real image is formed whenever reflected light passes through the image location. While plane mirrors always produce virtual images, concave mirrors are capable of producing both real and virtual images. As shown above, real images are produced when the object is located a distance greater than one focal length from the mirror. A virtual image is formed if the object is located less than one focal length from the concave mirror. To see why this is so, a ray diagram can be used. Ray Diagram for an Object Located at the Focal Point: Thus far we have seen via ray diagrams that a real image is produced when an object is located more than one focal length from a concave mirror; and a virtual image is formed when an object is located less than one focal length from a concave mirror (i.e., in front of F). But what happens when the object is located at F? That is, what type of image is formed when the object is located exactly one focal length from a concave mirror? Of course a ray diagram is always one tool to help find the answer to such a question. However, when a ray diagram is used for this case, an immediate difficulty is encountered. The incident ray that begins from the top extremity of the object and passes through the focal point does not meet the mirror. Thus, a different incident ray must be used in order to determine the intersection point of all reflected rays. Any incident light ray would work as long as it meets up with the mirror. Recall that the only reason that we have used the two we have is that they can be conveniently and easily drawn. The diagram below shows two incident rays and their corresponding reflected rays. For the case of the object located at the focal point (F), the light rays neither converge nor diverge after reflecting off the mirror. As shown in the diagram above, the reflected rays are traveling parallel to each other. Subsequently, the light rays will not converge on the object's side of the mirror to form a real image; nor can they be extended backwards on the opposite side of the mirror to intersect to form a virtual image. So how should the results of the ray diagram be interpreted? The answer: there is no image!! Surprisingly, when the object is located at the focal point, there is no location in space at which an observer can sight from which all the reflected rays appear to be diverging. An image is not formed when the object is located at the focal point of a concave mirror.
Check Your Understanding The diagram below shows two light rays emanating from the top of the object and incident towards the mirror. Describe how the reflected rays for these light rays can be drawn without actually using a protractor and the law of reflection.   See Answer   These two incident rays will pass through the image point for the top of the object. In fact, any light rays emanating from the top of the object will pass through the image point. Thus, merely construct a ray diagram to determine the image location; use the two rules of reflection. Then draw the reflected rays for the two given incident rays through the same image point.  
Lens
What can be expressed as the number of cycles of a vibration occurring per unit of time?
Curved Mirrors - Physics | Socratic. (Answer 1) There are more differences than similarities. Explanation, differences: 1. A convex mirror is formed by the surface of a sphere facing away from the centre, whereas a concave mirror is formed by the surface facing towards the centre. 2. Convex mirrors ("vex") diverge a parallel beam of light, but concave mirrors ("cave") converge a parallel beam of light. 3. The above point implies that, according to convention, vex has a positive focal length and cave a negative focal length. 4. Vex can only form virtual images, whereas cave can form both virtual and real images. 5. All these differences mean they have different applications. Similarities: 1. Both are part of a sphere. 2. The mirror formula can be applied to both: 1/v + 1/u = 1/f, where f is the focal length (sign conventions have to be used while using these equations). 3. Image formation in both is based on applying the laws of reflection at small scale, i.e., by taking the line joining a point on the mirror to the centre as the normal. (Sudhish P., Oct 12 2015) (Answer 2) In optics, a focal point is a point at which rays or waves meet after reflection or refraction, or the point from which diverging rays or waves appear to proceed. Converging (convex) and diverging (concave) lenses all have two focal points, one on each side of the lens. Curved mirrors generally have just one focal point: in front of the mirror in the case of concave mirrors, and behind the mirror in the case of convex mirrors. When parallel rays enter a convex lens parallel to its principal axis, they will be gathered at its focal point. Conversely, light originating from a point source located at the focal point of a convex lens will exit the lens in parallel rays. Parallel rays of light that strike a concave (parabolic) mirror will reflect off so as to be gathered at the focal point of the mirror. Conversely, light originating from a point source located at the focal point of the concave mirror will reflect off the mirror in parallel rays. That is how searchlights are set up (also the bat signal). Note that the focal point is in general NOT the image location when an image is formed by a curved mirror or lens. (Scholars J., May 21 2014) (Answer 3; source: http://www.cbakken.net/obookshelf/cvfocal.html) A convex lens has two focal points, one on each side. They are equal distances from the lens. The lens does not have to have the same curvature on both sides for this to be true, and it doesn't depend on the direction the light takes entering the lens. It is the combined curvature that determines the focal point. (Ashley, Jul 22 2014) (Answer 4) Examples of a curved mirror would be the rear view mirrors on the side of a car, the inside of a telescope, mirrors in shopping malls used to see around corners, or fun-house mirrors that give your body different shapes. [The original answer included images of a curved mirror used to see around corners and of a mirror used in grocery stores for workers to see customers.] These mirrors have many uses, such as being able to see around corners, using multiple mirrors together to form an image in a telescope, and making rear view mirrors on cars more useful. (Spencer W., Aug 27 2014)
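The mirror formula quoted in the first answer, 1/v + 1/u = 1/f, reproduces the ray-diagram behaviour numerically. The sketch below uses the "real-is-positive" sign convention (an assumption; other conventions flip the signs of f and v) and an assumed focal length of 10 cm.

```python
# Minimal sketch of the mirror formula 1/v + 1/u = 1/f for a concave mirror,
# using the real-is-positive sign convention (an assumption).

def image_distance(u_cm, f_cm):
    """Return image distance v in cm, or None when the object sits at F."""
    if abs(1.0 / f_cm - 1.0 / u_cm) < 1e-12:
        return None                      # reflected rays are parallel: no image
    return 1.0 / (1.0 / f_cm - 1.0 / u_cm)

f = 10.0  # assumed focal length of a concave mirror, cm
for u in (30.0, 20.0, 15.0, 10.0, 5.0):
    v = image_distance(u, f)
    if v is None:
        print(f"object at {u:4.1f} cm: no image (object at the focal point)")
    elif v > 0:
        print(f"object at {u:4.1f} cm: real, inverted image at {v:5.1f} cm, |m| = {v / u:.2f}")
    else:
        print(f"object at {u:4.1f} cm: virtual, upright image {abs(v):5.1f} cm behind the mirror, |m| = {abs(v) / u:.2f}")
```

Positive v gives the real, inverted images found for objects beyond F, the object-at-F case gives no image because the reflected rays are parallel, and negative v corresponds to the virtual, upright image formed when the object sits inside F.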
i don't know
What is the product of the mass of a body and its linear velocity?
What is the difference between force (F=MA) and momentum (mass * velocity)? - Quora Quora Written May 2, 2015 Formula wise, you just said it. By definition, momentum refers to the quantity of motion that an object has. And the rate of change of that momentum is defined as the force. Written May 6, 2015 Force and momentum are two concepts that are used in mechanics to describe statics or dynamics of bodies. Both force and momentum are vectors.  Force is an external cause, while momentum is an internal property of matter. A force is required to change the momentum of any object. The force on an object can be defined as the change of momentum per unit time.  The definition of force is “any influence that causes or attempts to cause a free body to undergo a change in the acceleration or the shape of the body.” There are two main types of forces  - contact forces and field forces. Contact forces are forces that are used in everyday incidents such as pushing or pulling an object. Field forces include gravitational force, magnetic force, and electric force. Forces such as static friction, surface tension, and reactive forces are all responsible for keeping the objects in static conditions. Forces such as gravitational force, electrical force, and magnetic force are all responsible for keeping the world and everything in the universe together. Momentum is a measurement of the inertia of an object. It is divided into two main types. One is the linear momentum, and the other is the angular momentum. Linear momentum is defined as the product of the mass and velocity of the object. Angular momentum is defined as the product of moment of inertia and angular velocity of the object. Both these are measurements of the current inertia of the system. Angular momentum is related to the rotation or revolution of matter. It is, in effect, a measure of the quantity of rotation of a system of matter, taking into account its mass, rotations, motions and shape.  Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. This fact, known as the law of conservation of momentum, is implied by Newton's Laws of Motion. A change of momentum always requires a net force or torque acting upon the object. Momentum is a relativistic variant. Written Feb 17, 2016 Momentum measures the 'motion content' of an object, and is based on the product of an object's mass and velocity. Momentum doubles, for example, when velocity doubles. Similarly, if two objects are moving with the same velocity, one with twice the mass of the other also has twice the momentum. Force, on the other hand, is the push or pull that is applied to an object to CHANGE its momentum. Newton's second law of motion defines force as the product of mass times ACCELERATION (vs. velocity). Since acceleration is the change in velocity divided by time, you can connect the two concepts with the following relationship: force = mass x (velocity / time) = (mass x velocity) / time = momentum / time Multiplying both sides of this equation by time: force x time = momentum
Momentum
Which quantity has direction as well as magnitude?
How is force related to momentum? How is force related to momentum? Asked by: Melissa Thomas Answer Momentum measures the 'motion content' of an object, and is based on the product of an object's mass and velocity. Momentum doubles, for example, when velocity doubles. Similarly, if two objects are moving with the same velocity, one with twice the mass of the other also has twice the momentum. Force, on the other hand, is the push or pull that is applied to an object to CHANGE its momentum. Newton's second law of motion defines force as the product of mass times ACCELERATION (vs. velocity). Since acceleration is the change in velocity divided by time, you can connect the two concepts with the following relationship: force = mass x (velocity / time) = (mass x velocity) / time = momentum / time Multiplying both sides of this equation by time: force x time = momentum To answer your original question, then, the difference between force and momentum is time. Knowing the amount of force and the length of time that force is applied to an object will tell you the resulting change in its momentum. Answered by: Paul Walorski, B.A., Part-time Physics Instructor Answer They are related by the fact that force is the rate at which momentum changes with respect to time (F = dp/dt). Note that if p = mv and m is constant, then F = dp/dt = m*dv/dt = ma. On the other hand, you can also say that the change in momentum is equal to the force multiplied by the time in which it was applied (or the integral of force with respect to time, if the force is not constant over the time period). Interestingly enough, this, along with Newton's Third law, gives us conservation of momentum. Newton's Third law says that for a force exerted by object 1 on object 2, object 2 exerts a force on object 1 that is equal in magnitude and opposite in direction to the force object 1 exerts. Or, more succinctly, F[1->2] = -F[2->1]. Now the total change in momentum for any interaction is the integral of F[1->2] over time plus the integral of F[2->1] over time, which equals the integral of F[1->2] minus the integral of F[1->2], which equals zero - always! A similar argument for conservation of energy can be made using the fact that energy is the integral of force with respect to position. Answered by: Gregory Ogin, Physics Undergraduate Student, UST, St. Paul, MN Answer Newton's 2nd Law tells us that force = mass x acceleration ( F = ma ). Since acceleration is just how velocity changes over time, we can write this as F = m * v/t Multiply both sides by time to arrive at F t = m v Since mv is momentum, we can see that the momentum conferred to an object by a force equals the force times the time the force is applied. Thus if a 15 Newton force to the right is applied to an initially stationary object for 3 seconds, it will have a momentum of 45 kg m/s to the right. Most students who ask this question are usually trying to figure out the reverse situation, however. If an object hits me with a certain amount of momentum, how much force does it hit me with? Note that due to Newton's 3rd Law, this can be calculated the same way. If a thrown egg hits your hand with a momentum of 5 kg m/s, the force it applies to your hand depends on the time it takes for your hand to absorb the momentum. If you hold your hand very stiffly (and try to make the egg stop in a very short period of time) the ball exerts a high force on your hand, e.g. 100 N for 1/20th of a second. 
However as anyone who has ever played in an egg toss knows, if you let your hand 'give' and extend the amount of time it takes to absorb the momentum, the egg exerts a smaller force on your hand, e.g. 10 N for 1/2 a second. Answered by: Rob Landolfi, Science Teacher, Washington, DC
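The egg-toss figures in the answer above follow directly from F = Δp / Δt (average force equals the change in momentum divided by the stopping time). A minimal sketch using the numbers quoted in the text, 5 kg·m/s of momentum stopped in 1/20 s versus 1/2 s:

```python
# Minimal sketch: average force = momentum change / stopping time,
# using the egg-toss numbers quoted in the answer above.

def average_force_newtons(momentum_change_kg_m_s, stop_time_s):
    return momentum_change_kg_m_s / stop_time_s

delta_p = 5.0   # kg*m/s, momentum of the thrown egg (from the text)
for label, stop_time in (("stiff hand", 1 / 20), ("hand that 'gives'", 1 / 2)):
    force = average_force_newtons(delta_p, stop_time)
    print(f"{label:17s}: stops the egg in {stop_time:.2f} s "
          f"-> average force {force:.0f} N")
```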
i don't know
What is the SI unit of magnetic flux density, named after a Croatian electrical engineer?
What is Magnetic Flux Density? Magnetic flux density is the amount of magnetic flux per unit area of a section that is perpendicular to the direction of flux. It is also sometimes known as "magnetic induction" or simply "magnetic field". It can be thought of as the density of the magnetic field lines - the closer they are together, the higher the magnetic flux density. Mathematically it is represented as B = Φ/A, where B is magnetic flux density in teslas (T), Φ is magnetic flux in webers (Wb), and A is area in square meters (m²). The SI unit for magnetic flux density is the tesla, which is equivalent to webers per square meter. The unit was named in 1960 after the Serbian-American electrical engineer Nikola Tesla.
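The defining relation above, B = Φ/A, is simple enough to check directly. In the minimal sketch below the flux and area are assumed example values; the tesla-to-gauss factor (1 T = 10⁴ gauss) is the standard conversion.

```python
# Minimal sketch: flux density B = phi / A (tesla = weber per square meter).
# The flux and area are assumed example values.

def flux_density_tesla(flux_weber, area_m2):
    return flux_weber / area_m2

phi = 2.0e-3    # Wb, assumed flux through the cross-section
area = 4.0e-4   # m^2, assumed cross-section (2 cm x 2 cm)
b = flux_density_tesla(phi, area)
print(f"B = {b:.1f} T = {b * 1e4:.0f} gauss")
```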
Tesla
What is studied in the science of cryogenics?
Tesla | Article about tesla by The Free Dictionary. tesla (tĕs`lə), unit of magnetic flux density: see under weber. weber [for W. E. Weber], abbr. Wb, unit of magnetic flux in the mks system of weights and measures; 1 Wb is equal to 1 volt-second. The weber per square meter is called the tesla [for Nikola Tesla]. tesla (tess -lă) Symbol: T. The SI unit of magnetic flux density, defined as one weber of magnetic flux per meter squared. One tesla is equal to 10⁴ gauss. Tesla: the unit of magnetic induction, or magnetic flux density, in the International System of Units. It is equal to the magnetic induction at which the magnetic flux through a cross section with an area of 1 m² is equal to 1 weber. The tesla was named after N. Tesla. Its international designation is T. The tesla equals 10⁴ gauss.
i don't know
What is the favorite food of the giant panda?
Bamboo and Panda Bears: Bamboo, Favorite Food of Pandas. Bamboo is the favorite food of Giant Panda bears. It is grown all over the Panda's habitat, but it also can be, and is, grown in many other parts of the world. In the U.S., many varieties have been imported and flourish in home gardens. They are popular in home gardens where a Japanese or Asian setting is desired; for others, they just like the challenge of growing something different. Bamboo Trivia: An adult Giant Panda eats 45 pounds of bamboo per day. Bamboo can be grown in most climates. It also requires little care and attention, thriving and spreading profusely through its root system. It is strongly recommended that gardeners create a barrier two to three feet deep around the planting area to keep the roots from spreading beyond where the plants are desired. Not only is bamboo grown as an ornamental plant, but the shoots are edible. If you eat a variety of Chinese food, chances are you have eaten bamboo. Like rice, it has little flavor of its own and takes on the flavor of the food it is cooked with. While its nutritional value is low, it is a good source of fiber. Low nutritional value is one reason Giant Pandas need to eat so much of it. Bamboo shoots command a fairly high market price, so bamboo can make a good cash crop. Bamboo makes a great gift: it is a sign of luck and a sign of love. Put it all together on Valentine's Day and you are "lucky in love". Did You Know? Bamboo leaves are poisonous, so think twice about planting bamboo if you have young children.
Bamboo shoot
What kind of an animal is a marmoset?
Bamboo and Panda Bears: Bamboo, Favorite Food of Pandas. Bamboo is the favorite food of Giant Panda bears. It is grown all over the Panda's habitat, but it also can be, and is, grown in many other parts of the world. In the U.S., many varieties have been imported and flourish in home gardens. They are popular in home gardens where a Japanese or Asian setting is desired; for others, they just like the challenge of growing something different. Bamboo Trivia: An adult Giant Panda eats 45 pounds of bamboo per day. Bamboo can be grown in most climates. It also requires little care and attention, thriving and spreading profusely through its root system. It is strongly recommended that gardeners create a barrier two to three feet deep around the planting area to keep the roots from spreading beyond where the plants are desired. Not only is bamboo grown as an ornamental plant, but the shoots are edible. If you eat a variety of Chinese food, chances are you have eaten bamboo. Like rice, it has little flavor of its own and takes on the flavor of the food it is cooked with. While its nutritional value is low, it is a good source of fiber. Low nutritional value is one reason Giant Pandas need to eat so much of it. Bamboo shoots command a fairly high market price, so bamboo can make a good cash crop. Bamboo makes a great gift: it is a sign of luck and a sign of love. Put it all together on Valentine's Day and you are "lucky in love". Did You Know? Bamboo leaves are poisonous, so think twice about planting bamboo if you have young children.
i don't know
Which plant has flowers but no proper leaves?
Learn How To Care for House Plants | Teleflora. AFRICAN VIOLET plant care: A healthy African violet will bloom for nine months and then rest for three. Despite their delicate appearance, they are not difficult to care for. Keep their soil moist to dry, and allow it to dry out between waterings to encourage blooming. Because water can damage their leaves, always water them from the bottom by placing the container in a tray of water, and allow the plant to absorb the water for about 30 minutes. Place your African violet in moderate to bright, indirect light, and avoid exposing it to sudden temperature changes. Pinch off wilted blossoms and leaves to encourage blooming, and fertilize monthly or when the plant is actively growing new leaves and buds. AGLAONEMA "Chinese Evergreen" plant care: Aglaonema, also known as Chinese evergreen, are very tolerant plants that do well in a range of environments. They prefer medium to low light in a warm room with slightly higher humidity, but they'll adapt to a spot that's slightly drier and brighter (they make nice plants for the bedroom or bathroom). Allow their soil to dry out a bit between waterings (though avoid letting it become bone dry), and gently clean off their leaves on a regular basis. AMARYLLIS plant care: The amaryllis is native to warmer climates. The showy funnel-shaped blossoms stand atop a single stalk; occasionally the flowers' weight will require some support for the stem. A simple bamboo stake and raffia tie can support the stem and be a decorative addition to the plant. Amaryllis are frequently given as a gift in bulb form. Place your amaryllis in a bright, warm room at first, but when buds appear and begin to color, move it to a cooler spot to prolong blooming time. Water it moderately, keeping the soil moist but not soggy, and avoid letting it sit in water. Once it stops flowering, continue to give your amaryllis four hours of full sunlight to allow the leaves to collect solar energy to nourish the next year's blooms. Cut off the flowers once they fade, and cut down the stems to their base when they wither. Be sure to water and care for it as long as it has leaves, then let the leaves wilt naturally (but don't remove them). Keep the dormant bulb in its pot in a cool, dry place, and then replace the top inch or two of soil and start watering it when it begins to sprout again. ARECA PALM plant care: Areca palms are generally hardy plants and prefer medium to bright light. Keep their soil moist but not soggy. If you allow the soil to become too dry, areca palms wilt dramatically, but it's easy to revive them with just a little water (though some of their fronds may turn yellow). Trim back palm fronds that become damaged or turn brown. AZALEA plant care: Azaleas prefer cool, well-lit spots (out of direct sunlight) with temperatures between 60-65 degrees Fahrenheit. Check the soil frequently, and keep it moist but not soggy; never allow it to dry out completely. Allow new growth to develop, and regularly remove any dead flowers.
When it's finished flowering, you can replant your azalea in a larger container or move it outdoors, as long as there's no risk of frost. Some cultivated varieties of azaleas are designed for inside use only. Others are "hardy" varieties that can be planted in the garden in warmer climates. Be sure to ask your florist what type of azaleas they carry. BONSAI plant care Display your bonsai in a spot that gets a good amount of bright, indirect light. Keep its soil moist to dry, watering it every 2-3 days from the bottom by submerging its planter in water (just to the top of the soil) to allow its roots to absorb water for about 15 minutes. Pinch off or prune new growth (avoiding any flowering buds) to help it maintain its shape, and "root prune" your bonsai once a year in late winter by gently taking it out of its planter and trimming about a third of its roots. Fertilize it monthly when it is actively growing new buds or leaves. BOSTON FERN Many cultivars of Boston fern are available some are compact forms, others are large spreading plants with graceful fronds up to two feet in length. Ferns like bright diffused light and moist soil. Prune dead fronds from the plant immediately and keep humidity near the plant high for best results. BROMELIAD care Native to Central and South America, bromeliads are a large family of plants – all with a similar rosette of stiff leaves and some kind of bright central flower spike or colored leaf area. They're tough, easy-going plants, preferring bright, indirect light or direct sun. Keep their soil moist to dry, and pour the water in the center of the plant where the leaves join together, allowing it to drain into the soil. Avoid letting the plant sit in water. If you live in a hard water area, use rainwater or distilled water whenever possible, as bromeliads are very sensitive to salts, which may cause their leaves to turn brown at the tips. CACTI & SUCCULENT plants Hardy cacti and other succulent plants are accustomed to desert conditions and prefer bright, indirect light or direct sun. Water them thoroughly and evenly, allowing them to dry out completely in between waterings. If the soil becomes too hard and causes water to run off, place the pot in water just to cover the soil, and allow it to soak for about 30 minutes. CALLA plant Callas thrive in slightly cool, sunny spots, especially after their blooms open. Check their soil frequently and keep it moist but not soggy. These bulb-like plants grow from rhizomes, and when they're finished blooming, you can plant them outdoors in mild climates. They need a winter rest period before reblooming, so allow them to dry out over the winter. CHRISTMAS CACTUS The Christmas cactus, with flowers ranging in color from yellow, salmon, pink, fuchsia and white (or combinations of those colors), isn't just for the holidays – it can be grown indoors throughout the year. While it adapts to low light, it will produce more abundant blooms if you place it in a sunny spot. In the summer, you can move it outdoors, but keep it in a shady or semi-shady location, since too much direct sunlight can burn the leaves. When it's time to bring it back indoors in the fall, do so gradually to allow it to adjust. Since it relies on shortened daylight in the fall to induce budding, help it along by placing it in a room that receives no additional evening light. Once buds begin to appear again, bring it back into the living room or kitchen. 
Despite its name, the Christmas cactus isn't a true cactus, and it's not quite as drought tolerant as you might think, so water it thoroughly when the top half of the soil feels dry to the touch. During the summer, keep it continually moist, and when fall arrives, water it only well enough to prevent wilting. During the month of October, give it no water at all, and then cautiously resume watering in November, being careful not to let the stems get full from over watering. CHRYSANTHEMUM plants Chrysanthemums like bright light, place them near an open window to encourage their buds to open (but avoid allowing them to be exposed to direct sunlight once in bloom, as that can burn their flowers). While they're in-bloom, water them every two days or so – even at the risk of over watering, since wilting will shorten their life. When not in bloom, keep their soil moderately moist, watering thoroughly only when the soil surface feels dry to the touch. CYCLAMEN plant Characterized by heart-shaped leaves and blossoms that fly over the leaves like miniature colorful birds, cyclamen plants are sensitive to over watering and under watering. Keeping the soil moist (not wet) to the touch is the trick to having them last long in a home or office setting. Allowing the plant to dry out will prevent unopened buds from opening and maturing. Spent blossoms should be removed immediately to make space for new buds to open and unfurl. Cyclamens prefer cooler temperatures 55 to 65 degrees F and diffused bright light. DAFFODIL "Narcissus" plant As trumpets of spring, pots of daffodils like moist soil and cool temperatures with high light sources. Placing the plants in low light once they are blooming can cause "stem topple" where the stems that emerge from the bulbs become askew instead of being parallel in the container. Using a series of bamboo stakes and some twine or raffia - the stems and foliage can be corralled within the dimensions of the pot. Depending on their stage of openness, daffodils can last from 5 to 12 days. DIEFFENBACHIA plant Dieffenbachias are popular plants because they thrive so well indoors and handle a wide range of light conditions, though they do best when you place them in bright, indirect light. In the winter, make sure they receive more light. Water your dieffenbachia thoroughly, allowing the top 1-2 inches of soil to dry out in between waterings. Allowing the plant to become bone dry will cause it to wilt. EASTER LILY plant Easter lilies prefer moist soil and diffused light. As the blossoms open, you may want to remove the yellow anthers with a tissue to prevent the white blossom from discoloring and the pollen from damaging clothing or home surfaces. As flowers pass their prime, they can be removed to keep the plant looking healthy and to make room for new buds to open. FICUS plant While your ficus plant is adjusting to its new home, it may drop a noticeable amount of leaves. This is normal, and with proper care, it will begin to thrive again in no time. Just pick up the fallen leaves, remove the yellow ones still on the plant, and cut off dead and dry twigs. This will help the light penetrate to the inside foliage and promote new growth. Be careful not to over water your ficus. Feel the soil with your finger tip, and if it feels dry to the touch one inch below the surface, it's time to water it – but if the soil feels moist, hold off for a day or two. Keep in mind that your ficus will need less water during the winter. When your ficus is new, mist it daily as well. 
To provide proper humidity and prevent the roots from standing in water, place the planter on a plant tray or saucer filled with gravel. Display your ficus in a bright spot with indirect light, away from drafts and large windows that change temperature throughout the day. Use plant fertilizer monthly throughout the growing season, but not during the winter months. GARDENIA plant Kept indoors, gardenia plants like well-ventilated spots that get at least five hours of sunlight a day, but if you move your plant outdoors in the summer, be sure to keep it in a shady area. To keep its soil moist but not soggy, soak it thoroughly until you see water running out of the drainage holes, but don't allow the container to stand in water. HYACINTH plant Hyacinths thrive in bright locations, and do best when their soil is kept moist but not soggy. Water yours thoroughly and then place it in a sink (or outside if the weather is mild) to allow the water to drain completely. The stems of these bulb flowers can be supported with decorative bamboo stakes and raffia ties to prevent the weight of the flower from causing the stems to topple in the pots. When your hyacinth is finished blooming, you can replant the "forced" bulbs in your garden in the fall. They will take a few years to fully recycle and bloom abundantly. HYDRANGEA plant care Place your hydrangea in a sunny, bright spot that receives indirect light, and keep its soil moist by watering it thoroughly and allowing excess water to drain. In the fall, allow it to rest and lose its leaves by placing it in a cool, dark location (a basement or cellar) without water. In January, bring it out again to a spot with plenty of light, and it should bloom in time for spring. IVY Although ivy can survive in a range of temperatures, it's more vulnerable in the winter with dry air from heating. To make sure it gets enough moisture, set the planter on a tray or saucer filled with pebbles and water. Display your ivy in a bright spot with indirect sunlight. In the summer, you can move it outdoors to a protected area, but make sure it's out of direct sunlight, which can burn its leaves. KALANCHOE plant Known for their bright small flowers in abundant clusters atop waxy leaves, kalanchoe plants are among the longest lasting blooming plants you can have in your home or office. Keep the soil moist, but not overly saturated. Do not allow the pot to sit in a pool of water. Pinch off blooms as they pass their prime. These plants could last 3 to 4 weeks depending on the room's temperature. Sensitive to cold temperatures, storage below 40 degrees F could cause foliage to become soft and damaged. ROSE bush plant Kept indoors, rose plants will do well in bright, indirect light. Keep their soil moist, allowing it to dry out a bit in between waterings. Remove any leaves that have yellowed while indoors, and pinch off spent blossoms to encourage new blooms. ORCHID plant Despite their elegant, graceful appearance, orchids aren't difficult to care for, and by following a few simple guidelines, many varieties will bloom for you again next year. Keep your orchid in a well-ventilated spot with partial shade, away from radiators, air conditioning, and strong drafts. To help maintain the right level of humidity, set the planter in a tray of pebbles and water so that the pot sits out of the water. This prevents the roots from rotting, and allows the moisture to circulate. 
Orchids gain their water from the relative humidity in the atmosphere; they do not absorb water in the traditional way from the roots and soil. For stability, orchids are often potted with the roots in a growing medium that should not be overly wet. Orchids require a period of dormancy during the winter in order to bloom again in the spring, so allow yours to rest in a sunny spot, and don't water it at all during this time. When its blooms are gone, cut the spike an inch above the foliage, leaving the old canes in place. ORIENTAL LILY plant Display your Oriental Lily in a well-ventilated spot with bright, indirect sunlight, taking care to keep it away from air conditioners, heaters, and strong drafts. While in bloom, water it whenever the soil feels dry to the touch, and feed it with a water-soluble fertilizer. When it's done blooming, you can replant it outdoors in mild climates. Since pollen can stain clothing and furniture, carefully remove the anthers (the orange-coated tips at the end of the stamens) with a tissue. PAPER WHITES Your fragrant, pre-sprouted paperwhite narcissus will bloom within about 2-3 weeks if kept in bright, diffused light. Water them thoroughly when the soil is dry to the touch, but don't allow the plant to stand in water. Cool-weather plants at heart, your paperwhites will bloom longer if kept away from heat. POINSETTIA plant Even though we associate poinsettias with the mid-winter holidays, they're actually tropical plants and need to be kept away from drafts and cold. Overly chilly temperatures can cause their colorful leaves (called bracts) to drop. Keep the soil of your poinsettia moist and allow it to dry out only slightly in between waterings. Encourage new blooms by pinching off spent blossoms and adding plant fertilizer when it's actively growing new buds or leaves. Poinsettias can also be cut from the plant and used as cut flowers. When you cut a stem, a milky-white sap flows from the cut end. Place the stem in water immediately to allow it to hydrate before mixing it with other flowers. PHILODENDRON plant Native to the jungles of tropical America, your philodendron prefers medium, indirect light and will do best in a bright spot with indirect sunlight. (If its new leaves develop smaller and farther apart, it's a sign that it's receiving too little light.) Many types exist, including large split-leaf varieties and the distinctive ruffled-edge philodendron xanadu. Keep its soil evenly moist, but allow it to dry out in between waterings. In the winter months, when growth slows, keep it slightly drier. Over watering will cause the leaves to turn yellow, while under watering will cause them to turn brown and fall off. Philodendrons tolerate the natural levels of humidity found in most homes, but because of their tropical origin, they respond particularly well to high humidity, so mist them regularly to promote lush growth and shiny foliage. Dust their leaves with a damp cloth and feed them houseplant fertilizer in the spring and midsummer. SCHEFFLERA ARBORICOLA Your new schefflera plant may thin out a bit and lose some leaves as it adjusts to its new home. This is normal, and with proper care, it will begin to thrive again in no time. Although it adapts to a wide variety of light levels, the schefflera arboricola prefers medium to higher light, which keeps it full and more compact. If your plant does stretch out, don't be afraid to prune it – it can handle even an occasional radical pruning and come back strong.
Scheffleras don't like to sit in water, but do best when their soil is kept moist. If your schefflera's leaves begin to turn black and drop off, it's a sign that the soil has stayed too moist. On the other hand, if you notice that the tips of the plant begin to wrinkle, you've allowed it to get too dry. Wipe its leaves (both the tops and undersides) with a damp cloth to remove dust and prevent spider mites. SPATHIPHYLLUM With dark green leaves that can be more than a foot long, the spathiphyllum plant produces hood-shaped white blooms and, in some cases, can grow up to 4 feet tall (although many varieties are developed to be compact). These plants can wilt easily, so it's important to keep the soil moist, providing good drainage and emptying excess water from trays or saucers to prevent their roots from rotting. Display them in a spot with bright, indirect light. Low light slows their blooming cycle, and too much direct sunlight may cause burn spots on their leaves. Wipe their leaves with a damp cloth to remove dust.
Cactus
What name is given to animals that eat both flesh and plant material?
Cactus Flowers Cactus flower anatomy has received little attention. Flowers are large and showy in many genera, with dozens of petals and stamens rather than just the 5 or 10 of each as we might expect of a dicot. All cacti, other than a few species in Pereskia, have an inferior ovary, and that is the point of this portion of the website. The ovaries of cacti are not merely inferior; they are spectacularly inferior. Carnegiea gigantea (saguaro); notice the hundreds of stamens. Weingartia hedingiana flowers; notice the scales on the exterior of the flower tube. Trichocereus lamprochlorus. Although axillary buds produce flowers, they always produce a bit of stem tissue first: the very first appendage primordia are leaf primordia, not sepal primordia. In most cacti, the axillary bud apical meristem actually produces 20 or 30 or more leaf primordia before it switches over to producing floral appendage primordia. The noteworthy thing is that these leaf primordia typically develop much more than do other leaf primordia on the same plant, and they mature into large scales that may be a centimeter or more long and have a thin, flat lamina-like region. People typically do not realize that these cactus leaves are true leaves, but they are indeed. They are usually called "scales" or "bracts" but that does not matter: they are leaves. The problem is that the ovary is so deeply inferior that these leaves appear to be part of a flower rather than part of a vegetative stem. Flowers of Soehrensia bruchii: upper red structures are petals and are parts of the true flower; lower green tube-like parts are stems which bear leaves (usually called scales). The stem will develop into the outer part of the fruit (but because the outer part develops from something other than a carpel, it is more properly called a false fruit). Shoot leaves on the flower tube of Rauhocereus riograndensis. Large leaves on the shoot tissue surrounding the flower of Browningia candelaris. The confusing nature of the stem and flower is due to the following modified development. The axillary bud apical meristem produces its series of leaf primordia, then a typical series of sepal, petal, stamen and carpel primordia, each above the other on the sides of the shoot apex as happens in any other group of plants. But as these primordia begin to enlarge and differentiate, peripheral shoot tissue below the leaf primordia elongates more than the inner shoot tissue below the carpel primordia. Whereas in ordinary stems all parts of the stem from pith to epidermis elongate exactly the same amount, here the pith lags behind while the epidermis and outer cortex extend, so the flower and subtending shoot turn themselves inside out. Imagine partially inflating a cylindrical toy balloon, then painting on dots to represent the leaf, sepal, petal, stamen and carpel primordia. Once all dots have been painted on with carpel dots at the closed end of the balloon, imagine placing your finger on the end of the balloon then blowing it up further: the balloon will become a hollow cylinder with the stamen dots on the inside next to your finger, the petal and sepal dots at the end and the leaf dots on the outside.
In a cactus flower, the hollow tube has leaves arranged in ordinary phyllotactic spirals on the outer surface almost until the rim of the tube is reached, then there are sepals and petals at the uppermost portions of the outside of the tube as well as at the rim, then down the inside of the tube are the numerous stamens with the first-formed ones at the top near the rim and the last-formed ones at the bottom of the tube's interior near the carpels. Consequently, when you look at a cactus flower bud, you are not seeing any flower tissue at all except for the sepals and petals at the very tip; all the rest is vegetative stem and leaf, and the true flower parts are located inside. It is easier to comprehend that this really is vegetative tissue by realizing that these leaves typically bear clusters of spines in their axils. Need more proof? In Pereskia sacharosa and in Opuntia fulgida, the axillary buds of the leaves can themselves produce flowers, so there are flowers borne on flowers. Opuntia fulgida can repeat this so many times that as the flowers each mature into fruits, there are fruits hanging from fruits hanging from fruits... the common name is "chain fruit cholla." Because the outer part of a cactus "flower" is really vegetative tissue, once this all ripens into a fruit, the outer parts develop into a false fruit and the inner parts (those derived from the carpels) constitute the true fruit (remember that in apples, most of us only eat the false fruit and we throw away the true fruit, the core). Flower of Neocardenasia herzogiana showing that stamens are attached on the inside of the flower tube. Fruit of Cereus forbesii; the innermost red region is composed of funiculi (the stalks of the seeds and ovules), the white layer around that is the true fruit (developed from the carpels) and the greenish-red outermost region is false fruit (developed from the stem tissue that surrounds the ovary). Fruit of Monvillea kroenleinii; the black seeds are surrounded by white funiculi; the outermost green part of the "fruit" is false fruit (stem tissue), and the thin white layer between the false fruit and the seeds is the true fruit. I have added this part to the website because the presence of leaves on the flower buds brings up an important point: most cacti still have the genes necessary to make large, flat, photosynthetic leaves. However, those genes are strictly repressed in the ordinary succulent stems of cacti, even though many live in areas that are moist enough that it would appear to be advantageous to have leaves at least during the rainy season. It could be (and this is a wholly unexplored hypothesis) that cacti are indeed making ephemeral leaves at exactly the time that they need them: when ovules are maturing into seeds and carpels are developing into fruits. The presence of leaves on the flowers may truly boost photosynthesis exactly when and where an extra boost is needed. On the other hand, in most cacti the extra amount of photosynthetic surface area produced by the flower's leaves seems inconsequential compared to the total green surface area of the shoot, and their photosynthetic output seems trivial compared to all the starch that could be stored in the stem throughout the year as the evergreen stem photosynthesizes month after month.
A confounding aspect is that in some genera, there are no leaves or scales on the flowers, and these have been interpreted as having the same organization as other cactus flowers, but with the vegetative part consisting of just a single internode rather than many. Fruit (only the false fruit portion is visible without dissection) of Acanthorhipsalis monacantha is smooth and mostly without leaves or axillary buds/spines (the white dot near the top is one axillary bud), because it is composed of just one or two internodes rather than many. This fruit (false fruit visible) of Haageocereus turbidus has numerous axillary buds (about 12 are visible), each with just some hairs but no spines, and the subtending leaves are microscopic. This false fruit is composed of numerous internodes. This immature fruit of Harrisia divaricata shows its stem-nature very well -- note the numerous leaves and the swollen leaf bases (in a vegetative stem of a cactus these would be called tubercles). The true fruit (the carpel wall) is located interior to this green false fruit. Another aspect of extremely inferior ovaries in cacti: in Calymmanthium, the vegetative tissues extend to the very rim of the tube as expected of cacti, but the rim itself does not expand much during flower development. The rim remains a microscopically narrow hole that is the same diameter as when it was formed by the floral apical meristem, whereas in all other cacti the rim grows along with the rest of the flower, forming a wide rim that bears many petals (look at the flowers on the top of this page). Because the rim does not widen during development, a Calymmanthium flower develops inside a pouch of vegetative tissue -- when the flower is ready to open, it actually has to tear the vegetative tissues apart before the petals can emerge. Flower of Calymmanthium fertile. Whereas the petals of most cactus flowers begin to be inserted at the top of the flower tube, the rim continues to be stem tissue in Calymmanthium. Because all flower parts begin as microscopically small primordia, when the tip of this tube was formed, it too was microscopically small -- and it stayed that way. It did not grow into a large opening like that on the flowers on the top of this page. Consequently, a Calymmanthium flower grows very well protected inside a vegetative pouch, but when it needs to open, it has to fight its way out -- the flower must actually rip the pouch apart. Calymmanthiums are big, rambling bush-like cacti; we have several at the University of Texas, but so far they have not yet become large enough to bloom here. This specimen was provided by Dr. Jean-Marie Solichon, Director of the Jardin Exotique in Monaco (the Jardin Exotique is a superb collection of cacti and succulents and is well worth a visit). As mentioned, the biological consequences of cacti having leaves and vegetative tissues completely surrounding their flowers are an area that has not been explored at all and would make a good project for someone. I haven't done any research on flowers yet, so for further reading in this area, I recommend my two books again, Botany and A Cactus Odyssey.
i don't know
Which flightless marine birds of the southern hemisphere live in rookeries?
Penguin Facts: Species & Habitat By Alina Bradford, Live Science Contributor | September 22, 2014 04:30pm ET Penguins are torpedo-shaped, flightless birds that live in the southern regions of the Earth. Though many people imagine a small, black-and-white animal when they think of penguins, these birds actually come in a variety of sizes, and some are very colorful. For example, crested penguins sport a crown of yellow feathers. Blushes of orange and yellow mark the necks of emperor and king penguins. What look like bright yellow, bushy eyebrows adorn the heads of some species, such as the Fiordland, royal, Snares and rockhopper penguins. The macaroni penguin's name comes from the crest of yellow feathers on its head, which looks like the 18th-century hats of the same name. A light yellow mask covers the face of the yellow-eyed penguin around the eyes. An Adélie penguin on Penguin Island, which forms part of the South Shetland Islands of Antarctica. According to the Integrated Taxonomic Information System (ITIS), there are 19 species of penguin. (Some experts, however, say the eastern rockhopper is a subspecies of the southern rockhopper.) The smallest penguin species is the little (also called little blue) penguin. These birds grow to 10 to 12 inches (25.4 to 30.48 centimeters) tall and weigh only 2 to 3 lbs. (0.90 to 1.36 kilograms). The largest penguin is the emperor penguin. It grows to 36 to 44 inches (91.44 to 111.76 cm) tall and weighs 60 to 90 lbs (27.21 to 40.82 kg). Where do penguins live? Considered marine birds, penguins live up to 80 percent of their lives in the ocean, according to the New England Aquarium. All penguins live in the Southern Hemisphere, though it is a common myth that they all live in Antarctica. In fact, penguins can be found on every continent in the Southern Hemisphere. It is also a myth that penguins can only live in cold climates. The Galapagos penguin, for example, lives on tropical islands at the equator. What do penguins eat? Penguins are carnivores; they eat only meat. Their diet includes krill (tiny crustaceans), squid and fish. Some species of penguin can make a large dent in an area's food supply. For example, the breeding population of Adélie penguins (about 2,370,000 pairs) can consume up to 1.5 million metric tons (1.5 billion kg) of krill, 115,000 metric tons (115 million kg) of fish and 3,500 metric tons (3.5 million kg) of squid each year, according to Sea World. The yellow-eyed penguin is very tenacious when foraging for food. It will dive as deep as 120 meters (393.70 feet) up to 200 times a day looking for fish, according to the Yellow-Eyed Penguin Trust. Mating & baby penguins A group of penguins is called a colony, according to the U.S. Geological Survey. During breeding season, penguins come ashore to form huge colonies called rookeries, according to Sea World. Most penguins are monogamous. This means that male and female pairs will mate exclusively with each other for the duration of mating season. In many cases, the male and female will continue to mate with each other for most of their lives. For example, research has found that chinstrap penguins re-paired with the same partner 82 percent of the time and gentoo penguins re-paired 90 percent of the time. At around three to eight years old, a penguin is mature enough to mate. Most species breed during the spring and summer.
The male usually starts the mating ritual and will pick out a nice nesting site before he approaches a female. After mating, the female emperor or king penguin will lay a single egg. All other species of penguins lay two eggs. The two parents will take turns holding the eggs between their legs for warmth in a nest. The one exception is the emperor penguin. The female of this species will place the egg on the male's feet to keep warm in his fat folds while she goes out and hunts for several weeks. When penguin chicks are ready to hatch, they use their beaks to break through the shell of their eggs. This process can take up to three days. After the chicks emerge, the parents will take turns feeding their offspring with regurgitated food. Penguin parents can identify their offspring by unique calls that the chick will make. Emperor penguins may migrate to find new nesting grounds.
Penguin
"Which species of decapod has varieties called ""fiddler', 'spider' and 'hermit'?"
How Do Penguins Protect Themselves from Enemies? | Sciencing By Jonathan Marker From the Galapagos Islands to Antarctica, penguins encounter threats from predatory birds, marine mammals and sharks. Whether on land or in the sea, penguins employ a number of defense mechanisms against their enemies. These include the sheer number of penguins in a colony, burst swimming speed, high maneuverability in the water and swift exits from the sea. Strength in Numbers The 17 species of penguins persist almost exclusively in the coastal Southern Hemisphere, with ranges including Antarctica, South America, Australia, New Zealand and South Africa. The exception is the Galapagos penguin, which is the only species that lives north of the equator. Regardless of geographic location, the tendency of most penguins to live in large colonies offers the protection of sheer numbers against enemies in the air, on land and under the waves, if only by providing a warning to other penguins that an enemy is nearby. Huddling together also discourages straggling from the colony and denies predators what would otherwise be an easy meal. Penguin Camouflage The distinct black and white coloring of penguins is a type of camouflage called countershading, which helps penguins hide from predators and hunt prey. The countershading observed in penguins generally consists of black feathers distributed atop their heads, backs and flippers to help them blend in with the darkness of the ocean when viewed from above. Their bellies and undersides are colored white to blend in with the bright surface of the ocean when viewed from below. Protection on Land Depending on geographic location, on land penguins generally face the threat of predation by wild dogs, feral cats, rats and predatory birds like Arctic skuas and raptors. Although penguins walk slowly with a distinctive waddle and cannot fly away from danger, they can slide on their bellies in a maneuver called "tobogganing" to flee their foes. When at the ocean's edge, tobogganing can allow penguins to make a quick escape into the water, where they maneuver best. In addition, penguins' ability to live in cold, inhospitable environments is itself a protection against predators. Perhaps most famously, emperor penguins avoid land predators by breeding inland on the Antarctic continent, an environment too hostile for any land predators. Their physical and behavioral adaptations to the cold are evolved precisely for this reason. Protection at Sea Penguins spend most of their lives in water and are exposed to a wide variety of marine predators, including sharks and large marine mammals like orcas and leopard seals. Porpoising is a technique that penguins can use to jump out of the water at high speed; when close to land, this technique can allow the penguin to escape from a marine predator and return to the safety of the colony. Additionally, although penguins can achieve speeds of up to 22 miles per hour, marine predators like orcas are faster. To compensate, penguins use sharp, zigzagging turns to outmaneuver these larger and less agile animals.
i don't know
Which digestive organ is well-developed in grass-eating herbivores, but is only vestigial in humans?
Anatomy and Physiology of Animals/The Gut and Digestion From Wikibooks, open books for an open world After completing this section, you should know: what is meant by the terms ingestion, digestion, absorption, assimilation, egestion, peristalsis and chyme; the characteristics, advantages and disadvantages of a herbivorous, carnivorous and omnivorous diet; the 4 main functions of the gut; and the parts of the gut in the order in which the food passes down it. The Gut And Digestion Plant cells are made of organic molecules using energy from the sun. This process is called photosynthesis. Animals rely on these ready-made organic molecules to supply them with their food. Some animals (herbivores) eat plants; some (carnivores) eat the herbivores. Herbivores Herbivores eat plant material. While no animal produces the digestive enzymes to break down the large cellulose molecules in the plant cell walls, micro-organisms like bacteria, on the other hand, can break them down. Therefore herbivores employ micro-organisms to do the job for them. There are three types of herbivore: The first, ruminants like cattle, sheep and goats, house these bacteria in a special compartment in the enlarged stomach called the rumen. The second group has an enlarged large intestine and caecum, called a functional caecum, occupied by cellulose-digesting micro-organisms. These non-ruminant herbivores include the horse, rabbit and rat. Humans also have a caecum and belong to the third type, along with orangutans and gorillas. Plants are a relatively poor source of nutrients and are not easily digested, and therefore herbivores have to eat large quantities of food to obtain all they require. Herbivores like cows, horses and rabbits typically spend much of their day feeding. To give the micro-organisms access to the cellulose molecules, the plant cell walls need to be broken down. This is why herbivores have teeth that are adapted to crush and grind. Their guts also tend to be lengthy and the food takes a long time to pass through it. Eating plants does have advantages, however. Plants are immobile so herbivores normally have to spend little energy collecting them. This contrasts with another main group of animals - the carnivores, which often have to chase their prey. Carnivores Carnivorous animals like those in the cat and dog families, polar bears, seals, crocodiles and birds of prey catch and eat other animals. They often have to use large amounts of energy finding, stalking, catching and killing their prey. However, they are rewarded by the fact that meat provides a very concentrated source of nutrients. Carnivores in the wild therefore tend to eat distinct meals, often with long and irregular intervals between them. Time after feeding is spent digesting and absorbing the food. The guts of carnivores are usually shorter and less complex than those of herbivores because meat is easier to digest than plant material. Carnivores usually have teeth that are specialised for dealing with flesh, gristle and bone. They have sleek bodies, strong, sharp claws and keen senses of smell, hearing and sight. They are also often cunning, alert and have an aggressive nature. Omnivores Many animals feed on both animal and vegetable material – they are omnivorous.
Most primates are herbivorous, but a few, such as chimpanzees and humans, belong to this category, as do pigs and rats. Their food is diverse, ranging from plant material to animals they have either killed themselves or scavenged from other carnivores. Omnivores lack the specialised teeth and guts of carnivores and herbivores but are often highly intelligent and adaptable, reflecting their varied diet. Treatment Of Food Whether an animal eats plants or flesh, the carbohydrates, fats and proteins in the food it eats are generally giant molecules (see chapter 1). These need to be split up into smaller ones before they can pass into the blood and enter the cells to be used for energy or to make new cell constituents. For example, starch must be split into glucose, proteins into amino acids, and fats into fatty acids and glycerol. The Gut The digestive tract, alimentary canal or gut is a hollow tube stretching from the mouth to the anus. It is the organ system concerned with the treatment of foods. At the mouth the large food molecules are taken into the gut - this is called ingestion. They must then be broken down into smaller ones by digestive enzymes - digestion, before they can be taken from the gut into the blood stream - absorption. The cells of the body can then use these small molecules - assimilation. The indigestible waste products are eliminated from the body by the act of egestion (see diagram 11.1). Diagram 11.1 - From ingestion to egestion The 4 major functions of the gut are: 1. Transporting the food; 2. Processing the food physically by breaking it up (chewing), mixing, adding fluid etc.; 3. Processing the food chemically by adding digestive enzymes to split large food molecules into smaller ones; 4. Absorbing these small molecules into the blood stream so the body can use them. The regions of a typical mammal's gut (for example a cat or dog) are shown in diagram 11.2. Diagram 11.2 - A typical mammalian gut The food that enters the mouth passes to the oesophagus, then to the stomach, small intestine, caecum, large intestine and rectum, and finally undigested material exits at the anus. The liver and pancreas produce secretions that aid digestion and the gall bladder stores bile. Herbivores have an appendix which they use for the digestion of cellulose. Carnivores also have an appendix, but it no longer has any function because their diet is not based on cellulose. Mouth The mouth takes food into the body. The lips hold the food inside the mouth during chewing and allow the baby animal to suck on its mother's teat. In elephants the lips (and nose) have developed into the trunk, which is the main food-collecting tool. Some mammals, e.g. hamsters, have stretchy cheek pouches that they use to carry food or material to make their nests. The sight or smell of food and its presence in the mouth stimulates the salivary glands to secrete saliva. There are four pairs of these glands in cats and dogs (see diagram 11.3). The fluid they produce moistens and softens the food, making it easier to swallow. It also contains the enzyme salivary amylase, which starts the digestion of starch. The tongue moves food around the mouth and rolls it into a ball for swallowing. Taste buds are located on the tongue, and in dogs and cats it is covered with spiny projections used for grooming and lapping. The cow's tongue is prehensile and wraps around grass to graze it. Swallowing is a complex reflex involving 25 different muscles.
It pushes food into the oesophagus and at the same time a small flap of tissue called the epiglottis closes off the windpipe so food doesn't go 'down the wrong way' and choke the animal (see diagram 11.4). Diagram 11.3 - Salivary glands Teeth Teeth seize, tear and grind food. They are inserted into sockets in the bone and consist of a crown above the gum and a root below. The crown is covered with a layer of enamel, the hardest substance in the body. Below this is the dentine, a softer but tough and shock-resistant material. At the centre of the tooth is a space filled with pulp, which contains blood vessels and nerves. The tooth is cemented into the socket and in most teeth the tip of the root is quite narrow with a small opening for the blood vessels and nerves (see diagram 11.5). In teeth that grow continuously, like the incisors of rodents, the opening remains large and these teeth are called open-rooted teeth. Mammals have 2 distinct sets of teeth. The first, the milk teeth, are replaced by the permanent teeth. Diagram 11.5 - Structure of a tooth Types Of Teeth All the teeth of fish and reptiles are similar, but mammals usually have four different types of teeth. The incisors are the chisel-shaped 'biting off' teeth at the front of the mouth. In rodents and rabbits the incisors never stop growing (open-rooted teeth). They must be worn or ground down continuously by gnawing. They have hard enamel on one surface only so they wear unevenly and maintain their sharp cutting edge. The largest incisors in the animal kingdom are found in elephants, for tusks are actually giant incisors. Sloths have no incisors at all, and sheep have no incisors in the upper jaw (see diagram 11.6). Instead there is a horny pad against which the bottom incisors cut. The canines or 'wolf-teeth' are long, cone-shaped teeth situated just behind the incisors. They are particularly well developed in the dog and cat families, where they are used to hold, stab and kill the prey (see diagram 11.7). The tusks of boars and walruses are large canines, while rodents and herbivores like sheep have no (or reduced) canines. In these animals the space where the canines would normally be is called the diastema. In rodents like the rat and beaver it allows the debris from gnawing to be expelled easily. The cheek teeth or premolars and molars crush and grind the food. They are particularly well developed in herbivores, where they have complex ridges that form broad grinding surfaces (see diagram 11.6). These are created from alternating bands of hard enamel and softer dentine that wear at different rates. In carnivores the premolars and molars slice against each other like scissors and are called carnassial teeth (see diagram 11.7). They are used for shearing flesh and bone. Dental Formula The numbers of the different kinds of teeth can be expressed in a dental formula. This gives the numbers of incisors, canines, premolars and molars in one half of the mouth. The numbers of these four types of teeth in the left or right half of the upper jaw are written above a horizontal line and the four types of teeth in the right or left half of the lower jaw are written below it. Thus the dental formula for the sheep is 0.0.3.3 / 3.1.3.3 (upper half-jaw over lower half-jaw). It indicates that in the upper right (or left) half of the jaw there are no incisors or canines (i.e. there is a diastema), three premolars and three molars. In the lower right (or left) half of the jaw are three incisors, one canine, three premolars and three molars (see diagram 11.6).
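For readers who want to check the arithmetic, a dental formula can be expanded into a total tooth count: the four counts for the upper half-jaw and the four for the lower half-jaw are summed, and the result is doubled because the formula describes only one side of the mouth. The short Python sketch below illustrates this; the function name and the input format (two strings of four dot-separated counts) are illustrative choices, not something specified in the text above.

```python
def total_teeth(upper_half_jaw: str, lower_half_jaw: str) -> int:
    """Total tooth count implied by a dental formula.

    Each argument is a string of four dot-separated counts
    (incisors.canines.premolars.molars) for ONE half of that jaw.
    The sum is doubled because the left and right sides are mirror images.
    """
    one_side = sum(int(n) for n in upper_half_jaw.split(".")) + \
               sum(int(n) for n in lower_half_jaw.split("."))
    return 2 * one_side

# Sheep (no upper incisors or canines, i.e. the diastema): 32 teeth in total.
print(total_teeth("0.0.3.3", "3.1.3.3"))  # 32

# Dog (formula given in the next paragraph): 42 teeth in total.
print(total_teeth("3.1.4.2", "3.1.4.3"))  # 42
```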
Diagram 11.6 - A sheep’s skull The dental formula for a dog is: 3.1.4.2 3.1.4.3 The formula indicates that in the right (or left) half of the upper jaw there are three incisors, one canine, four premolars and two molars. In the right (or left) half of the lower jaw there are three incisors, one canine, four premolars and three molars (see diagram 11.7). Diagram 11.7 - A dog’s skull Oesophagus[ edit ] The oesophagus transports food to the stomach. Food is moved along the oesophagus, as it is along the small and large intestines, by contraction of the smooth muscles in the walls that push the food along rather like toothpaste along a tube. This movement is called peristalsis (see diagram 11.8). Diagram 11.8 - Peristalsis Stomach[ edit ] The stomach stores and mixes the food. Glands in the wall secrete gastric juice that contains enzymes to digest protein and fats as well as hydrochloric acid to make the contents very acidic. The walls of the stomach are very muscular and churn and mix the food with the gastric juice to form a watery mixture called chyme (pronounced kime). Rings of muscle called sphincters at the entrance and exit to the stomach control the movement of food into and out of it (see diagram 11.9). Diagram 11.9 - The stomach Small Intestine[ edit ] Most of the breakdown of the large food molecules and absorption of the smaller molecules take place in the long and narrow small intestine. The total length varies but it is about 6.5 metres in humans, 21 metres in the horse, 40 metres in the ox and over 150 metres in the blue whale. It is divided into 3 sections: the duodenum (after the stomach), jejunum and ileum. The duodenum receives 3 different secretions: 1) Bile from the liver; 2) Pancreatic juice from the pancreas and 3) Intestinal juice from glands in the intestinal wall. These complete the digestion of starch, fats and protein. The products of digestion are absorbed into the blood and lymphatic system through the wall of the intestine, which is lined with tiny finger-like projections called villi that increase the surface area for more efficient absorption (see diagram 11.10). Diagram 11.10 - The wall of the small intestine showing villi The Rumen[ edit ] In ruminant herbivores like cows, sheep and antelopes the stomach is highly modified to act as a “fermentation vat”. It is divided into four parts. The largest part is called the rumen. In the cow it occupies the entire left half of the abdominal cavity and can hold up to 270 litres. The reticulum is much smaller and has a honeycomb of raised folds on its inner surface. In the camel the reticulum is further modified to store water. The next part is called the omasum with a folded inner surface. Camels have no omasum. The final compartment is called the abomasum. This is the ‘true’ stomach where muscular walls churn the food and gastric juice is secreted (see diagram 11.11). Diagram 11.11 - The rumen Ruminants swallow the grass they graze almost without chewing and it passes down the oesophagus to the rumen and reticulum. Here liquid is added and the muscular walls churn the food. These chambers provide the main fermentation vat of the ruminant stomach. Here bacteria and single-celled animals start to act on the cellulose plant cell walls. These organisms break down the cellulose to smaller molecules that are absorbed to provide the cow or sheep with energy. In the process, the gases methane and carbon dioxide are produced. These cause the “burps” you may hear cows and sheep making. 
Not only do the micro-organisms break down the cellulose but they also produce the vitamins E, B and K for use by the animal. Their digested bodies provide the ruminant with the majority of its protein requirements. In the wild grazing is a dangerous activity as it exposes the herbivore to predators. They crop the grass as quickly as possible and then when the animal is in a safer place the food in the rumen can be regurgitated to be chewed at the animal’s leisure. This is ‘chewing the cud’ or rumination. The finely ground food may be returned to the rumen for further work by the microorganisms or, if the particles are small enough, it will pass down a special groove in the wall of the oesophagus straight into the omasum. Here the contents are kneaded and water is absorbed before they pass to the abomasum. The abomasum acts as a “proper” stomach and gastric juice is secreted to digest the protein. Large Intestine[ edit ] The large intestine consists of the caecum, colon and rectum. The chyme from the small intestine that enters the colon consists mainly of water and undigested material such as cellulose (fibre or roughage). In omnivores like the pig and humans the main function of the colon is absorption of water to give solid faeces. Bacteria in this part of the gut produce vitamins B and K. The caecum, which forms a dead-end pouch where the small intestine joins the large intestine, is small in pigs and humans and helps water absorption. However, in rabbits, rodents and horses, the caecum is very large and called the functional caecum. It is here that cellulose is digested by micro-organisms. The appendix, a narrow dead end tube at the end of the caecum, is particularly large in primates but seems to have no digestive function. Functional Caecum[ edit ] The caecum in the rabbit, rat and guinea pig is greatly enlarged to provide a “fermentation vat” for micro-organisms to break down the cellulose plant cell walls. This is called a functional caecum (see diagram 11.12). In the horse both the caecum and the colon are enlarged. As in the rumen, the large cellulose molecules are broken down to smaller molecules that can be absorbed. However, the position of the functional caecum after the main areas of digestion and absorption, means it is potentially less effective than the rumen. This means that the small molecules that are produced there can not be absorbed by the gut but pass out in the faeces. The rabbit and rodents (and foals) solve this problem by eating their own faeces so that they pass through the gut a second time and the products of cellulose digestion can be absorbed in the small intestine. Rabbits produce two kinds of faeces. Softer night-time faeces are eaten directly from the anus and the harder pellets you are probably familiar with, that have passed through the gut twice. Diagram 11.12 - The gut of a rabbit The Gut Of Birds[ edit ] Birds’ guts have important differences from mammals’ guts. Most obviously, birds have a beak instead of teeth. Beaks are much lighter than teeth and are an adaptation for flight. Imagine a bird trying to take off and fly with a whole set of teeth in its head! At the base of the oesophagus birds have a bag-like structure called a crop. In many birds the crop stores food before it enters the stomach, while in pigeons and doves glands in the crop secretes a special fluid called crop-milk which parent birds regurgitate to feed their young. The stomach is also modified and consists of two compartments. 
The first is the true stomach with muscular walls and enzyme secreting glands. The second compartment is the gizzard. In seed eating birds this has very muscular walls and contains pebbles swallowed by the bird to help grind the food. This is the reason why you must always supply a caged bird with grit. In birds of prey like the falcon the walls of the gizzard are much thinner and expand to accommodate large meals (see diagram 11.13). Diagram 11.13 - The stomach and small intestine of a hen Digestion[ edit ] During digestion the large food molecules are broken down into smaller molecules by enzymes. The three most important groups of enzymes secreted into the gut are: Amylases that split carbohydrates like starch and glycogen into monosaccharides like glucose. Proteases that split proteins into amino acids. Lipases that split lipids or fats into fatty acids and glycerol. Glands produce various secretions which mix with the food as it passes along the gut. These secretions include: Saliva secreted into the mouth from several pairs of salivary glands (see diagram 11.3). Saliva consists mainly of water but contains salts, mucous and salivary amylase. The function of saliva is to lubricate food as it is chewed and swallowed and salivary amylase begins the digestion of starch. Gastric juice secreted into the stomach from glands in its walls. Gastric juice contains pepsin that breaks down protein and hydrochloric acid to produce the acidic conditions under which this enzyme works best. In baby animals rennin to digest milk is also produced in the stomach. Bile produced by the liver. It is stored in the gall bladder and secreted into the duodenum via the bile duct (see diagram 11.14). (Note that the horse, deer, parrot and rat have no gall bladder). Bile is not a digestive enzyme. Its function is to break up large globules of fat into smaller ones so the fat splitting enzymes can gain access the fat molecules. Diagram 11.14 - The liver, gall bladder and pancreas Pancreatic juice[ edit ] The pancreas is a gland located near the beginning of the duodenum (see diagram 11.14). In most animals it is large and easily seen but in rodents and rabbits it lies within the membrane linking the loops of the intestine (the mesentery) and is quite difficult to find. Pancreatic juice is produced in the pancreas. It flows into the duodenum and contains amylase for digesting starch, lipase for digesting fats and protease for digesting proteins. Intestinal juice[ edit ] Intestinal juice is produced by glands in the lining of the small intestine. It contains enzymes for digesting disaccharides and proteins as well as mucus and salts to make the contents of the small intestine more alkaline so the enzymes can work. Absorption[ edit ] The small molecules produced by digestion are absorbed into the villi of the wall of the small intestine. The tiny finger-like projections of the villi increase the surface area for absorption. Glucose and amino acids pass directly through the wall into the blood stream by diffusion or active transport. Fatty acids and glycerol enter vessels of the lymphatic system (lacteals) that run up the centre of each villus. The Liver[ edit ] The liver is situated in the abdominal cavity adjacent to the diaphragm (see diagrams 2 and 14). It is the largest single organ of the body and has over 100 known functions. 
Its most important digestive functions are: the production of bile to help the digestion of fats (described above) and the control of blood sugar levels Glucose is absorbed into the capillaries of the villi of the intestine. The blood stream takes it directly to the liver via a blood vessel known as the hepatic portal vessel or vein (see diagram 11.15). The liver converts this glucose into glycogen which it stores. When glucose levels are low the liver can convert the glycogen back into glucose. It releases this back into the blood to keep the level of glucose constant. The hormone insulin, produced by special cells in the pancreas, controls this process. Diagram 11.15 - The control of blood glucose by the liver Other functions of the liver include: 3. making vitamin A, 4. making the proteins that are found in the blood plasma (albumin, globulin and fibrinogen), 5. storing iron, 6. removing toxic substances like alcohol and poisons from the blood and converting them to safer substances, 7. producing heat to help maintain the temperature of the body. Diagram 11.16 - Summary of the main functions of the different regions of the gut Summary[ edit ] The gut breaks down plant and animal materials into nutrients that can be used by animals’ bodies. Plant material is more difficult to break down than animal tissue. The gut of herbivores is therefore longer and more complex than that of carnivores. Herbivores usually have a compartment (the rumen or functional caecum) housing micro-organisms to break down the cellulose wall of plants. Chewing by the teeth begins the food processing. There are 4 main types of teeth: incisors, canines, premolars and molars. In dogs and cats the premolars and molars are adapted to slice against each other and are called carnassial teeth. Saliva is secreted in the mouth. It lubricates the food for swallowing and contains an enzyme to break down starch. Chewed food is swallowed and passes down the oesophagus by waves of contraction of the wall called peristalsis. The food passes to the stomach where it is churned and mixed with acidic gastric juice that begins the digestion of protein. The resulting chyme passes down the small intestine where enzymes that digest fats, proteins and carbohydrates are secreted. Bile produced by the liver is also secreted here. It helps in the breakdown of fats. Villi provide the large surface area necessary for the absorption of the products of digestion. In the colon and caecum water is absorbed and micro organisms produce some vitamin B and K. In rabbits, horses and rodents the caecum is enlarged as a functional caecum and micro-organisms break down cellulose cell walls to simpler carbohydrates. Waste products exit the body via the rectum and anus. The pancreas produces pancreatic juice that contains many of the enzymes secreted into the small intestine. In addition to producing bile the liver regulates blood sugar levels by converting glucose absorbed by the villi into glycogen and storing it. The liver also removes toxic substances from the blood, stores iron, makes vitamin A and produces heat. Worksheet[ edit ] Use the Digestive System Worksheet to help you learn the different parts of the digestive system and their functions. Test Yourself[ edit ] Then work through the Test Yourself below to see if you have understood and remembered what you learned. 1. Name the four different kinds of teeth 2. Give 2 facts about how the teeth of cats and dogs are adapted for a carnivorous diet: 1. 3. What does saliva do to the food? 4. 
What is peristalsis? 5. What happens to the food in the stomach? 6. What is chyme? 7. Where does the chyme go after leaving the stomach? 8. What are villi and what do they do? 9. What happens in the small intestine? 10. Where is the pancreas and what does it do? 11. How does the caecum of rabbits differ from that of cats? 12. How does the liver help control the glucose levels in the blood? 13. Give 2 other functions of the liver:
Appendix
Which are the only birds able to fly backwards?
Vestigiality of the human appendix References Introduction Many biological structures can be considered vestiges given our current evolutionary knowledge of comparative anatomy and phylogenetics. In evolutionary discussions the human vermiform appendix is one of the most commonly cited vestigial structures, and one of the most disputed. Evolutionary vestiges are, technically, any diminished structure that previously had a greater physiological significance in an ancestor than at present. Independently of evolutionary theory, a vestige can also be defined typologically as a reduced and rudimentary structure compared to the same homologous structure in other organisms, as one that lacks the complex functions usually found for that structure in other organisms (see, e.g. Geoffroy 1798 ). Classic examples of vestiges are the wings of the ostrich and the eyes of blind cavefish. These vestigial structures may have functions of some sort. Nevertheless, what matters is that rudimentary ostrich wings are useless as normal flying wings, and that rudimentary cavefish eyes are useless as normal sighted eyes. Vestiges can be functional, and speculative arguments against vestiges based upon their possible functions completely miss the point. For more discussion of the vestigial concept, extensive modern and historical references concerning its definition (especially the allowance for functionality), see the Citing Scadding (1981) and Misunderstanding Vestigiality and 29+ Evidences for Macroevolution: Anatomical vestiges FAQs. The following discussion makes four main points: The human appendix may have bona fide functions, but this is currently controversial, undemonstrated in humans, and irrelevant as to whether the appendix is a true vestige or not. The appendix is a prime example of dysteleology (i.e. suboptimal structural design), a prediction of genetically gradual evolution . The appendix is a rudimentary tip of the caecum and is useless as a normal, cellulose-digesting caecum. Thus, the appendix is vestigial by both the evolutionary and non-evolutionary, typological definitions of vestigiality. The vermiform appendix: background info Figure 1: The human vermiform appendix (image reproduced with modifications from Gray 1918 ) In humans, the vermiform appendix is a small, finger-sized structure, found at the end of our small caecum and located near the beginning of the large intestine ( Fawcett and Raviola 1994 , p. 636; Oxford Companion to the Body 2001 , pp. 42-43; Williams and Myers 1994 ). The adjective "vermiform" literally means "worm-like" and reflects the narrow, elongated shape of this intestinal appendage. The appendix is typically between two and eight inches long, but its length can vary from less than an inch (when present) to over a foot. The appendix is longest in childhood and gradually shrinks throughout adult life. The wall of the appendix is composed of all layers typical of the intestine, but it is thickened and contains a concentration of lymphoid tissue. Similar to the tonsils, the lymphatic tissue in the appendix is typically in a constant state of chronic inflammation, and it is generally difficult to tell the difference between pathological disease and the "normal" condition ( Fawcett and Raviola 1994 , p. 636). The internal diameter of the appendix, when open, has been compared to the size of a matchstick. The small opening to the appendix eventually closes in most people by middle age. A vermiform appendix is not unique to humans. 
It is found in all the hominoid apes, including humans, chimpanzees, gorillas, orangutans, and gibbons, and it exists to varying degrees in several species of New World and Old World monkeys ( Fisher 2000 ; Hill 1974 ; Scott 1980 ). The caecum: a specialized herbivorous organ Our appendix is a developmental derivative and evolutionary vestige of the end of the much larger herbivorous caecum found in our primate ancestors ( Condon and Telford 1991 ; Williams and Myers 1994 , p. 9). The word "caecum" actually means "blind" in Latin, reflecting the fact that the bottom of the caecum is a blind pouch (a dead-end or cul-de-sac). In most vertebrates, the caecum is a large, complex gastrointestinal organ, enriched in mucosal lymphatic tissue ( Berry 1900 ), and specialized for digestion of plants (see Figure 2 ; Kardong 2002 , pp. 510-515). The caecum varies in size among species, but in general the size of the caecum is proportional to the amount of plant matter in a given organism's diet. It is largest in obligate herbivores, animals whose diets consist entirely of plant matter. In many herbivorous mammals the caecum is as large as the rest of the intestines, and it may even be coiled and longer than the length of the entire organism (as in the koala). In herbivorous mammals, the caecum is essential for digestion of cellulose, a common plant molecule. The caecum houses specialized, symbiotic bacteria that secrete cellulase, an enzyme that digests cellulose; without it, cellulose is impossible for mammals to digest. The structure of the caecum is specialized to increase the efficiency of cellulose fermentation. As a "side branch" from the gut it is able to house a large, dense, and permanent colony of specialized bacteria. Being a dead-end sac at the beginning of the large intestine, it allows digesting food to reside in the gut longer and ferment more completely before passing into the large intestine, where the resulting nutrients are absorbed. However, even though humans eat plant matter, the small human caecum does not house significant quantities of cellulase-secreting bacteria, and we can digest no more than a few grams of cellulose per day ( Slavin, Brauer, and Marlett 1980 ). Figure 2: Gastrointestinal tracts of various mammals. For each species, the stomach is shown at top, the small intestine at left, the caecum and associated appendix (if present) in magenta, and the large intestine at bottom right. Scale differs between species. Reproduced with modifications from Kardong 2002 , p. 511. Copyright © 2002 McGraw-Hill. The human appendix is homologous to the end of the mammalian caecum In vertebrate comparative anatomy, it has long been known that the human appendix and the end of the mammalian caecum are structurally homologous ( Berry 1900 ; Fisher 2000 ; Hill 1974 ; Hyman 1979 , p. 412; Kardong 2002 , pp. 513-515; Kluge 1977 ; Neal and Rand 1936 , p. 315; Romer and Parsons 1986 , p. 389; Royster 1927 , p. 27; Smith 1960 , p. 305; Weichert 1967 , p. 189; Wiedersheim 1886 , p. 236; Wolff 1991 , p. 384). Of course, the end of the caecum and the appendix can be homologous and have different functions. Being the termination of the caecum, the human vermiform appendix is also a "blind pouch," and another name for the appendix is in fact the "true caecal apex" ( Berry 1900 ).
Within the gastrointestinal tract of many vertebrates, mammals, and primates in particular, the termination of the caecum and the vermiform appendix share the same relative position ( Figure 2 ), both have a similar structure and form, both are blind sacs enriched with lymphatic tissue ( Berry 1900 ), both have a common developmental origin ( Condon and Telford 1991 ; Williams and Myers 1994 , p. 9), and, as discussed below, in the primates both are connected by an extensive series of intermediates. These observations firmly establish these structures as homologous by standard systematic criteria ( Kitching et al. 1998 , pp. 26-27; Remane 1952 ; Schuh 2000 , pp. 63-64; Rieppel 1988 , p. 202), a conclusion confirmed by cladistic systematic analysis ( Goodman et al. 1998 ; Shoshani 1996 ). A few other mammals appear to have a structure similar to the hominoid vermiform appendix, including the wombat, South American opossum (both marsupials), some rodents, and the rabbit. However, extensive comparative analysis has shown that the caecal appendixes of humans and these other mammals were derived from the caecum independently; these structures are not homologous as appendixes ( Shoshani and McKenna 1998 ). The relationship between these other caecal structures and the hominoid vermiform appendix is similar to the homology of bat and bird wings. The wings of bats and birds are homologous as modified forelimbs, yet they are not homologous as wings. Likewise, the appendixes of rabbits and humans are homologous as modified caeca, yet they are not homologous as appendixes. If the rabbit and human appendix have similar functions (which has never been experimentally demonstrated), they are the result of independent convergence of function and form ( Shoshani and McKenna 1998 ). The many significant morphological, histological, and cellular differences between the rabbit and human appendixes (discussed below) all are consistent with a superficial convergent relationship. Structural intermediates of the appendix and caecum in primate phylogeny The primate family tree provides a rather complete set of intermediates between the states "large caecum/appendix absent" to "small caecum/appendix present". Many primates have both a caecum and an appendix, or a structure intermediate between the two. The anatomical definition of a vermiform appendix is a narrowed, thickened, lymphoid-rich caecal apex ( Fisher 2000 and references therein). As already mentioned, the hominoid apes all bear a vermiform appendix, but many non-anthropoid primates also have structures that fit the above definition to varying degrees. In fact, recent reevaluation of the anatomy of the primate caecum and appendix has highlighted the difficulties in determining exactly where the caecum ends and the appendix begins in different species ( Fisher 2000 ). This complication arises from the continuous, variable, and overlapping nature of caecal and appendicular tissues, both histologically and anatomically. For example, in most primates the caecal apex is rich in lymphoid tissue and is thickened, but whether it is narrowed into a conical or "worm-like" structure is variable ( Fisher 2000 ). From systematic analysis of comparative anatomy, it is known that in primates a large caecum with a small or absent appendix is the ancestral, primitive state ( Goodman et al. 1998 ; Shoshani 1996 ). In general, the length of the caecum, relative to that of the colon, decreases as one traverses the primate phylogenetic tree from monkeys to humans. 
Concurrently, the size of the appendix increases. The appendix is mostly absent in prosimians and New World monkeys, yet they have a large caecum. In Old World monkeys the appendix is more recognizable, and it is well-developed in the anthropoid apes, which lack the large cellulose-fermenting caecum found in their ancestors and other primates ( Fisher 2000 ; Goodman et al. 1998 ; Hill 1974 ; Shoshani 1996 ; Scott 1980 ). Possible function of the appendix "The appendix n, of the colon n m, is a part of the caecum and is capable of contracting and dilating so that excessive wind does not rupture the caecum." Leonardo da Vinci Earliest known drawing of the appendix, by Leonardo da Vinci. Throughout medical history many possible functions for the appendix have been offered, examined, and refuted, including exocrine, endocrine, and neuromuscular functions ( Williams and Myers 1994 , pp. 28-29). Today, a growing consensus of medical specialists holds that the most likely candidate for the function of the human appendix is as a part of the gastrointestinal immune system. Several reasonable arguments exist for suspecting that the appendix may have a function in immunity. Like the rest of the caecum in humans and other primates, the appendix is highly vascular, is lymphoid-rich, and produces immune system cells normally involved with the gut-associated lymphoid tissue (GALT) ( Fisher 2000 ; Nagler-Anderson 2001 ; Neiburger et al. 1976 ; Somekh et al. 2000 ; Spencer et al. 1985 ). Animal models, such as the rabbit and mouse, indicate that the appendix is involved in mammalian mucosal immune function, particularly the B and T lymphocyte immune response ( Craig and Cebra 1975 ). Animal studies provide limited evidence that the appendix may function in proper development of the immune system in young juveniles ( Dasso and Howell 1997 ; Dasso et al. 2000 ; Pospisil and Mage 1998 ). However, contrary to what one is apt to read in anti-evolutionary literature, there is currently no evidence demonstrating that the appendix, as a separate organ, has a specific immune function in humans ( Judge and Lichtenstein 2001 ; Dasso et al. 2000 ; Williams and Myers 1994 , pp. 5, 26-29). To date, all experimental studies of the function of an appendix (other than routine human appendectomies) have been exclusively in rabbits and, to a lesser extent, rodents. Currently it is unclear whether the lymphoid tissue in the human appendix performs any specialized function apart from the much larger amount of lymphatic tissue already distributed throughout the gut. Most importantly with regard to vestigiality, there is no evidence from any mammal suggesting that the hominoid vermiform appendix performs functions above and beyond those of the lymphoid-rich caeca of other primates and mammals that lack distinct appendixes. As mentioned above, important differences exist in nearly all respects between the human and rabbit appendixes ( Dasso et al. 2000 ; Williams and Myers 1994 , p. 57). The rabbit appendix, for instance, is very difficult to identify as separate from the rest of its voluminous caecum (see Figure 2 ). Unlike the human appendix, the rabbit's appendix is extremely large, relative to the colon, and is the seat of extensive cellulose degradation due to a specialized microflora. The large rabbit appendix houses half of its GALT lymphoid tissue, whereas the contribution of the human appendix to GALT is significantly less ( Dasso et al. 2000 ). 
In humans the vast majority of GALT tissue is found in hundreds of Peyer's patches coating the small intestine and in nearly 10,000 similar patches found in the large intestine. Additionally, there are important differences in lymphoid follicular structure, in T-cell distribution, and in immunoglobulin density ( Dasso et al. 2000 ). Furthermore, from systematic analysis we know that the rabbit, rodent, and human appendixes are convergent as outgrowths and constrictions of the caecum ( Shoshani and McKenna 1998 ). It is thus very questionable to conclude from these animal studies that the human appendix has the same function as the other non-primate appendixes. Of course, over a century of medical evidence has firmly shown that the removal of the human appendix after infancy has no obvious ill effects (apart from surgical complications, Williams and Myers 1994 ). Earlier reports of an association between appendectomy and certain types of cancer were artifactual ( Andersen and Isager 1978 ; Gledovic and Radovanovic 1991 ; Mellemkjaer et al. 1998 ). In fact, congenital absence of the appendix also appears to have no discernable effect. From investigative laparoscopies for suspected appendicitis, many people have been found who completely lack an appendix from birth, apparently without any physiological detriment ( Anyanwu 1994 ; Chevre et al. 2000 ; Collins 1955 ; Hei 2003 ; Host et al. 1972 ; Iuchtman 1993 ; Kalyshev et al. 1995 ; Manoil 1957 ; Pester 1965 ; Piquet et al. 1986 ; Ponomarenko and Novikova 1978 ; Rolff et al. 1992 ; Saave 1955 ; Shperber 1983 ; Tilson and Touloukian 1972 ; Williams and Myers 1994 , p. 22). In sum, an enormous amount of medical research has centered on the human appendix, but to date the specific function of the appendix, if any, is still unclear and controversial in human physiology ( Williams and Myers 1994 , pp. 5, 26-29). The appendix is suboptimally designed The human appendix is notorious for the life-threatening complications it can cause. Deadly infection of the appendix at a young age is common, and the lifetime risk of acute appendicitis is 7% ( Addiss et al. 1990 ; Hardin 1999 ; Korner et al. 1997 ; Pieper and Kager 1982 ). The most common age for acute appendicitis is in prepubescent children, between 8 and 13 years of age. Before modern 20th-century surgical techniques were available, a case of acute appendicitis was usually fatal. Even today, appendicitis fatalities are significant ( Blomqvist et al. 2001 ; Luckmann 1989 ). The small entrance to this dead-end pocket makes the appendix difficult to clean out and prone to physical blockage, which ultimately is the cause of appendicitis ( Liu and McFadden 1997 ). This peculiar structural layout is quite beneficial for a larger cellulose-fermenting caecum, but it is unclear why gut lymphoid tissue would need to be housed in a remote, dead-end tube with negligible surface area. In fact, 60% of appendicitis cases are due to lymphoid hyperplasia leading to occlusion of the interior of the appendix, indicating that the appendix is unusually prone to abnormal proliferation of its lymphoid tissue ( Liu and McFadden 1997 ). Such an occurrence would be much less problematic if the interior of the appendix were not so small, confined, and inaccessible from the rest of the gut. In many other primates and mammals, the GALT lymphoid tissue appears to function without difficulty in a much more open, bulbous caecum with ample surface area. 
Furthermore, there is mounting evidence that removing the appendix helps prevent ulcerative colitis, a nasty inflammatory disease of the colon ( Andersson et al. 2001 ; Buergel et al. 2002 ; Judge and Lichtenstein 2001 ; Koutroubakis and Vlachonikolis 2000 ; Koutroubakis et al. 2002 ; Naganuma 2001 ; Rutgeerts 1994 ). This evidence suggests that the appendix is actually maladaptive, and that the lymphoid tissue contained in the appendix is prone to chronic pathological inflammatory states. If the appendix does have an important function that we have yet to find, it is a leading candidate for the worst designed organ in the human body. How nice if the appendix would just degenerate away after it is no longer needed, so it could never get infected and kill us needlessly. Any biological structure that supposedly ensures our livelihood by its functions, yet paradoxically and unnecessarily kills a large fraction of its bearers prematurely, is poorly designed indeed. Why do some medical sources question the vestigiality of the appendix? The reasons for this are multiple, but they largely stem from the simple fact that most physicians are not trained in evolutionary biology. The erroneous "completely nonfunctional" definition of a vestige is primarily found in medical papers, textbooks, and dictionaries (e.g. Williams and Myers 1994 , p. ix). Using this incorrect and nonevolutionary definition, it is logical to conclude that a structure is not vestigial if its function is discovered. For instance, based upon this incorrect definition, Williams and Myers 1994 incorrectly argue that an evolutionary vestige cannot be both a complex and a "regressive" structure (p. 27). Similarly, a modern version of Gray's Anatomy confusingly implies that the appendix cannot be both vestigial and specialized ( Williams and Warwick 1980 ). However, vestiges are very often complex or specialized structures, in fact overly complex for their functions, and prime examples are the wing of the ostrich and the eyes of blind cavefish. A vestige can be a complex structure, in an absolute sense, while simultaneously being rudimentary or degenerate relative to the same homologous structure in other organisms. Perhaps most important is the fact that a vestige can be identified only via comparative analysis. Physicians are experts on human anatomy and physiology, but rarely do discussions in medical publications consider phylogenetic and comparative issues. Medical articles that attempt to consider phylogenetics often provide a gross misconception of evolutionary fundamentals. For instance, the most thorough and in depth source on the physiology of the human appendix, Williams and Myers 1994 , refers to how the appendix changes as "the primate scale is ascended" and to the "evolutionary scale" with humans at its end (pp. 26-27). These are long-refuted orthogenetic concepts which have been contradicted by basic evolutionary theory since Darwin. Scott 1980 similarly argues against vestigiality based upon orthogenetic concepts and a belief in evolutionary "progress." Fisher 2000 (p. 229) and Scott 1980 both incorrectly imply that a vestige cannot be a derived character, a curious assessment since a vestige must be a phylogenetically derived character by definition. How would we know if the appendix were not vestigial? Whether the appendix has a function of some sort or not has no direct bearing on whether it is a bona fide vestige. 
However, at least three possible observations would help negate the conclusion that the human appendix is vestigial, using either the evolutionary or the typological definitions of vestigiality: (1) if the human appendix were actually as large and developed as, say, the caecum of a prosimian or New World monkey; (2) if the human appendix contributed significantly to cellulose fermentation and contained a large amount of cellulose-digesting bacteria; (3) if we could demonstrate via phylogenetic or systematic methods that the apex of the cellulose-fermenting caecum in other primates and the vermiform appendix were not structurally homologous as side branches from the intestine. An additional possible observation would contradict the conclusion of vestigiality by the evolutionary definition: (4) if phylogenetic methods indicated that no predicted ancestors of humans ever had a large, cellulose-fermenting caecum (i.e., that a large, cellulose-digesting caecum is actually a derived primate character, not a primitive one). All four of these potential observations are demonstrably false. Additionally, each is based upon positive scientific evidence: (1) we can measure and quantitate the size of the appendix; (2) we can measure and quantitate the amount of cellulose digestion occurring in the appendix; (3) we can observe and compare the relative positions, underlying structures, forms, and development of the organs in the gastrointestinal tracts of various organisms; and (4) we can determine primitive and derived characters by independent phylogenetic analysis. Therefore, the conclusion of vestigiality is susceptible and open to scientific testing against empirical observation. As such the concept of vestigiality is not an "argument from ignorance." It is clearly scientific in nature, based completely upon positive evidence. Conclusion: The vermiform appendix is vestigial Currently, arguments against the vestigiality of the human vermiform appendix have been based upon misunderstandings of what constitutes a vestige and of how vestiges are identified. From an evolutionary perspective, the human appendix is a derivative of the end of the phylogenetically primitive herbivorous caecum found in our primate ancestors ( Goodman et al. 1998 ; Shoshani 1996 ). The human appendix has lost a major and previously essential function, namely cellulose digestion. Though during primate evolution it has decreased in size to a mere rudiment, the appendix retains a structure that was originally specifically adapted for housing bacteria and extending the time course of digestion. For these reasons the human vermiform appendix is vestigial, regardless of whether or not the human appendix functions in the development of the immune system. From a nonevolutionary, typological perspective, the human appendix is homologous to the end of the physiologically important, large, cellulose-fermenting caeca of other mammals. Even though humans eat cellulose, the contribution to cellulose digestion by both the human caecum and its associated appendix is negligible. Regardless of whether one accepts evolutionary theory or not, the human appendix is a rudiment of the caecum that is useless as a normal mammalian, cellulose-digesting caecum. Thus, by all accounts the vermiform appendix remains a valid and classic example of a human vestige. 
Acknowledgements Thanks to Colin Groves for helpful discussion on the comparative anatomy and homology of the primate appendix and caecal apex, and to John Harshman for insightful comments and clarification of levels of homology. References General Condon, R. E., and Telford, G. L. (1991) "Appendicitis." In: Sabiston Textbook of Surgery: The Biological Basis of Modern Surgical Practice. Townsend, C.M., editor. Fourteenth edition. W. B. Saunders and Co.: Philadelphia, PA. pp. 884-898. Fawcett, D. W. and Raviola, E. (1994) Bloom and Fawcett: A textbook of histology Chapman and Hall: New York, NY. Gray, H. (1918) Anatomy of the human body. Lea and Febige: Philadelphia, PA. Liu, C. D., and McFadden, D. W. (1997) "Acute abdomen and appendix." In: Surgery: Scientific Principles and Practice. Greenfield, L.J., and Mulholland, M. W., editors. Second edition. Williams and Wilkins: Baltimore, MD. pp. 1246-1261. McCabe, J. (1912) The story of evolution. Small Maynard and co.: Boston. [ Gutenberg text ] The Oxford companion to the body. (2001) editors, Colin Blakemore and Sheila Jennett. Oxford University Press: New York, NY. Royster, H. A. (1927) Appendicitis. Appleton and Company: New York. Williams, R. A. and Myers, P. (1994) Pathology of the appendix. Chapman and Hall Medical: New York, NY. Williams, P. L. and Warwick, R. (1980). Gray's Anatomy. Thirty-sixth edition. Churchill Livingstone: New York, NY. Epidemiology Addiss D. G., Shaffer N., Fowler B. S., and Tauxe R. V. (1990) "The epidemiology of appendicitis and appendectomy in the United States." Am J Epidemiol 132 :910-925. [PubMed] Blomqvist, P. G., Andersson, R. E., Granath, F., Lambe, M. P., and Ekbom, A. R. (2001) "Mortality after appendectomy in Sweden, 1987-1996." Ann Surg. 233: 455-460. [PubMed] Hardin D. M. (1999) "Acute appendicitis: review and update." Am Fam Physician 60: 2027-2034. [PubMed] Korner H., Sondenaa K., Soreide J. A., Andersen E., Nysted A., Lende T. H., and Kjellevold K. H. (1997) "Incidence of acute nonperforated and perforated appendicitis: age-specific and sex-specific analysis." World J Surg 21: 313-317. [PubMed] Luckmann R. "Incidence and case fatality rates for acute appendicitis in California. A population-based study of the effects of age." (1989) Am J Epidemiol. 129: 905-918. Pieper R., and Kager L. (1982) "The incidence of acute appendicitis and appendectomy. An epidemiological study of 971 cases." Acta Chir Scand 148: 45-49. [PubMed] Possible Functions Bjerke, K., Brandtzaeg, P., and Rognum, T. O. (1986) "Distribution of immunoglobulin producing cells is different in normal human appendix and colon mucosa." Gut. 27: 667-674. [PubMed] Craig, S. W. and Cebra, J. J. (1975) "Rabbit Peyer's patches, appendix, and popliteal lymph node B lymphocytes: a comparative analysis of their membrane immunoglobulin components and plasma cell precursor potential." J Immunol. 114: 492-502. [PubMed] Dasso, J. F., and Howell, M. D. (1997) "Neonatal appendectomy impairs mucosal immunity in rabbits." Cell Immunol 182: 29-37. [PubMed] Dasso, J. F., Obiakor, H., Bach, H., Anderson, A. O., and Mage, R. G. (2000) "A morphological and immunohistological study of the human and rabbit appendix for comparison with the avian bursa." Dev Comp Immunol 24: 797-814. [PubMed] Nagler-Anderson, C. (2001) "Man the barrier! Strategic defences in the intestinal mucosa." Nat Rev Immunol. 1: 59-67. [PubMed] Neiburger, J. B., Neiburger, R. G., Richardson, S. T., Grosfeld, J. L., and Baehner, R. L. 
(1976) "Distribution of T and B lymphocytes in lymphoid tissue of infants and children." Infect Immun 14: 118-121. [PubMed] O'Mally, C. D. and Saunders, J. B. de C. M. (1952) Leondardo da Vinci on the human body: the anatomical, physiological, and embryological drawings of Leonardo da Vinci. Ganis and Harris: New York, NY. Pospisil, R., and Mage, R.G. (1998) "Rabbit appendix: a site of development and selection of the B cell repertoire." Curr Top Microbiol Immunol. 229:59-70. [PubMed] Slavin, J.L., Brauer, P.M., and Marlett, J.A. (1980) "Neutral detergent fiber, hemicellulose and cellulose digestibility in human subjects." J Nutr 111(2):287-297. [PubMed] Somekh, E., Serour, F., Gorenstein, A., Vohl, M., and Lehman, D. (2000) "Phenotypic pattern of B cells in the appendix: reduced intensity of CD19 expression." Immunobiology 201: 461-469. [PubMed] Spencer, J., Finn, T., and Isaacson, P. G. (1985) "Gut associated lymphoid tissue: a morphological and immunocytochemical study of the human appendix." Gut 26: 672-679. [PubMed] Taxonomy and Systematics Geoffroy St. Hilaire (1798) "Observations sur l'aile de l'Autruche, par le citoyen Geoffroy." in La Decade Egyptienne, Journal Litteraire et D'Economie Politique. Premier Volume. Au Kaire, de L'Impreimerie Nationale. pp. 46-51 Goodman, M., Porter, C. A., Czelusniak, J., Page, S. L., Schneider, H., Shoshani, J., Gunnell, G., and Groves, C. P. (1998) "Toward a phylogenetic classification of Primates based on DNA evidence complemented by fossil evidence." Mol Phylogenet Evol. 9:585-598. [PubMed] Kitching, I. J., Forey, P. L., Humphries, C. J., and Williams, D. M. (1998) Cladistics: the theory and practice of parsimony analysis. Second edition. Oxford University Press: New York, NY. Remane, A. (1952) Die Grundlagen des Naturlichen Systems der Vergleichenden Anatomie und der Phylogenetik. Geest und Portig K.G.: Leipzig, Germany. Rieppel, O. (1988) Fundamentals of comparative biology. Birkhäuser Verlag: Boston. Schuh, R. T. (2000) Biological systematics: principles and applications. Cornell University Press: Ithaca, NY. Shoshani, J., Groves, C. P., Simons, E. L., and Gunnell, G. F. (1996) "Primate phylogeny: morphological vs. molecular results." Mol Phylogenet Evol. 5: 102-154. [PubMed] Shoshani, J., and McKenna, M. C. (1998) "Higher taxonomic relationships among extant mammals based on morphology, with selected comparisons of results from molecular data." Mol Phylogenet Evol.9: 572-584. [PubMed] Comparative Anatomy of the Appendix Berry, R. J. A. (1900) "The true caecal apex, or the vermiform appendix: Its minute and comparative anatomy." J Anat Physiol 35: 83. Fisher, R. E. (2000) "The primate appendix: a reassessment." Anat Rec. 261: 228-236. [PubMed] Hill, W. C. Osman (1953-1974) Primates, comparative anatomy and taxonomy. Interscience Publishers: New York, NY. Hyman, L. H. (1979) Hyman's Comparative vertebrate anatomy. Marvalee H. Wake, editor. Third edition. University of Chicago Press: Chicago, IL. Kardong, K.V. (2002) Vertebrates: Comparative anatomy, function, evolution. Third edition. McGraw-Hill: New York, NY. Kluge, A. G. (1977) Chordate structure and function. Macmillan: New York, NY. Neal, H. V. and Rand, H. W. (1936) Comparative anatomy. P. Blakiston's Son and Co.: Philadelphia, PA. Romer, A. S. and Parsons T. S. (1986) The vertebrate body. Sixth edition. Saunders College Pub.: Philadelphia, PA. Scott, G. B. (1980) "The primate caecum and appendix vermiformis: a comparative study." J Anat 131: 549-563. [PubMed] Smith, H. M. 
(1960) Evolution of chordate structure; an introduction to comparative anatomy. Holt, Rinehart and Winston: New York, NY. Weichert, C. K. (1967) Elements of chordate anatomy. Third edition. McGraw-Hill: New York, NY. Wiedersheim, R. (1886) Elements of the comparative anatomy of vertebrates. translated by W. Newton Parker. Macmillan: New York, NY. Wolff, R. G. (1991) Functional chordate anatomy. D.C. Heath: Lexington, MA. Prevention of Ulcerative Colitis Andersson, R. E., Olaison, G., Tysk, C., and Ekbom, A. (2001) "Appendectomy and protection against ulcerative colitis." N Engl J Med 344: 808-814. [PubMed] Buergel, N., Schulzke, J. D., and Zeitz, M. (2002) "Appendectomy reduces the risk of development of ulcerative colitis." Chirurg 73: 805-808. [PubMed] Judge, T., and Lichtenstein, G. R. (2001) "Is the appendix a vestigial organ? Its role in ulcerative colitis." Gastroenterology. 121: 73. [PubMed] Koutroubakis, I. E., and Vlachonikolis, I. G. (2000) "Appendectomy and the development of ulcerative colitis: results of a metaanalysis of published case-control studies." Am J Gastroenterol 95: 171-176. [PubMed] Koutroubakis, I. E., Vlachonikolis I. G., and Kouroumalis, E. A. (2002) "Role of appendicitis and appendectomy in the pathogenesis of ulcerative colitis: a critical review." Inflamm Bowel Dis 8: 277-286. [PubMed] Naganuma, M., Iizuka, B., Torii, A., Ogihara, T., Kawamura, Y., Ichinose, M., Kojima, Y., and Hibi, T. (2001) "Appendectomy protects against the development of ulcerative colitis and reduces its recurrence: results of a multicenter case-controlled study in Japan." Am J Gastroenterol 96: 1123-1126. [PubMed] Rutgeerts, P., D'Haens, G., Hiele, M., Geboes, K., and Vantrappen, G. (1994) "Appendectomy protects against ulcerative colitis." Gastroenterology 106: 1251-1253. [PubMed] Potential Ties Between Appendicitis and Cancer Andersen, E. and Isager, H. (1978) "Pre-morbid factors in Hodgkin's disease. II. BCG-vaccination status, tuberculosis, infectious diseases, tonsillectomy, and appendectomy." Scand J Haematol 21: 273-277. [PubMed] Gledovic, Z. and Radovanovic, Z. (1991) "History of tonsillectomy and appendectomy in Hodgkin's disease." Eur J Epidemiol 7: 612-615 [PubMed] Mellemkjaer, L., Johansen, C., Linet, M. S., Gridley, G., and Olsen, J. H. (1998) "Cancer risk following appendectomy for acute appendicitis (Denmark)." Cancer Causes Control. 9: 183-187. [PubMed] Congenital Absence of the Appendix Anyanwu, S. N. (1994) "Agenesis of the appendix--case report." West Afr J Med. 13: 66. [PubMed] Chevre, F., Gillet, M., and Vuilleumier, H. (2000) "Agenesis of the vermiform appendix." Surg Laparosc Endosc Percutan Tech. 10: 110-112. [PubMed] Collins, D.C. (1955) "A study of 50,000 specimens of the human vermiform appendix." Surg Gynecol Obstet. 101: 437-445. [PubMed] Hei, E. L. (2003) "Congenital absence of the vermiform appendix." ANZ J Surg. 73: 862. [PubMed] Host, W. H., Rush, B., and Lazaro, E. J. (1972) "Congenital absence of the vermiform appendix." Am Surg. 38: 355-356. [PubMed] Iuchtman, M. (1993) "Autoamputation of appendix and the 'absent' appendix." Arch Surg. 128: 600. [PubMed] Kalyshev, I. G., Andreev, G. F., Kolenda, I. V., and Mustiatsa, V. I. (1995) "Absence of the appendix." Klin Khir. 7-8: 49. [PubMed] Manoil, L. (1957) "Congenital absence of the appendix." Am J Surg. 93: 1040-1042. [PubMed] Pester, G.H. (1965) "Congenital absence of the vermiform appendix." Arch Surg. 91: 461-462. [PubMed] Piquet, F., Elmale, C., and Elhadad, A. (1986) "Absence of the appendix. 
Apropos of a case." J Chir (Paris). 123 :117-8. [PubMed] Ponomarenko, V. N., and Novikova, N. A. (1978) "Rare case of absence of the vermiform appendix." Vestn Khir Im I I Grek. 121: 54-55. [PubMed] Rolff, M., Jepsen, L. V., and Hoffmann, J. (1992) "The 'absent' appendix." Arch Surg. 127: 992. [PubMed] Saave, J. J. (1955) "Absence of the vermiform appendix; report of a case discovered at necropsy." Acta Anat (Basel). 23: 327-329. [PubMed] Shperber, J., Halevy, A., Sayfan, J., and Oland, J. (1983) "Congenital absence of the vermiform appendix." Isr J Med Sci. 19: 214-215. [PubMed] Tilson, M. D. and Touloukian, R. J. (1972) "Agenesis of the vermiform appendix." J Pediatr Surg. 7: 74. [PubMed]
i don't know
Photosynthesis is carried out in which part of the cell?
The Cell, Respiration and Photosynthesis
A Primer on Photosynthesis and the Functioning of Cells

Photosynthesis
Photosynthesis is the process by which organisms that contain the pigment chlorophyll convert light energy into chemical energy, which can be stored in the molecular bonds of organic molecules (e.g., sugars). Photosynthesis powers almost all trophic chains and food webs on the Earth. The net process of photosynthesis is described by the following equation:

6CO2 + 6H2O + light energy = C6H12O6 + 6O2

This equation simply means that carbon dioxide from the air and water combine in the presence of sunlight to form sugars; oxygen is released as a by-product of this reaction.

Some terms: PGA is phosphoglyceric acid, a three-carbon (C-C-C) organic acid. Grana are the stacked membranes that contain chlorophyll. RuBP is the five-carbon (C-C-C-C-C) sugar-phosphate. Rubisco is the enzyme ribulose bisphosphate carboxylase/oxygenase; it catalyzes the conversion of CO2 to the organic acid PGA and is the most abundant enzyme on Earth.

During the process of photosynthesis, light penetrates the cell and passes into the chloroplast. The light energy is intercepted by chlorophyll molecules on the granal stacks. Some of the light energy is converted to chemical energy: a phosphate is added to a molecule to form ATP, and the third phosphate bond holds the new chemical energy. The ATP then provides energy to some of the other photosynthetic reactions that convert CO2 into sugars. While the above reactions are proceeding, CO2 is diffusing into the chloroplast. In the presence of the enzyme Rubisco, one molecule of CO2 is combined with one molecule of RuBP, and the first product of this reaction is two molecules of PGA. The PGA then participates in a cycle of reactions that produces the sugars and regenerates RuBP. The RuBP is then available to accept another molecule of CO2 and to make more PGA.

Which wavelengths of the solar spectrum drive photosynthesis? The wavelengths of sunlight between 400 nm and 700 nm are the wavelengths that are absorbed by chlorophyll and that drive photosynthesis.

Energy Incident on a Leaf
Photosynthesis is not a very efficient process; only a small fraction of the sunlight reaching the surface of a leaf ends up stored as chemical energy in sugars.

Respiration is the opposite of photosynthesis, and is described by the equation:

C6H12O6 + 6O2 -> 6CO2 + 6H2O + 36 ATP

Simply stated, this equation means that oxygen combines with sugars to break molecular bonds, releasing the energy (in the form of ATP) contained in those bonds. In addition to the energy released, the products of the reaction are carbon dioxide and water. In eukaryotic cells, cellular respiration begins with the products of glycolysis being transported into the mitochondria. A series of metabolic pathways (the Krebs cycle and others) in the mitochondria results in the further breaking of chemical bonds and the liberation of ATP. CO2 and H2O are end products of these reactions. The theoretical maximum yield of cellular respiration is 36 ATP per molecule of glucose metabolized.

Note that photosynthesis is a reduction-oxidation reaction, just like respiration (see the primer on redox reactions from the lecture on Microbes). In respiration, energy is released from sugars when electrons associated with hydrogen are transported to oxygen (the electron acceptor), and water is formed as a byproduct.
The mitochondria use the energy released in this oxidation in order to synthesize ATP.  In photosynthesis, the electron flow is reversed, the water is split (not formed), and the electrons are transferred from the water to CO2 and in the process the energy is used to reduce the CO2 into sugar.  In respiration the energy yield is 686 kcal per mole of glucose oxidized to CO2, while photosynthesis requires 686 kcal of energy to boost the electrons from the water to their high-energy perches in the reduced sugar -- light provides this energy.  
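To make the energy bookkeeping in the photosynthesis and respiration discussion above concrete, here is a minimal worked restatement in LaTeX. It uses the 686 kcal per mole of glucose and the 36 ATP per glucose quoted above, plus the standard textbook value of roughly 7.3 kcal per mole of ATP, which is an assumption not given in the passage itself:

\[
\text{Respiration:}\quad \mathrm{C_6H_{12}O_6 + 6\,O_2 \longrightarrow 6\,CO_2 + 6\,H_2O} \qquad \Delta G \approx -686\ \text{kcal/mol glucose}
\]
\[
\text{Photosynthesis:}\quad \mathrm{6\,CO_2 + 6\,H_2O} + \text{light} \longrightarrow \mathrm{C_6H_{12}O_6 + 6\,O_2} \qquad \Delta G \approx +686\ \text{kcal/mol glucose}
\]
\[
\text{Fraction captured as ATP} \approx \frac{36 \times 7.3\ \text{kcal/mol}}{686\ \text{kcal/mol}} \approx 0.38
\]

On these figures, respiration captures roughly 40% of the chemical energy of glucose as ATP; the rest is lost as heat.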
Chloroplast
What is the name of the protective outer layer of trees?
Photosynthesis: what are chloroplasts? In this resource, part of the 'Photosynthesis - A Survival Guide' scheme for 11-14 pupils, students investigate chloroplasts and starch production, focusing on the learning objective “Light energy is absorbed by the green pigment in chloroplasts”. Light energy is trapped by chlorophyll, a green pigment found in small organelles called chloroplasts. Parts of a plant that contain these chloroplasts can carry out photosynthesis because they can absorb the light energy for the reaction. Students observe chloroplasts directly under the microscope using a plant such as Elodea pondweed. Following this, they take a thin section of potato tissue and stain it to show starch grains. Students then use their knowledge to hypothesise how variegated leaves might affect a plant’s growth. The resource includes teaching and technical notes, a PowerPoint presentation and a students' worksheet.
Diagram of a chloroplast showing starch grains (white dots)
i don't know
What liquid do plants need for photosynthesis?
UCSB Science Line
Why do plants need water?

Answer 1: All living things need water to stay alive, and plants are living things! Plants, however, need much more water than many living things because plants use much more water than most animals. Plants also contain more water than animals - plants are about 90% water. The amount of water a plant needs depends on the type of plant, how much light the plant gets, and how old the plant is. When plants are not watered properly they wilt. This is because of something called turgor, which is water pressure inside the cells that make up the plant's skeleton. Water enters a plant through its roots and travels up the stem to its leaves. When a plant is properly hydrated, there is enough water pressure to make the leaves strong and sturdy; when a plant doesn't get enough water, the pressure inside the stems and leaves drops and they wilt. Plants also need water for photosynthesis. Photosynthesis is what plants do to create their food, and water is critical to this process. Water taken up by the roots travels up the stem to the leaves, which is where photosynthesis actually takes place. Once in the leaves water evaporates, as the plant exchanges water for carbon dioxide. This process is called transpiration, and it happens through tiny openings in the plant's leaves, called stomata. The water from the leaves evaporates through the stomata, and carbon dioxide enters the stomata, taking the water's place. Plants need this carbon dioxide to make food. Transpiration - this exchange of water for carbon dioxide - only occurs during the day when there is sunlight. This is why you might find dew on plants in the morning. The plants contain a lot of water because all night long water has been entering through the roots and being pulled into the leaves where it can't evaporate. Since the water doesn't evaporate at night, the water has nowhere to go so it remains on the leaves as dew. When water evaporates from a plant during transpiration it cools the plant, in the same way that humans sweat to cool off in the heat. A mature house plant can transpire its body weight daily. This means it gives off a lot of water! If people needed that much water, an adult would drink 20 gallons of water a day.

Answer 2: Actually, all living things need water because life requires a LOT of chemical reactions. The chemicals are usually dissolved in water. Also, plants put the water together with carbon dioxide to make sugar. This takes energy, which plants get from light. Water also helps plants stand up tall, even when they aren’t made of wood. They don’t have bones, but they do have cell walls and water pressure. Water comes up from the roots, but carbon dioxide doesn’t. How do you think plants get carbon dioxide? Thanks for asking,

Answer 3: The plants need water because the reactions that take place in the cell to make energy require a watery medium.

Answer 4: Plants need water for the same reason that all living things do: to dissolve the chemicals they use to do their biology. Plants also use a water current up the plant for transport, which evaporates water out of the leaves, so they need water for that reason, too. Lastly, water is used to make sugar, and plants store energy in the form of sugar.
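The 20-gallon comparison in Answer 1 can be sanity-checked with a short calculation. The figures below are illustrative assumptions rather than numbers from the answer itself: an adult body mass of roughly 70 kg, water density of about 1 kg per liter, and about 3.79 liters per US gallon:

\[
70\ \text{kg of water} \approx 70\ \text{L} \approx \frac{70\ \text{L}}{3.79\ \text{L/gal}} \approx 18.5\ \text{gal} \approx 20\ \text{gallons}
\]

So a person "transpiring" their own body weight in water each day would indeed have to take in on the order of 20 gallons, which matches the comparison above.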
Water
What are the young of bats called?
PHOTOSYNTHESIS
Unlike animals, most plants do not need to find food, because they can make it for themselves. Plants use energy from sunlight to turn water and carbon dioxide into an energy-rich sugar called glucose. This process is called photosynthesis, which means “making things with light”. Photosynthesis takes place inside capsules in the leaf cells, called chloroplasts.

MAKING FOOD AND OXYGEN
Plants use their leaves to make food. Oxygen is created as a by-product. During photosynthesis, plant leaves take in carbon dioxide from the atmosphere. Using the energy from sunlight, this is combined with water drawn up from the roots to make glucose. Oxygen is also produced in this chemical reaction and exits the leaves into the surrounding air.

FOOD-PRODUCING CELLS
Different plant cells perform different tasks. Palisade cells and spongy cells are located just below the epidermis and are a plant’s main food-producers. The tall palisade cells are packed with green chloroplasts, which carry out photosynthesis. The irregularly shaped spongy cells also have chloroplasts. Air spaces between the cells are filled with carbon dioxide, water vapour and other gases.

CHLOROPLAST
Many leaf cells contain tiny, lens-shaped organelles called chloroplasts. These can move around the cell towards the light. Chloroplasts contain a green, light-capturing pigment called chlorophyll. This chemical helps the chloroplasts to act like minute solar panels.

INSIDE A CHLOROPLAST
Chloroplasts are made up of stacks of tiny disc-like membranes called grana, held in a dense mass of material known as the stroma. The grana are where water is split into hydrogen and oxygen, using some of the light energy captured by the chlorophyll. The rest of the light energy is used in the stroma to combine the hydrogen with the carbon dioxide to make glucose.
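The last two paragraphs describe the division of labour inside the chloroplast in words; the sketch below restates it as two simplified equations in LaTeX. The NADPH carrier and the idea that ATP and NADPH shuttle energy from the grana to the stroma are standard textbook details added here for context, not taken from the passage above:

\[
\text{Light reactions (grana):}\quad \mathrm{2\,H_2O} \xrightarrow{\ \text{light}\ } \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \quad (\text{energy captured as ATP and NADPH})
\]
\[
\text{Calvin cycle (stroma):}\quad \mathrm{6\,CO_2} + \text{ATP} + \text{NADPH} \longrightarrow \mathrm{C_6H_{12}O_6}
\]

Splitting water in the grana supplies the electrons and hydrogen; the stroma then spends the captured energy to reduce carbon dioxide to glucose, which is the "rest of the light energy" referred to above.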
i don't know
What is the larva of a toad called?
Larval forms | Article about Larval forms by The Free Dictionary
http://encyclopedia2.thefreedictionary.com/Larval+forms
larva, independent, immature animal that undergoes a profound change, or metamorphosis, to assume the typical adult form. Larvae occur in almost all of the animal phyla; because most are tiny or microscopic, they are rarely seen. They play diverse roles in the lives of animals. Motile larvae help to disseminate sessile, or sedentary, animals such as sponges, oysters, barnacles, or scale insects. Larvae of parasites may be dispersed by penetrating the skin of new hosts; other parasite larvae live in intermediate hosts that are normally eaten by the final host, in which the adult parasites develop. The larvae of other parasites live in and are dispersed by intermediate hosts such as mosquitoes, gnats, or leeches; when the blood meals are taken from the final host, the parasite larvae are introduced into the blood or skin. Parasitic infections can often be reduced by eliminating the larval hosts.
Vertebrate Larvae
Among vertebrates a number of fishes pass through larval stages; the larva of the eel is interesting because it is flat and transparent. The tadpole, the familiar larva of the amphibian, develops to a considerable size in the relatively hospitable aquatic environment before metamorphosis prepares it for an amphibious or terrestrial life as a frog or toad.
Insect Larvae
In some animals, especially insects, larvae represent a special feeding stage in the life cycle. Some insects pass through more or less wormlike larval stages, enter the outwardly inactive, or pupal, form, and emerge from the pupal case as adults (see pupa). The importance of larvae in the life cycle of insects varies greatly, as does the proportion of the life span spent in larval, pupal, and adult stages. In many insects, the adult life is relatively short, consisting mostly of mating and egg laying, while the larvae live for many months or, in some species, for several years. Insect larvae feed voraciously, necessarily becoming larger than the adult, as considerable energy and material are needed for the profound changes made during pupation. For this reason, insect larvae often cause far more damage to stored crops and textiles than adult insects. Insect larvae generally have a thinner exoskeleton than the adult; many are white and soft. The characteristic fly larvae are maggots, often developing in decaying plant or animal material. Mosquito larvae are the familiar aquatic wrigglers; they breathe air and are killed by a thin film of oil on the water that prevents contact with air. Maggots and wrigglers are legless, as are all larvae of the insect order Diptera. Beetle larvae, including the whitish forms called grubs and the long brownish wireworms, are quite diverse, but all are equipped with the six legs characteristic of adults. Moths and butterflies have wormlike caterpillars as larvae, each equipped with the six legs characteristic of adults and false legs known as prolegs to support the long abdominal section. Some, like the milkweed worm (the larva of the monarch butterfly), are relatively naked, while other caterpillars are covered by hairy bristles, sometimes equipped with irritating chemicals that can cause intense itching. The young of the social insects (bees, ants, wasps, and termites) are legless but otherwise grublike. Although all social-insect larvae are ultimately dependent on the parent colony for food, they are considered true larvae because they pass through a pupal stage.
Larva
a stage in the individual development of many invertebrates and some vertebrates (fishes and amphibians) in which the nutrient reserves of the egg are insufficient to complete embryonic development. An organism in the larval stage is self-sufficient.
Usually it has special organs not characteristic of the adult form but lacks other organs characteristic of the adult. In many animals, the existence of the larval stage is determined by the differences in the modes of life of the early stages of development and that of the adult stage; thus, the trochophore, characteristic of polychaetes and many mollusks, is free-swimming, but the adult form is benthic. The presence of a larva is sometimes associated with a change in habitat in the course of development. For example, many amphibian larvae are adapted to aquatic life, whereas the adult animals are adapted to dry land. In sessile or sluggish marine animals, a free-swimming larva ensures offspring distribution. This is true of the larvae of sponges and coelenterates (paren-chymula, amphiblastula, planula) and of echinoderms and enteropneusts (dipleurula). The metamorphosis of the larva to the adult animal consists in the restructuring of the larva’s organization; the more profound that restructuring, the greater will be the difference between the larva and the adult organism. The changes that occur in the metamorphosis of certain invertebrates (nemertines, echinoderms, and insects) are especially pronounced. For example, in higher insects in the pupal stage (which follows the larval stage), almost all of the larval organs are destroyed. The organs of the adult animal are formed de novo from special rudiments called imaginal disks. The larvae of some animals retain the structural characteristics of ancestral forms. For example, phylogenetic significance of this sort is ascribed to the larvae of sponges and coelenterates (parenchymula, planula) and to the caudate larvae of ascidians, which resemble a free-swimming ancestor in structure. A. V. I
Tadpole
Which bird, a member of the cuckoo family, is often seen dashing along the highways of the southern USA and Mexico... hence its name?
Frogs and Toads
WHAT IS THE DIFFERENCE BETWEEN A TOAD AND A FROG? Actually, all toads are frogs. They are both known to scientists as anurans, which comes from two Greek words meaning “without a tail.” However, we usually use the name “toad” for the ones that are dry and warty with shorter back legs and “frog” for the ones that are moist and smooth with webbed feet, but there are exceptions to that rule. Toads in North America are from a family of frogs called Bufonidae. They are dry and warty and somewhat laid-back, often walking rather than leaping or jumping. Of the more than 5,400 species of frogs in the world only a little more than 100 species live in the United States. There are that many different species living in just a small area of some rain forests! WHERE FROGS LIVE Frogs don't migrate. Instead they hibernate to escape from the cold or they estivate to escape from the heat or dryness. Frogs are most abundant around bodies of water with many of them living at the water's edge. However, some varieties of frogs live totally in the water, and some live in fast moving water and even waterfalls. Many species live in the tropical rain forests where it is moist all of the time. A smaller number live in grasslands, on mountains, and even in deserts. Frogs that live in areas that are dry for part of the year burrow into the ground and go into a dormant state called estivation or “summer sleep.” They stay there hidden from the heat of the sun and from predators until the rain comes when they emerge in large numbers until the dry conditions return. Some desert frogs can attach themselves to plants and turn white and hard as a rock during the dry season. When rain comes, they absorb water, wake from their sleep, and hop away. Since frogs are cold-blooded their bodies need the warmth of the sun in order to function. Frogs that live in cold climates hibernate in the winter by burying themselves in leaf litter or mud and wait for the warmth to return. The dancing frog lives in waterfalls. Because waterfalls are so noisy they can't attract a mate by singing, these frogs dance by alternately stretching their legs and wiggling their toes. Frogs that live in the mountains often have dark colors so they can absorb more heat from the sun. The dark color also protects them from the sun's ultraviolet radiation. Frogs are wonderfully designed to live in a wide variety of habitats. The European and Asian Common Toad (Bufo bufo) has been found at elevations over 26,000 feet (8,000 meters) in the Himalaya Mountains and at 1,115 feet (340 m) below ground in a coal mine. Frogs may live for several years, and generally the larger frogs have longer lifespans. A FROG'S BODY Where a frog or toad lives is reflected in the design of its body. The ones with long back legs can jump great distances while the ones with shorter legs either hop short distances or walk. Some of them have powerful legs for digging or burrowing into the ground. Some even have special shovels attached to their legs like the spadefoot toads. Most burrow with their rear legs, but some who burrow head first have pointed noses. Frogs who live in the water have webbed feet to help with swimming and those who live mostly on land do not. Frogs who live in trees or fast-moving water have sticky pads on their toes so they can cling to the trees or rocks. Some of the frogs with four webbed feet use their feet like parachutes to glide down from the trees.
Frogs that swim in water are often long and streamlined with pointed snouts, while those that spend their life on land and in dry areas are short and rotund. The round shape gives them less surface area for reduced water loss. It makes them poor swimmers, but they don't have to swim much anyway. Frogs and toads all have lungs for breathing, but they also breathe through their skin. Frogs with damp skin depend more on skin breathing (cutaneous respiration) because the skin has to be moist for oxygen and carbon dioxide to pass through. Of course, those skin-breathing frogs are also the ones who spend the most time in the water. Desert frogs have dry skin so that they won't lose moisture, and they have larger lungs because they cannot breathe through their skin. Clever design to fit the environment is the key to the amazing frogs. A FROG'S EYES Some people have said that frogs have the most beautiful eyes of any animal. They come in a variety of colors from red, orange, and yellow to metallic copper, silver, bronze, and gold. In most frogs and toads the pupil is horizontal, but many are vertical. However, some varieties of frogs have pupils that are round, triangular, heart-shaped, hourglass-shaped, and diamond-shaped. The eyes are mounted high on the sides of the head to give visibility to the front, sides, and even partially to the rear, all while the frog is mostly under water. The frog's eyes can even be lowered into the roof of the mouth to help in swallowing large food items. Frogs that are singing will abruptly stop when they see an intruder entering their area. This makes it hard to find that noisy frog. At night frogs are attracted to lights, perhaps because insects are attracted to lights and frogs love to eat insects. SOME PEOPLE LIKE FROGS - AND SOME DON'T Many frogs have large, bulging eyes and the appearance of a grin. This may make them easier to like than some creatures, such as lizards or snakes, for example. Also, lizards and snakes may be more hostile to people than frogs, who usually just react to people by trying to hop away. However, poison dart frogs can be deadly to anyone who touches them, and not everyone likes even the least deadly of the frogs. People often complain about the “noise” of their chorus at certain times of year, and Carolus Linnaeus, the Swedish scientist who gave us the system of scientific naming of animals and the term “amphibian,” called frogs “foul and loathsome.” MORE INTERESTING STUFF ABOUT FROGS Frogs can range in size from almost 15 inches (37 cm) long and weighing 8 pounds (3.66 kg) to as small as less than half an inch (1 cm). Kermit the Frog said, "It's not easy being green," but not all frogs are green. Frogs can be very colorful or they may be dull brown, gray, or green, designed to blend with their environment. The brightest colored frogs are the poisonous ones, which may be bright red, orange, yellow, or blue. They don't need to hide for their protection, and the bright colors warn predators to stay away. The skin of a single Golden Poison Dart Frog (Phyllobates terribilis) contains enough poison to kill 10 people. Fortunately most deadly poisonous frogs live in jungles where there are few people, but most frogs have at least some poison. Even common toads have parotoid glands just behind their eyes that secrete toxins that help to deter enemies. EVEN MORE INTERESTING (AND GROSS) STUFF ABOUT FROGS Periodically frogs shed their skin as snakes and lizards do. However, frogs don't leave their old skin lying around like the snakes and lizards.
They eat it! The process usually starts with a stretching movement and humping of the back. The skin splits, and the frog uses its legs to pull the skin into its mouth, where it gulps the skin down; the process is complete in a few minutes. The animal has a fresh, shiny new skin without plastic surgery. FROG VOICES Frogs make a chorus of a thousand different sounds, both pleasing and irritating. However, the “ribbit” sound which is often associated with frogs comes only from one American species, the Pacific Tree Frog. Hollywood filmmakers have recorded that sound and used it for frog background sounds in movies set all over the world in places where that frog never lived. Other frogs produce bonks, yelps, grunts, and chirps that are far too numerous to mention here. These sounds are mostly from the males, and they are created by vocal sacs that are made to resonate by pumping air over the vocal cords. HOW FROGS GET AROUND Some species of frogs prefer to walk or run, but most hop or even leap, as much as more than 30 times their own body length! Many frogs with sticky toe pads are excellent climbers. Tree frogs can even climb up a smooth surface like glass. They have even been seen hanging from a tree branch by one toe. Some frogs, like the Indian Water Frog (Rana cyanophlyctis), can even run across the surface of the water. FROG REPRODUCTION Frogs lay large numbers of eggs, usually in the water. The eggs hatch into larvae called tadpoles or pollywogs, which have a tail but no legs, have gills like a fish, and can swim freely. After a few days the tadpoles grow legs, lose their tails, emerge from the water, and look like smaller versions of their parents. WHAT FROGS EAT Every animal gets hungry. So what do frogs eat? While they are tadpoles they usually are vegetarians, eating algae and bacteria. All adult frogs and toads are carnivorous, meaning that they eat other animals. Frogs can't chew their food, so they have to swallow it whole. Most frogs have tiny teeth in the upper jaw (toads have none), but the teeth are used only to keep the prey from getting away. Frogs can eat insects, spiders, worms, slugs, and snails. They catch their prey with a long, sticky tongue. The tongue is attached at the front of the frog's mouth, and the frog can quickly throw it forward to catch insects in flight. Their tongue may be up to a third of the length of the frog's body. They usually don't go after their prey. They sit quietly and wait for their meal to arrive. Larger frogs may eat mice or rats, lizards, snakes, small birds, and even other frogs and toads. Frogs who live in the water may also eat larvae and small fish. WE NEED FROGS Frogs are our friends. They eat tons of harmful insects every year. They are vital to the food chain and an important part of the web of life. Frogs depend on water for reproducing, eating, and even to some extent for breathing. Frogs don't drink, but they absorb water through their skin. Pollution in the water threatens the life of frogs, and many frogs around the world are being threatened with extinction. Chemical runoff from roadways and pesticides in the soil, draining of swamps and wetlands, and the cutting of rain forests all threaten our friends the frogs. God gave us a beautiful, well-designed world, with a delicate balance of life. We need to do everything we can to be good stewards of this wonderful gift. Frogs need our protection.
i don't know
What grow as parasites and saprotrophs, contain no chlorophyll, and reproduce by means of spores?
Fungi kingdom - definition of Fungi kingdom by The Free Dictionary Fungi kingdom - definition of Fungi kingdom by The Free Dictionary http://www.thefreedictionary.com/Fungi+kingdom Related to Fungi kingdom: Protista kingdom fun·gus  (fŭng′gəs) n. pl. fun·gi (fŭn′jī, fŭng′gī) or fun·gus·es Any of numerous spore-producing eukaryotic organisms of the kingdom Fungi, which lack chlorophyll and vascular tissue and range in form from a single cell to a mass of branched filamentous hyphae that often produce specialized fruiting bodies. The kingdom includes the yeasts, smuts, rusts, mushrooms, and many molds, excluding the slime molds and the water molds. [Latin; perhaps akin to Greek spongos, sphongos, sponge.] fungus (ˈfʌŋɡəs) n, pl fungi (ˈfʌŋɡaɪ; ˈfʌndʒaɪ; ˈfʌndʒɪ) or funguses 1. (Plants) any member of a kingdom of organisms (Fungi) that lack chlorophyll, leaves, true stems, and roots, reproduce by spores, and live as saprotrophs or parasites. The group includes moulds, mildews, rusts, yeasts, and mushrooms 2. something resembling a fungus, esp in suddenly growing and spreading rapidly 3. (Pathology) pathol any soft tumorous growth [C16: from Latin: mushroom, fungus; probably related to Greek spongos sponge] fungic adj n., pl. fun•gi (ˈfʌn dʒaɪ, ˈfʌŋ gaɪ) fun•gus•es. any member of the kingdom Fungi (or division Thallophyta of the kingdom Plantae), comprising single-celled or multinucleate organisms that live by decomposing and absorbing the organic material in which they grow: includes the mushrooms, molds, mildews, smuts, rusts, and yeasts. [1520–30; < Latin: fungus; sponge ] fun•gic (ˈfʌn dʒɪk) adj. fun·gus (fŭng′gəs) Plural fungi (fŭn′jī, fŭng′gī) Any of a wide variety of organisms that reproduce by spores, including the mushrooms, molds, yeasts, and mildews. The spores of most fungi grow a network of slender tubes called hyphae that spread into and feed off of living organisms or dead organic matter. The hyphae also produce reproductive structures, such as mushrooms and other growths. Fungi are grouped as a separate kingdom in taxonomy. See Table at taxonomy . fungal adjective Did You Know? There's a fungus among us, as they say. And it's true—they are everywhere. You have no doubt eaten mushrooms, which are fungi. And you have eaten bread, made with yeast, another fungus. Old bread may grow mold, still another fungus. Athlete's foot and a variety of other infections are caused by fungi, but, on the good side, a fungus also produces the medicine penicillin. About 100,000 different species of fungi exist. When you see a light-colored splat on a tree or rock in the woods, it is probably a lichen, which is a fungus and an alga living in a symbiotic relationship, benefiting each other. Fungi are neither plants nor animals; they are different enough to be classified by scientists into their own unique kingdom. fungus (pl. fungi) A member of the kingdom Fungi, a group of nonmotile saprophytes and parasites. ThesaurusAntonymsRelated WordsSynonymsLegend: Noun 1. 
fungus - an organism of the kingdom Fungi lacking chlorophyll and feeding on organic matter; ranging from unicellular or multicellular organisms to spore-bearing syncytia organism , being - a living thing that has (or can develop) the ability to act or function independently immune reaction , immune response , immunologic response - a bodily defense reaction that recognizes an invading substance (an antigen: such as a virus or fungus or bacteria or transplanted organ) and produces antibodies specific against that antigen pileus , cap - a fruiting structure resembling an umbrella or a cone that forms the top of a stalked fleshy fungus such as a mushroom volva - cuplike structure around the base of the stalk of certain fungi hymenium - spore-bearing layer of cells in certain fungi containing asci or basidia Ceratostomella ulmi , Dutch elm fungus - fungus causing Dutch elm disease Claviceps purpurea , ergot - a fungus that infects various cereal plants forming compact black masses of branching filaments that replace many grains of the plant; source of medicinally important alkaloids and of lysergic acid black root rot fungus , Xylaria mali - fungus causing black root rot in apples dead-man's-fingers , dead-men's-fingers , Xylaria polymorpha - the fruiting bodies of the fungi of the genus Xylaria sclerotinia - any fungus of the genus Sclerotinia; some causing brown rot diseases in plants earthball , hard-skinned puffball , puffball , false truffle - any of various fungi of the genus Scleroderma having hard-skinned subterranean fruiting bodies resembling truffles stalked puffball - mushroom of the genus Tulostoma that resembles a puffball false truffle - any of various fungi of the family Rhizopogonaceae having subterranean fruiting bodies similar to the truffle slime mold , slime mould - a naked mass of protoplasm having characteristics of both plants and animals; sometimes classified as protoctists pond-scum parasite - an aquatic fungus of genus Synchytriaceae that is parasitic on pond scum potato wart fungus , Synchytrium endobioticum - fungus causing potato wart disease in potato tubers Saprolegnia ferax , white fungus - a fungus that attacks living fish and tadpoles and spawn causing white fungus disease: a coating of white hyphae on especially peripheral parts (as fins) white rust - fungus causing a disease characterized by a white powdery mass of conidia pythium - any fungus of the genus Pythium Phytophthora citrophthora - causes brown rot gummosis in citrus fruits Phytophthora infestans - fungus causing late blight in solanaceous plants especially tomatoes and potatoes clubroot fungus , Plasmodiophora brassicae - a fungus resembling slime mold that causes swellings or distortions of the roots of cabbages and related plants earth-ball , earthnut , truffle - any of various highly prized edible subterranean fungi of the genus Tuber; grow naturally in southwestern Europe coral fungus - any of numerous fungi of the family Clavariaceae often brightly colored that grow in often intricately branched clusters like coral tooth fungus - a fungus of the family Hydnaceae lichen - any thallophytic plant of the division Lichenes; occur as crusty patches or bushy growths on tree trunks or rocks or bare ground etc. 
Fungi , fungus kingdom , kingdom Fungi - the taxonomic kingdom including yeast, molds, smuts, mushrooms, and toadstools; distinct from the green plants true fungus - any of numerous fungi of the division Eumycota basidiomycete , basidiomycetous fungi - any of various fungi of the subdivision Basidiomycota Chinese black mushroom , golden oak mushroom , Lentinus edodes , Oriental black mushroom , shiitake , shiitake mushroom - edible east Asian mushroom having a golden or dark brown to blackish cap and an inedible stipe Lentinus lepideus , scaly lentinus - a fungus with a scaly cap and white flesh and a ring on the stalk (with scales below the ring); odor reminiscent of licorice Corticium salmonicolor , pink disease fungus - fungus causing pink disease in citrus and coffee and rubber trees etc
Fungus
Why do fish have gills?
CHAPTER 31 FUNGI (BIOL 1202) Concept 31.1 – Fungi are heterotrophs that feed by absorption. Fungal Nutrition and Lifestyles • Fungi are heterotrophs, but do not ingest their food o Secrete exoenzymes into their surroundings that break down complex molecules o Absorb smaller organic molecules • Fungi exhibit diverse lifestyles o Decomposers, a.k.a. saprotrophs o Parasites o Mutualistic symbionts Body Structure • The morphology of multicellular fungi enhances absorption of nutrients • Fungi consist of hyphae that are grouped together into mycelia • Most fungi have cell walls made of chitin • Figure 31.2 • Some fungi have hyphae divided into cells by septa, with pores allowing cell-to-cell movement of materials • Coenocytic fungi lack septa • Figure 31.3 Q: Which of the following is a “job” not performed by any fungi? A: Producer Specialized Hyphae • Some fungi have hyphae that allow them to capture animals and penetrate the tissues of their hosts • Mycorrhizae are mutualisms between fungi and plant hosts o Ectomycorrhizae – surround root cells o Endomycorrhizae – extend hyphae through the root-cell wall o Figure 31.4 Concept 31.2 – Fungi produce spores through sexual or asexual life cycles. Generalized Fungi Life Cycle • Figure 31.5 Sexual Reproduction • The sexual life cycle involves o Cell fusion – plasmogamy o Nuclear fusion – karyogamy • The heterokaryotic state is between plasmogamy and karyogamy o Cells have haploid nuclei from two parents • The diploid phase following karyogamy is short-lived and undergoes meiosis, producing haploid spores Asexual Reproduction • Many fungi, such as molds, produce spores asexually on conidia • Single-celled yeasts reproduce by budding • Molds and yeasts with no known sexual stage are classified as deuteromycetes, or imperfect fungi • Figure 31.7 Q: In the cells of a heterokaryotic mycelium, how many nuclei are there and what is the ploidy (chromosome #)? A: Two or more; haploid Concept 31.4 – Fungi have radiated into a diverse set of lineages. • Figure 31.11 Chytrids • Fungi classified in the phylum Chytridiomycota • Saprobic or parasitic • Unique in having flagellated spores called zoospores • Figure 30.12 • Once thought that chytrids were monophyletic • Molecular evidence suggests some chytrids are closely related to zygomycetes Q: When the two nuclei in the cell of a dikaryotic mycelium fuse within the fruiting body, what cell type is produced? A: Zygote Zygomycetes • Fungi in the phylum Zygomycota • Include molds, parasites, and commensal symbionts • Named for their sexually produced zygosporangia • Life Cycle of Rhizopus stolonifer o Figure 31.13 Glomeromycetes • Fungi in the phylum Glomeromycota • Form a distinct type of endomycorrhizae called arbuscular mycorrhizae • About 90% of all plants have a mutualistic relationship with glomeromycetes
i don't know
Which animal can move by jet propulsion?
What is a nautilus? The nautilus is a mollusk that uses jet propulsion to roam the ocean deep. Writers, artists, and engineers have long marveled at the nautilus's beauty and swimming abilities. The chambered or pearly nautilus is a cephalopod (a type of mollusk)—a distant cousin to squids, octopi, and cuttlefish. Unlike its color-changing cousins, though, the soft-bodied nautilus lives inside its hard external shell. The shell itself has many closed interior chambers or “compartments.” The animal resides in the shell's largest chamber, while the other chambers function like the ballast tanks of a submarine. This is the secret to how the nautilus swims. The tissue in a canal called the siphuncle [sigh-funk-el] connects all of the interior chambers. As seawater pumps through the living chamber, the nautilus expels water by pulling its body into the chamber, thereby creating jet propulsion to thrust itself backwards and to make turns. While swimming up or down through the water column, the nautilus uses its siphuncle to suck fluid into, or draw it out of, the smaller sealed chambers, allowing the animal to adjust its overall buoyancy. According to fossil records, animals similar to the chambered nautilus have existed for about 500 million years. Although no regulations currently exist to protect them, the six living species of chambered nautilus appear to be in decline. They are trapped mostly for their attractive shells and also for the shell's inner layer, called nacre, which is used as a pearl substitute in jewelry and trinkets. In 2013, NOAA Fisheries funded a University of Washington researcher to conduct population studies of the nautilus in Fiji and American Samoa. The research should provide a clearer picture of nautilus abundance in those areas.
Octopus
What name is given to the microscopic plants found in great numbers in rivers, lakes, and oceans?
Cephalopods - Cephalopoda - The Animal Encyclopedia By Laura Klappenbach Updated November 01, 2015. Cephalopods (Cephalopoda) are a group of mollusks that includes 3,300 living species. Members of this group include the octopuses, cuttlefish, squid and nautiluses. Cephalopods are exclusively marine animals. They include the largest, most intelligent and most mobile of all mollusks. Cephalopods have a large, prominent head, tentacles, large complex eyes and exhibit complex behavior. Like most mollusks, the majority of cephalopods have a mantle, a radula and breathe using gills. The most obvious difference among the various cephalopods is the presence or absence of an external shell. Squids, cuttlefish and octopuses do not have an external shell. Instead, they either have an internal shell called a gladius or they lack a shell entirely. The nautilus has an external shell (in fact, it is the only living cephalopod to have an external shell). The cephalopod eye is a complex structure and rivals the vertebrate eye in its sophistication. The cephalopod eye is large relative to the size of its body and consists of a pupil, lens, iris and, in some groups (such as octopuses), a cornea (squids and cuttlefish lack a cornea). The shape of the pupil varies between the groups (octopuses have a rectangular pupil, cuttlefish have a U-shaped pupil and squids have a round pupil). Many cephalopods rely on their acute vision to detect predators as well as locate prey. Their vision is advanced enough to detect differences in the size, shape, brightness and orientation of objects. Cephalopods move in part by jet propulsion. Part of the mantle of a cephalopod forms a siphon through which water is forced. As the water pressure moves through the siphon, it forces the cephalopod forward and in this way produces jet propulsion. Cephalopods also use their tentacles to move and help maintain their velocity. Cephalopods have a beak-like structure that they use to feed. They capture their prey using their tentacles and bring it to their mouth, where they use their beak to bite chunks off before ingesting their prey. Most cephalopods also have a radula, which consists of several rows of teeth. Cephalopods have pigment-filled cells in their skin called chromatophores that they can expand and contract to expose or hide spots of color. This enables cephalopods to quickly change color to blend in with their surroundings or aid in courtship and other communication. In some cephalopods, chromatophores are bioluminescent and can shine light in such a way as to conceal their shadow from any predators and thereby escape detection. Cephalopods are unique among mollusks in having a closed circulatory system. Coleoids (squids, cuttlefish and octopuses) have two gill hearts (that pump blood through the gills) and a third systemic heart (that pumps blood throughout the body). Cephalopods have a large, centralized brain, well-developed senses and are able to learn complex behavior. Their brain resides within a protective cranium made up of cartilage. Most cephalopods (except the nautilus and some octopuses) have an ink sac, a muscular bag that holds dark ink (melanin). The ink is expelled into the water, where it forms a dark cloud. This enables cephalopods to obscure themselves and to confuse predators. The inking habits of cephalopods have earned them the common name of "inkfish". The earliest cephalopods were the nautiloids, which appeared in the Late Cambrian.
They were thought to be predators that occupied the top of their food chain. Most ancient cephalopods had external shells, in contrast to living species, which, except for the nautilus, all either have an internal shell or lack a shell entirely. Classification: Animals > Invertebrates > Mollusks > Cephalopods. Cephalopods are divided into the following taxonomic groups: Nautiluses (Nautiloidea) - Although there are about 2500 known species of fossil nautiluses, only 6 species remain alive today. Members of this group have a coiled spiral shell with dark orange stripes. The shell is composed of numerous chambers, and the nautilus lives in the largest chamber at the open end of the spiral. Squids, cuttlefish and octopuses (Coleoidea) - There are about 794 species of squids, cuttlefish and octopuses alive today. Members of this group are soft-bodied animals without a protective outer shell; in some species there is an internal shell, and in others no shell at all.
i don't know
What are the nocturnal, herding herbivores of Australia, Tasmania, and New Guinea?
ANIMAL KINGDOM :: MARSUPIAL MAMMALS :: EXAMPLES OF MARSUPIALS - Visual Dictionary Online. Tasmanian devil: Carnivorous scavenging nocturnal marsupial with powerful jaws that allow it to devour its prey whole (flesh, bones, fur, feathers). opossum: Omnivorous nocturnal marsupial of the Americas and Australia without a pouch; its fur is highly prized. wallaby: Marsupial closely related to the kangaroo and living in Australia, Tasmania and New Guinea; certain species are prized for their fur. koala: Tailless nocturnal marsupial of Australia; this solitary tree-dweller lives in eucalyptus forests and feeds on the tree's leaves. kangaroo: Herbivorous marsupial with a highly developed tail; it lives in groups in Australia and Tasmania and moves rapidly by leaping.
Kangaroo
Where in an animal would you find a mandible?
Cute Kangaroo at Singapore Zoo - YouTube. Published on Oct 3, 2013. Kangaroos are a family within the marsupial order Diprotodontia. They are among the best-known marsupials and typical representatives of the Australian fauna, but they also live in New Guinea. Kangaroos are characterized by their significantly longer hind legs and are herbivores that are mainly crepuscular or nocturnal. The family includes about 65 recent species, four of which have become extinct. Kangaroos differ significantly in their proportions. The largest species, the red kangaroo, can reach a height of up to 1.8 meters and a weight of 90 kilograms, while the shaggy-haired hare-wallaby weighs only 0.8 to 1.8 kilograms and has a body length of 31-39 centimeters. In nearly all species the hind legs are longer and stronger than the front legs; the exceptions are the tree kangaroos, which have adapted to life in the trees, no longer move by hopping, and have front and hind legs of almost equal length. The tail is long, muscular and mostly hairy; it is often used as a support or for balance, but it cannot be used as a prehensile tail. The fur is mostly colored in gray or brown tones, though there are also patterned species, such as the rock wallabies. The head is elongated and relatively small compared with the body, and the ears are large. In the upper jaw, kangaroos have six incisors, but only two in the lower jaw. The stomach of kangaroos, like that of the ruminants, is divided into several chambers; it has three sections. Kangaroos occur in Australia, including offshore islands such as Tasmania, and in New Guinea. They occupy different habitats and are found in rain forests as well as in bush, grasslands, dry steppe and desert regions. Some species, like the rock and bush kangaroos, inhabit mountainous regions and are found at altitudes of over 3,100 meters. Kangaroos are also variable in their social behavior and activity times. Most species are crepuscular or nocturnal to varying degrees, but they can also be observed during the day, for example sunbathing in the afternoon. Tree kangaroos do not hop but can climb well. The short-tailed quokka and the pademelons move mainly on all fours. Kangaroos are herbivores that feed on various plants depending on their habitat. As with all marsupials, baby kangaroos are born after a short gestation period of around 20 to 40 days and, compared with placental mammals, come into the world relatively underdeveloped. Even in the largest species, the red kangaroo, the joey at birth measures only about 2.5 centimeters and weighs 0.75 grams. Kangaroos were already important prey for the Aboriginal people, who hunted them for their meat (kangaroo meat) and processed their skins. On the other hand, Aboriginal slash-and-burn practices, whether used for hunting or to create simple crops, opened up new habitat. The Europeans also hunted these animals after their arrival. Today most Australian kangaroo species are protected. The red and gray kangaroos, however, have expanded significantly since the arrival of Europeans. Kangaroo meat has long had a bad reputation; it was considered a poor man's food, only for those who could not afford any other.
In Australia, the meat itself is not very popular and is mostly processed into animal feed. A greater threat than hunting - which affected only the larger species - was and still is the destruction of the kangaroos' habitat. The greatest threat to kangaroos is still man. Australian farmers see the animals as a threat to their livelihood, since they damage their fields, and fight them in various ways: watering places are poisoned, and many kangaroos are shot. Despite clear laws in Australia requiring injured animals to be killed immediately, they are often brought to the slaughterhouses alive. The kangaroo and the emu are the heraldic animals of Australia; both animals can only move forward, which is meant to symbolize progress. Kangaroos are also ubiquitous as symbolic animals in Australia, for example on the emblem of the airline Qantas Airways or on the Australian one-dollar coin.
i don't know
What is a beaver's home called?
Beavers, Beaver Pictures, Beaver Facts - National Geographic. Beavers are famously busy, and they turn their talents to reengineering the landscape as few other animals can. When sites are available, beavers burrow in the banks of rivers and lakes. But they also transform less suitable habitats by building dams. Felling and gnawing trees with their strong teeth and powerful jaws, they create massive log, branch, and mud structures to block streams and turn fields and forests into the large ponds that beavers love. Domelike beaver homes, called lodges, are also constructed of branches and mud. They are often strategically located in the middle of ponds and can only be reached by underwater entrances. These dwellings are home to extended families of monogamous parents, young kits, and the yearlings born the previous spring. Beavers are among the largest of rodents. They are herbivores and prefer to eat leaves, bark, twigs, roots, and aquatic plants. These large rodents move with an ungainly waddle on land but are graceful in the water, where they use their large, webbed rear feet like swimming fins, and their paddle-shaped tails like rudders. These attributes allow beavers to swim at speeds of up to five miles (eight kilometers) an hour. They can remain underwater for 15 minutes without surfacing, and have a set of transparent eyelids that function much like goggles. Their fur is naturally oily and waterproof. There are two species of beavers, which are found in the forests of North America, Europe, and Asia. These animals are active all winter, swimming and foraging in their ponds even when a layer of ice covers the surface.
Lodge
Which tissue carries sugary sap around the plant?
How Beavers Build a Lodge - BBC Animals - YouTube. Uploaded on Dec 19, 2008. Sir David Attenborough narrates this fascinating animal video recording the way in which beavers build a lodge in just 20 days. Includes cute footage of baby beavers. Interesting wildlife video from the BBC show 'Beavers: The Master Builder'.
i don't know
Which cells form the middle layer of plant leaves?
Leaf Structure, Function, and Adaptation. Leaves have many structures that prevent water loss, transport compounds, aid in gas exchange, and protect the plant as a whole. Learning Objective: Describe the internal structure and function of a leaf. Key Points: The epidermis consists of the upper and lower epidermis; it aids in the regulation of gas exchange via stomata. The epidermis is one layer thick, but may have more layers to prevent transpiration. The cuticle is located outside the epidermis and protects against water loss; trichomes discourage predation. The mesophyll is found between the upper and lower epidermis; it aids in gas exchange and photosynthesis via chloroplasts. The xylem transports water and minerals to the leaves; the phloem transports the photosynthetic products to the other parts of the plant. Plants in cold climates have needle-like leaves that are reduced in size; plants in hot climates have succulent leaves that help to conserve water. Leaf Structure and Function: The outermost layer of the leaf is the epidermis. It consists of the upper and lower epidermis, which are present on either side of the leaf. Botanists call the upper side the adaxial surface (or adaxis) and the lower side the abaxial surface (or abaxis). The epidermis aids in the regulation of gas exchange. It contains stomata, which are openings through which the exchange of gases takes place. Two guard cells surround each stoma, regulating its opening and closing. Guard cells are the only epidermal cells to contain chloroplasts. The epidermis is usually one cell layer thick. However, in plants that grow in very hot or very cold conditions, the epidermis may be several layers thick to protect against excessive water loss from transpiration. A waxy layer known as the cuticle covers the leaves of all plant species. The cuticle reduces the rate of water loss from the leaf surface. Other leaves may have small hairs (trichomes) on the leaf surface. Trichomes help to avert herbivory by restricting insect movements or by storing toxic or bad-tasting compounds. They can also reduce the rate of transpiration by blocking air flow across the leaf surface. Trichomes give leaves a fuzzy appearance, as in the (a) sundew (Drosera sp.); leaf trichomes include (b) branched trichomes on the leaf of Arabidopsis lyrata and (c) multibranched trichomes on a mature Quercus marilandica leaf. Below the epidermis of dicot leaves are layers of cells known as the mesophyll, or "middle leaf." The mesophyll of most leaves typically contains two arrangements of parenchyma cells: the palisade parenchyma and spongy parenchyma. The palisade parenchyma (also called the palisade mesophyll) aids in photosynthesis and has column-shaped, tightly-packed cells. It may be present in one, two, or three layers. Below the palisade parenchyma are loosely-arranged cells of an irregular shape. These are the cells of the spongy parenchyma (or spongy mesophyll). The air space found between the spongy parenchyma cells allows gaseous exchange between the leaf and the outside atmosphere through the stomata.
In aquatic plants, the intercellular spaces in the spongy parenchyma help the leaf float. Both layers of the mesophyll contain many chloroplasts. (a) (top) The central mesophyll is sandwiched between an upper and lower epidermis. The mesophyll has two layers: an upper palisade layer and a lower spongy layer. Stomata on the leaf underside allow gas exchange. A waxy cuticle covers all aerial surfaces of land plants to minimize water loss. (b) (bottom) These leaf layers are clearly visible in the scanning electron micrograph. The numerous small bumps in the palisade parenchyma cells are chloroplasts. The bumps protruding from the lower surface of the leaf are glandular trichomes. Similar to the stem, the leaf contains vascular bundles composed of xylem and phloem . The xylem consists of tracheids and vessels, which transport water and minerals to the leaves. The phloem transports the photosynthetic products from the leaf to the other parts of the plant. A single vascular bundle, no matter how large or small, always contains both xylem and phloem tissues.
Mesophyll
Which antipodean bird is the largest member of the kingfisher family?
Molecular Expressions Cell Biology: Plant Cell Structure - Leaf Tissue Organization   Leaf Tissue Organization The plant body is divided into several organs: roots, stems, and leaves. The leaves are the primary photosynthetic organs of plants, serving as key sites where energy from light is converted into chemical energy. Similar to the other organs of a plant, a leaf is comprised of three basic tissue systems, including the dermal, vascular, and ground tissue systems. These three motifs are continuous throughout an entire plant, but their properties vary significantly based upon the organ type in which they are located. All three tissue systems are illustrated in Figure 1, which is a cutaway drawing of a typical leaf. The dermal tissue of a plant, more specifically referred to as the epidermis, is an outer protective layer of typically polygonal cells, which helps defend against injury and invasion by foreign organisms. The epidermis of the leaf also functions in a more specialized manner by secreting a waxy substance that forms a coating, termed the cuticle, on the surface of the leaf. An adaptation unique to terrestrial plants, the cuticle functions chiefly in the retention of water. As presented in Figure 1, the cells that comprise the epidermis of a leaf are arranged very tightly together in a single stratum. Microscopic pores known as stomata are the only breaches in the otherwise continuous layer of the leaf epidermis. Each individual pore, or stoma, is, in fact, a small opening between a pair of specialized cells known as guard cells. By modifying the size of the stomata, guard cells are able to regulate gas exchange and transpiration. Such modifications are influenced by various environmental factors. For example, when the weather is unusually hot and dry, the guard cells of plants in danger of losing too much water narrow the stomata width in order to reduce evaporation from the leaf interior. In order for leaves to obtain water and minerals from the roots and for food manufactured in mature leaves to be transported to the roots and other nonphotosynthetic regions, each leaf must be connected to the overall vascular structure of the plant. Accordingly, the main vascular bundle of xylem and phloem present in the stem of a plant bifurcates into leaf traces, which are branches of vascular tissue that supply leaves. Each leaf trace further branches into the familiar veins that can often be seen along the surface of leaves, and the veins repeatedly subdivide as well. The vascular components, which serve as a basic skeletal structure in addition to functioning in the transport of materials, extend throughout the mesophyll so that the xylem and phloem are brought into propinquity with leaf tissues that carry out photosynthesis. The mesophyll is the mid-section of a leaf, located between the upper and lower epidermal layers. Not only is vasculature found in the mesophyll, but also the ground tissue of a leaf. Ground tissue comprises the bulk of a plant leaf and is generally comprised of a variety of cell types, the predominant of which are parenchyma. Often less specialized than other plant cell types, parenchyma cells are surrounded by thin, flexible primary walls and execute most of the plant�s metabolic activities. The parenchyma cells present in leaves contain chloroplasts, which are the sites of photosynthesis. In Figure 1, the mesophyll is divided into two conspicuously different regions, a characteristic common among the leaves of many dicotyledons. 
The upper section is termed the palisade parenchyma and consists chiefly of elongated columnar parenchyma cells that contain three to five times the number of chloroplasts as the cells that comprise the lower layer, known as the spongy parenchyma. The cells of the spongy parenchyma are irregularly shaped, allowing gases to circulate through the numerous air spaces between them to the palisade parenchyma. The stomata, which are particularly important for gas exchange, tend to be surrounded by exceptionally large air spaces. A small group of collenchyma cells are also illustrated in the mesophyll of the leaf section presented in Figure 1. As depicted, collenchyma cells occur in aggregates just beneath the epidermis and possess thicker primary cell walls than parenchyma cells. The thickness of the walls, however, does exhibit notable variation. The main function of collenchyma cells is to provide additional support to the plant, especially in areas of continued growth.
i don't know
Which microscopic organisms form the basis of marine and freshwater food chains?
Food Chains and Webs | Teaching Great Lakes Science Teaching Great Lakes Science Search Food Chains and Webs All living organisms depend on one another for food. By reviewing the relationships of organisms that feed on one another, this lesson explores how all organisms— including humans—are linked. If students understand the relationships in a simple food chain, they will better understand the importance and sensitivity of these connections, and why changes to one part of the food chain almost always impact another. Grade level: 4-8th grades Performance Expectations: MS-LS2-1 Ecosystems: Interactions, Energy and Dynamics. Analyze and interpret data to provide evidence for the effects of resource availability on organisms and populations of organisms in an ecosystem. MS-LS2-2 Ecosystems: Interactions, Energy and Dynamics. Construct an explanation about how the different parts of the food chain are dependent on each other. MS-LS2-3 Ecosystems: Interactions, Energy and Dynamics. Develop a model to describe the cycling of matter and flow of energy among living parts of the food chain. MS-LS2-4 Ecosystems: Interactions, Energy and Dynamics. Construct an argument, supported by evidence gathered through observation and experience, showing how changes to physical or biological components of an ecosystem affect populations. MS-LS2-5 Ecosystems: Interactions, Energy and Dynamics. Evaluate competing design solutions for maintaining biodiversity and ecosystem services. MS-ESS3-3 Earth and Human Activity. Answer questions about how pollution affects food chains by applying scientific principles to design a monitoring plan for minimizing the human impact on the environment. For alignment, see: NGSS Summary Lesson Objectives Describe the difference between herbivores, carnivores and producers. Answer questions about the interdependence of herbivores, carnivores and producers as members of a food chain. Answer questions about how pollution affects food chains.   Background A food chain is a simplified way to show the relationship of organisms that feed on each other. It’s helpful to classify animals in a simple food chain by what they eat, or where they get their energy. Green plants, called producers, form the basis of the aquatic food chain. They get their energy from the sun and make their own food through photosynthesis. In the Great Lakes, producers can be microscopic phytoplankton (plant plankton), algae, aquatic plants like Elodea, or plants like cattails that emerge from the water’s surface. Herbivores, such as ducks, small fish and many species of zooplankton (animal plankton) eat plants. Carnivores (meat eaters) eat other animals and can be small (e.g., frog) or large (e.g., lake trout). Omnivores are animals (including humans) that eat both plants and animals. Each is an important part of the food chain. In reality, food chains overlap at many points — because animals often feed on multiple species — forming complex food webs. Food web diagrams depict all feeding interactions among species in real communities. These complex diagrams often appear as intricate spider webs connecting the species. This lesson demonstrates that changes in one part of a food chain or web may affect other parts, resulting in impacts on carnivores, herbivores, and eventually on producers. An example of this might be the harmful effects of pollution. The point that should be made is that when something disrupts a food web, humans should try to understand and minimize the disturbance. 
Students should also come to recognize that humans, too, are part of this complex web of life.   Food Chains and Food Webs – Parts and Pieces Food Chains Producers Plants form the base of Great Lakes food chains. They’re called producers, because they make their own food by converting sunlight through photosynthesis. They also act as food, providing energy for other organisms. In the Great Lakes, most producers are phytoplankton, or microscopic floating plants. An example of phytoplankton is green algae. Large rooted plants, another type of producer, provide food and shelter for different organisms, fish and wildlife. Primary Consumers The next level in the food chain is made up of primary consumers, or organisms that eat food produced by other organisms. Examples of primary consumers include zooplankton, ducks, tadpoles, mayfly nymphs and small crustaceans. Secondary Consumers Secondary consumers make up the third level of the food chain. Secondary consumers feed on smaller, plant-eating animals (primary consumers). Examples of secondary consumers include bluegill, small fish, crayfish and frogs. Top Predators Top predators are at the top of the food chain. Top predators eat plants, primary consumers and/or secondary consumers. They can be carnivores or omnivores. Top predators typically sit atop the food chain without predators of their own. Examples include fish such as lake trout, walleye, pike and bass, birds such as herons, gulls and red tailed hawks, bears—and humans! Food Webs In reality, many different food chains interact to form complex food webs. This complexity may help to ensure a species’ survival in nature. If one organism in a chain becomes scarce, another may be able to assume its role. In general, the diversity of organisms that do similar things provides a type of safety, and may allow an ecological community to continue to function in a similar way, even when one species becomes scarce. However, some changes in one part of the food web may have effects at various trophic levels, or any of the feeding levels that energy passes through as it continues through the ecosystem. At the base of the aquatic food web are: Plankton Plankton are microscopic plants and animals whose movements are largely dependent upon currents. Plankton are the foundation of the aquatic food web. Plankton are vital in the food supplies of fish, aquatic birds, reptiles, amphibians and mammals. Phytoplankton Plant plankton are called phytoplankton and may be single cells or colonies. Several environmental factors influence the growth of phytoplankton: temperature, sunlight, the availability of organic or inorganic nutrients, and predation by herbivores (plant eaters). Zooplankton Animal plankton are called zooplankton. Zooplankton can move on their own, but their movement is overpowered by currents. Zooplankton may be herbivores or plant-eaters (eat phytoplankton), carnivores or meat eaters (eat other zooplankton) or omnivores, which eat both plants and animals (eat phytoplankton and zooplankton).  
Plankton
Which bird feeds with its head upside-down and its beak held horizontally beneath the water?
Aquatic food webs | National Oceanic and Atmospheric Administration Read more Zooplankton — animal planktonic forms — drift through the water grazing on the phytoplankton. These "grazers" include copepods and larval stages of fish and benthic, or bottom-dwelling, animals that make up the second trophic level. Copepods and other plankton, both animal and plant, nourish filter-feeding organisms that strain their food directly from the water such as bivalves, tube worms, and sponges. This third trophic level also includes other organisms which feed on plankton such as amphipods, larval forms of fish and crustaceans, jellies, and many types of small fish. Food for thought: Stream food web helps salmon growth The experiment revealed that adding adult salmon tissue increased the productivity of invertebrates, with positive effects that rippled throughout the food web to benefit juvenile coho salmon by increasing their growth and body size. Read more Schools of larger fish create the next trophic level. They feast on the smaller fish, wasting as much as they consume. The uneaten fish parts and waste sink to the bottom, where they may be eaten by bottom-dwelling carnivores or decomposed by bacteria and ultimately returned to nutrients usable by plants. At higher trophic levels, these large fish are food for even higher level predators called top predators. Top predators can be birds, reptiles, mammals, or even larger fish, and many are opportunistic feeders. This means that they may eat anywhere within the food chain and sometimes they even eat each other. "COPEPOD" database helps world's researchers study ocean plankton Much like a doctor will check a few key indicators (temperature, blood chemistry, etc.) to assess the overall health of a patient, ocean scientists are looking at plankton to assess the overall health of ocean ecosystems. Read more In reality, many different food chains interact to form complex food webs. This complexity may help to ensure survival in nature. If one organism in a chain becomes scarce, another may be able to assume its role. However, some changes in one part of the food web may have effects at various trophic levels, or any of the feeding levels that energy passes through as it continues through the ecosystem. Humans play an important role as one of the top predators in these food webs. It is our responsibility to ensure that our fisheries are sustainable and that we are not polluting the ocean with toxins that bio-accumulate in food chains. EDUCATION CONNECTION Education plays an important role in the health of our aquatic food webs. Whether students live inland or on the coasts, their actions affect the health of one of our major food sources. This collection contains a variety of multimedia, lesson plans, data, activities, and information to help students better understand the interconnectedness of food webs and the role of humans in that web.
i don't know
What kind of a tongue does the okapi have?
Okapi (Okapia johnstoni) - Animals - A-Z Animals. Kingdom: Animalia; Phylum: Chordata; Class: Mammalia; Order: Artiodactyla; Family: Giraffidae; Genus: Okapia; Scientific name: Okapia johnstoni; Common name: Okapi; Group: Mammal; Habitat: dense mountain rainforest; Colour: red, brown, black, white; Skin type: fur; Size: 1.5m - 2m (4.9ft - 6.5ft); Weight: 200kg - 300kg (440lbs - 660lbs); Top speed: 60kph (37mph); Diet: herbivore (leaves, shoots, fruit); Predators: leopard, serval, human; Lifestyle: diurnal; Lifespan: 20 - 30 years; Average litter size: 1; Conservation status: Near Threatened; Distinctive features: horizontal white stripes on rear and legs. Fun Fact: eats more than 100 different types of plant! Okapi Classification and Evolution: The Okapi is an elusive herbivore that is found in a small pocket of tropical mountain forest in central Africa. Despite its deer-like appearance, the Okapi is actually one of the last remaining relatives of the Giraffe, which is the tallest animal on Earth. Along with having a relatively long neck compared to its body size, the most striking feature of the Okapi is the horizontal stripes that are particularly visible on their behinds and give this animal an almost zebra-like appearance. The Okapi is very shy and secretive, so much so in fact that they were not recognised as a distinct species by western science until the early 20th century. Although they are seldom seen by people, the Okapi is not an endangered species as they are thought to be fairly common in their remote habitats. Okapi Anatomy and Appearance: Like its distant and much larger relative, the Okapi has a long neck which not only helps it to reach leaves that are higher up, but also provides the Okapi with a tool to both defend itself and its territory. The Okapi has a red-brown coloured coat of fur with horizontal, white striped markings that are found on their hind quarters and at the tops of their legs, and provide the Okapi with excellent camouflage in the dense jungle. They have white ankles with a dark spot above each hoof and very thick skin to help protect them from injury. The Okapi has a long head and dark muzzle with large set-back ears which enable the Okapi to detect approaching predators easily. The Okapi also has an impressively long tongue, which is not only black in colour but is also prehensile, meaning that it is able to grab hold of leaves from the branches above. Okapi Distribution and Habitat: The Okapi is found in the dense tropical rainforests of north-eastern Democratic Republic of Congo, generally at an altitude that can vary between 500 and 1,000 meters, although the majority of individuals are thought to inhabit areas at roughly 800 meters above sea level. They are incredibly shy and elusive animals and rely heavily on the very thick foliage around them to protect them from being spotted by predators.
The Okapi can also be found in areas where there is a slow-moving fresh water source, but the range of the Okapi is very much limited by natural barriers, with unsuitable habitats on all four sides trapping these animals into the 63,000 square kilometre Ituri Rainforest. Around a fifth of the rainforest is today made up of the Okapi Wildlife Reserve, which is a World Heritage Site. Although they are thought to be common in their native region, the Okapi has been severely threatened by habitat loss, particularly from deforestation. Okapi Behaviour and Lifestyle: The Okapi is a diurnal animal, meaning that they are most active during the day, when they spend the majority of their time roaming set paths through the forest in search of food. They are solitary animals with the exception of the time mothers spend with their calves, but are known to tolerate other individuals and may occasionally feed together in small groups for a short period of time. Okapi have overlapping home ranges, with males tending to occupy a larger territory than females, which is marked both with urine and by rubbing their necks on trees. Males also use their necks to fight with one another, both to settle disputes over territory and to compete to mate with a female during the breeding season. Okapis are also known to communicate with one another using quiet "chuff" sounds and rely heavily on their hearing in the surrounding forest, where they are not able to see very far at all. Okapi Reproduction and Life Cycles: After a gestation period that can last for up to 16 months, the female Okapi retreats into the dense vegetation where she gives birth to a single calf. Like many hoofed herbivores, the Okapi calf is usually able to stand within half an hour, when mother and baby then begin to look for a good nest spot. They remain in their nest deep in the undergrowth for the majority of the next two months, which not only helps the calf to develop more rapidly but also gives it vital protection from hungry predators. Although the female Okapi will protect and feed her vulnerable calf, the two are not thought to share the same close bond that occurs with numerous other hoofed mammals. Although they do begin to develop their white stripes at a fairly young age, the young Okapi do not reach their full adult size until they are roughly three years old. They are generally weaned at around 6 months old but may continue to suckle from their mother for more than a year. Okapi Diet and Prey: The Okapi is a herbivorous animal, meaning that it survives on a diet composed only of plant matter. They eat leaves, shoots and twigs that are drawn into their mouths using their long prehensile tongue, along with fruits, berries and other plant parts. The Okapi will even eat fungi on occasion and is known to eat more than 100 different types of plant, many of which are poisonous to other animals and Humans. Along with consuming a vast variety of plant material, the Okapi is also known to eat a reddish clay that provides essential salt and minerals to its plant-based diet. The Okapi spends a great deal of the daylight hours in search of food and walks quietly along well-trodden paths that it uses regularly to ensure an easier escape from predators. Okapi Predators and Threats: Due to the fact that the Okapi inhabits such a secluded region of mountain rainforest, it actually has surprisingly few common predators, particularly in comparison to similar species.
The main predator of the Okapi is the Leopard, which is one of the world's largest and most powerful felines and an animal that spends a lot of time resting in the trees. Unlike other predators, which the Okapi's acute hearing would sense moving through the undergrowth, the Leopard's position above ground means that it is able both to survey the surrounding area for potential prey and to ambush it from above. Other predators of the Okapi include the Serval and Human hunters in the area, but the biggest threat to the world's Okapi population is habitat loss due to deforestation. Okapi Interesting Facts and Features One of the most distinctive features of both the Okapi and the Giraffe is their long prehensile tongue, which can not only be used to grab onto leaves and branches but also assists the animal when grooming. The tongue of the Okapi is in fact so long that they are one of the few animals in the world that are said to be able to lick their own ears! Although they are quite rare and very secretive animals, there were occasional sightings of the Okapi in these forests, but these generally involved seeing the animal from behind, and so the Okapi was known by many as a Forest Zebra. The Okapi was not classified as a distinct species until 1900 - 1901, when Harry Johnston sent two pieces of Zebra-like skin to London, where they were analysed and a new species was recorded. Okapi Relationship with Humans Until the beginning of the last century, the Okapi itself was not known to western scientists, but the native people of the region were known to hunt this rare and elusive animal for both its meat and its thick hide. Today this secretive animal is still seldom seen in the high mountain rainforests of central Africa, both due to its shy nature and its excellent camouflage amongst the dense foliage, so much of what we know about the Okapi is from observations of individuals found in zoos and animal institutions around the world. Capturing Okapi for these institutions, however, was not really successful until the introduction of planes, as the trauma endured by the animals on trains and boats often meant that there was a high mortality rate among the individuals that were captured. Okapi Conservation Status and Life Today Although they are thought to be fairly common throughout their naturally isolated range, the Okapi has been listed by the IUCN as an animal that is Near Threatened from extinction in its natural environment. This is due to increasing deforestation in parts of their natural habitat, along with the fact that they are increasingly caught in snares and other traps that are set by locals to catch other animals. The Okapi has been protected by law in the Democratic Republic of Congo (formerly Zaire) since 1933, and the IUCN last estimated that there were between 10,000 and 35,000 individuals left in the wild.
Prehensility
What do baleen whales eat?
Okapi Fact Sheet
Order: Artiodactyla* (nearly 200 species of even-toed, hoofed mammals)
Suborder: Ruminantia (cud-chewing cattle, goats, sheep, bison, giraffes and more)
Family: Giraffidae (only two species - giraffes and okapis)
Genus: Giraffa - Species: Giraffa camelopardalis (giraffe)
Genus: Okapia - Species: Okapia johnstoni (okapi)
*New anatomical and DNA evidence on the relationship between Artiodactyla (even-toed ungulates) and Cetacea (whales and dolphins) recently led to a merging of the two orders into a new group, Cetartiodactyla (Montgelard, 1997; reviewed in Kulemzina, 2009). As of October 2012, experts had not agreed on whether to define Cetartiodactyla as an official taxonomic order that would replace Artiodactyla and Cetacea. Some continue to list okapi in the order Artiodactyla (Franklin, 2011) or use the term Cetartiodactyla without defining it as an order (IUCN, 2008). Other names: Atti (from the Wambuti pygmy tribe); Okapi is derived from the pygmy word O'Api which, when spoken by pygmies, sounds like okapi. Other Scientific Nomenclature: Okapia liebrecht originated when, in the late 1800s, Forsyth Major concluded that a specimen of skin and skulls belonged to a different species. Okapia erikssoni was named in 1903 by Lord Rothschild, who found the skin of a female okapi to be different. (Note: both observations were false.) Okapi were unknown to the western world (they occupy dense African rainforest habitats) until discovered by Sir Harry Johnston in 1901. The species name is in Johnston's honor. Taxonomy and Phylogeny. Closest living okapi relative is the giraffe. Some researchers dissent, pointing out that important differences in reproductive organs, fetuses, bile acid salts and skeletal anatomy make the okapi more likely not to belong in the giraffe family at all, but to be a closer relative of the nilgai antelope in the bovid (cattle) family. (Benirschke & Hagey 2006) (Spinage 1968) Colbert (1938) made a detailed skeletal analysis of okapi and concluded that while they differed in many respects from giraffes, they showed many primitive features of fossils of early giraffe relatives. Giraffe family (giraffes and okapi) dates to about 15-12 million years ago (Miocene) (Dagg & Foster 1982). Some two million years ago (Pleistocene) a now-extinct okapi species (Okapia sp.) lived in East Africa in present-day Tanzania. At the same time in the same place, now extinct relatives of the giraffes existed. DISTRIBUTION & HABITAT (Hart & Hart 1988) (IUCN Redlist 2008) (Bodmer & Gubista 1988) Endemic to forests of Democratic Republic of Congo, occurring between about 500 m and 1,500 m elevation on both sides of the Congo River. Okapi populations in the Ituri / Aruwimi and adjacent Nepoko basin forests, and the forests of the upper Lindi, Maiko and Tshopo Basins; also well known in the Rubi-Tele region in Bas Uele. (IUCN Redlist 2008) Limited to closed, high canopy forests, occurring in a wide range of primary and older secondary forest types. Okapi don't range into gallery forests or into forest islands on the savanna and they don't stay in the disturbed habitats surrounding human settlements. Will occupy seasonally flooded areas when the ground is still wet, but they do not occur in truly wet sites or extensive swamp forest. Tree fall gaps are selected foraging sites for okapi during the early stages of regeneration (Hart & Hart 1989). Remain solitary much of the time.
10% of total time is spent with other animals. Social grooming common in captives. 2 adults, 1 juvenile, and 1 young may inhabit the same home range. Groups of more than 3 have never been recorded other than in captivity. Calves remain within mother's home range during first 2-6 months after birth. Okapi generally avoid individuals in adjacent home ranges. Males and females spend very little time together. Territorial Behavior: Captive males shown to mark objects (bushes, trees) with urine, while crossing legs in a dance-like movement. Marking occurs most often during courtship. Females mark using common defecation sites. Mark territory by rubbing necks on trees. Home ranges: greatest 24-hr movement is 2.5 km; the most stable ranges are shared mainly with other female(s) and young. Males have the largest home ranges, > 10 km, with 24-hr movement up to 4 km. Subadults range over 2-3 km, with more restricted movement. Generally tranquil and non-aggressive. Males competing for females engage in ritualized neck fighting, head butting, and charging. (Prothero 2002) Aggressive behaviors include kicking, head-throwing, and slaps using the side or top of the head as a blow to flank or rump. Kicking is often symbolic, without contact. Dominant animals have an erect head and neck posture, while subordinates may have head and neck on the ground. Locomotion (Lindsey et al 1999) (Dagg 1960): Pacing gait at about 16 kilometers/hour (10 mph) - foreleg and hindleg move forward together, followed by the legs on the other side. Gallop gait attains speeds of about 56.3 kilometers/hour (35 mph) with the same left side/right side pattern. Like the giraffe, must splay the legs to reach the ground when drinking. Play: Includes gambols and capers, the pooky (head low and forward, rapid tail wags) and lie and rise (lie on ground, may roll on side, then stand up) (Bodmer & Rabb 1985). Both sexes and all age classes engage in play behavior. Infants play more frequently than adults. Communication - Displays: Dominance displays for both okapi and giraffe involve nose pointing away from the body's midline, which increases the visual impact of neck length. (Simmons and Scheepers 1996) Vocalization (Bodmer & Rabb 1992): Vocal communication more common than in giraffes. Consists of three types of vocal signals - the chuff, moan, and bleat. Chuffs are contact calls for all ages and both sexes. Infants use the bleat vocalization for a response from the mother. Bleats emitted only by young animals < 7 months in stressful situations. Soft moaning sound by males during courtship. Whistles and bellows in acute distress situations. Vocalizations have infrasonic frequency components. Olfactory signals: Secretions from glands on the feet leave scent on low-lying herbage. Territory marked by urine or dung. Prior to mating, males and females sample urine to test for hormones (the scientific term is flehmen). Interspecies Interaction (Spinage 1968) (Bodmer & Rabb 1992): Leopards represent a significant cause of death for adult okapi. Serval cats and golden cats prey on young okapi. African rainforest natives use okapi skins for decorative belts. DIET & FEEDING (Bodmer & Gubista 1988) (Bodmer & Rabb 1992) (Crissey et al 2001) (Hart 1992) (Hart & Hart 1988) Highly selective feeder on leaves, fruits, seeds, ferns, fungi of some 100 plant species.
Prefer to browse in small forest openings where fallen trees allow growth of light-dependent plants; prefer fast-growing tree seedlings, shrubs and vines. Okapi food plants are only a temporary resource, scattered widely across the forests; most plants are not acceptable forage. Do not choose shade-tolerant shrubs and select only a small proportion of all the plants available. Like giraffes, okapis use the long, prehensile tongue to pull leaves off branches; a slender muzzle and flexible lips also help with choosing the "right" plants. Also ingest clay for its minerals, burnt charcoal, and bat guano found in hollow trees. Digestive system similar to other browsing ruminants. As in giraffes, gall bladder not present. Daily food intake (dry matter) of captive okapi ranges from 4.3-5.0 kg. REPRODUCTION & DEVELOPMENT (Gijzen & Smet 1974) (ISIS Web Site) Captive Breeding: Okapis breed readily in captivity, but rearing of calves has been problematic. Until the 1950s, roughly 50% died during the 1st month. Antwerp Zoo was the 1st to exhibit the okapi to visitors (1919); the first okapi to survive in captivity was also at Antwerp Zoo, in 1928; she lived 15 more years. A male okapi named Congo came to the Bronx Zoo in 1937 and lived for 15 years. Common in many large zoos; currently being bred to maintain numbers; difficult to keep because of delicate health. Captive okapi population in North American zoos has 25 founders; in Europe there are 23 founders. Baruiti was born at the San Diego Zoological Park in 1962. POPULATION AND CONSERVATION STATUS (Gijzen & Smet 1974) (Hart and Mwinyihali 2001) (IUCN Redlist 2008) Okapi has become the flagship species for the conservation of the Ituri ecosystem in the Congo Basin. In Uganda, Okapi formerly occurred in the Semliki Forest, but is not known to survive there (IUCN Redlist 2008). 1925: Virunga National Park established; Africa's first national park. A 2006 survey by local trackers and by World Wildlife Fund and its Congolese governmental partner ICCN (the Congo Institute for Nature Conservation), and Gilman Conservation International, found okapi signs in Virunga National Park in Eastern Congo after no sightings there since the 1950s. In 2008 camera trap images of okapi were obtained for the first time; cameras set up in Virunga by the Zoological Society of London and the Congolese Institute for Nature Conservation (ICCN). 1933: Okapi protection begins officially in Congo/Zaire. 1952: A captive breeding centre for okapi was first established at Epulu in the Ituri Forest, Zaire (now Democratic Republic of Congo or DRC). 1970: Maiko National Park established in DRC; it is not a World Heritage Site, but may have the most biodiversity of all the Congo's parks. 1987: Okapi Conservation Project begun by Gilman International Conservation to help protect native habitat in the Ituri Forest of DRC. 1992: The Okapi Wildlife Reserve established. Occupies 13,700 square kilometers (5,290 square miles) in DRC. Is a Pleistocene refuge of exceptional species richness with a greater variety of mammals than any park in Africa: 15% of species are endemic, which is one of the highest rates in the world. Until recently preserved only by its inaccessibility. It has the highest density of okapis known anywhere, at approximately 2.5 animals per square kilometer (2.5 per square mile). 1996: Okapi Wildlife Reserve designated as a United Nations World Heritage Site. Within the reserve, some 5,000 okapi are protected. This reserve encompasses the cultural center for two tribes of forest pygmy people - the Mbuti and Efe; okapi are not a significant part of their traditional diet. Strengthening protection of this reserve and Maiko National Park is the single most important means to ensure long-term survival of Okapi (IUCN 2008). 1998: Okapi Wildlife Reserve placed on the list of World Heritage in Danger because of devastation by civil war, invasion by miners and militants, and destruction of wildlife by hunting for bushmeat and ivory. 2008: Report for the World Heritage in Danger List: populations of the endemic okapi in the Okapi Wildlife Reserve have decreased by 43%, with a loss of an estimated 2,000 animals (http://cmsdata.iucn.org/downloads/drc___okapi___dec_32_com_7a.pdf). 2009: Population estimates are quite imprecise but may be between 10,000 and 35,000 individuals. IUCN status: (2009) Near Threatened (version 3.1); population trend stable. Threats to Survival - Primary threats: Armed conflict/war/civil unrest/displaced human populations (Hart and Mwinyihali 2001). Other Web Resources: Okapi Conservation Project - managed by the Institute in Congo for the Conservation of Nature (ICCN), oversees the Okapi Wildlife Reserve in the Democratic Republic of the Congo. Okapi Faunal Reserve (Encyclopedia of Earth) - Ituri forest ecology, excellent bibliography. Okapi Wildlife Reserve (UNESCO - World Heritage List) - description of the reserve, documentation of goals, decisions, concerns, achievements to protect okapi and their habitat.
i don't know
Which South American vulture can have a wing span of up to 3 meters and a body weight of up to 13 kilos?
Vulture | San Diego Zoo Animals & Plants FAMILIES: Cathartidae (New World) and Accipitridae (Old World) GENERA: 14 SPECIES: 23 ABOUT What makes a vulture? They may not be the prettiest birds of prey, but the world would be a smellier place without vultures! All vultures have a wide wingspan, which allows them to soar for long periods of time without flapping so much as a feather while looking for carrion to eat. They all have a sharp, hooked beak for ripping apart meat. Vultures are large compared to other birds. Their bald head and neck serve a useful purpose, allowing vultures to steer clear of infection and tangled feathers when eating decaying meat. A strong immune system allows vultures to eat rotting and possibly infected meat without getting sick. These unusual birds are divided into two groups: New World vultures, which are from North, Central, and South America; and Old World vultures, which live in Africa, Asia, and Europe. New World vultures have a distinctive bald head, an adaptation that helps reduce the risks of disease, because bacteria could become lodged in feathers, while the bald head and neck may be disinfected by the sun's rays. New World vultures have nostrils that are long and horizontal, with a space between them. They do not have a voice box, so they cannot make any sound except hisses and grunts. New World vultures don't build nests; instead, they lay their eggs in holes on high rocky surfaces or in tree cavities. Some examples of New World vultures are turkey vultures, black vultures, king vultures, California condors, and Andean condors. Old World vultures look like their eagle and hawk relatives. They have large, grasping talons, a voice box to vocalize with, and build nests made of sticks on rocky platforms or in trees. Old World vultures have also been around longer than the New World vultures. They have stronger feet than the New World vultures (whose feet are not designed for grasping), large, broad wings that allow them to stay aloft for most of the day, and a large, powerful beak with a hooked tip. Some other examples of Old World vultures are Himalayan, Egyptian, hooded, Indian black, and palm-nut vultures, and Egyptian or Eurasian griffons. Although New World vultures are unable to make more than hissing and grunting sounds, Old World vultures can be quite vocal when feeding at a carcass, making lots of grunts, croaks, screeches, and chatter. White-backed vultures croak plaintively or squeal like pigs during a meal. Bearded vultures scream while rolling and twisting in flight during courtship. Not many animals threaten vultures. Covered as they are with bacteria, they would make most predators sick if eaten. Other scavengers may threaten the vulture, mainly to get better access to a shared carcass. Vultures tend to gorge themselves, often to the point of being unable to fly. If they feel bothered as they stand about digesting their food, they simply regurgitate to lighten the load and fly off. Many people look at the vulture as a sign of death, but some cultures admire the birds. Ancient Egyptians connected the Egyptian griffon to their goddess Nekhbet, guardian of mothers and children. Griffon images are found in early Egyptian paintings and drawings and even had a place on the crown of the pharaoh, alongside the cobra. In Native American culture, California condors are important in mythology and burial rituals.
Vultures are also important in India, as they help remove dead animals without spreading disease. In other parts of Asia, religious and cultural traditions call for the carcasses of domestic animals to be left out for “disposal” by vultures. In some regions, even human remains are left out for the vultures prior to burial.  Vultures may not have the cleanest job, but you will never hear them complaining! HABITAT AND DIET Home is where the food is! Vultures are pretty flexible when it comes to their habitat, as long as there is food, although you won’t find them in Australia, polar regions, or most small islands. They are pretty adaptable in different environments. Vultures are scavengers. They usually eat carrion, although sometimes they attack newborn or wounded animals. It seems that once food is located, the information is relayed rapidly to other vultures in the vicinity. Once one vulture lands, many more land and join in the feast. Because vultures are generally quiet, it is likely that the information is passed visually by behavioral cues for one bird to another until a regular orgy of eating begins. Vultures tend to look at any meal as a Thanksgiving meal, so they eat as much as they can, because they never know when they will find their next meal—and it could be as long as two weeks before they do! Like many other birds, vultures have a throat pouch called a crop that can store food for eating at a later time or can be regurgitated to feed to their young. Different vulture species have their own flair for getting their food, and their ability to use “tools” to get that food is unique among birds. The Egyptian vulture breaks open ostrich eggs by dropping stones on them. Lammergeier or bearded vultures carry bones into the air and drop them onto favored rocky areas to break them open; the birds then fly down to eat the nutritional marrow inside, using their specialized tongue. In Africa, many different vulture types gather at a carcass to eat, but there is a pecking order: larger vultures get to eat first while the smaller ones wait their turn.  Turkey vultures are found all over North and South America, found in open country, woodlands, farms, and in our backyard at the San Diego Zoo Safari Park! They are about 27 inches (70 centimeters) tall and have a wingspan of about 5.6 feet (1.7 meters). These vultures have a big, dark-brown body, a red, bald head, and pink legs and feet. They hunt for their food by smell, feast in groups, and prefer only the tender meat. Turkey vultures are a very important part of North America’s cleanup crew. At the San Diego Zoo and San Diego Zoo Safari Park, the vultures are fed rabbits, rats, cow spleen, oxtails, and a fortified meat-based commercial carnivore diet. FAMILY LIFE By day, vultures spend their time in the sky searching for carrion. Unlike other raptors that must be able to maneuver quickly to swoop down on prey, vultures only have to stay in flight for hours at a time, high enough to have a good field of vision to spot food. They’re specially built for this task with a huge expanse of wing feathers and a short tail. They rarely flap their wings, preferring to soar gracefully on air currents. If they hit a warm pocket of air in the sky, called a wind thermal, some vultures have the ability to soar for hours without once flapping their wings!  At night, they settle down to roost. White-backed vultures roost in a tree with a group of 10 or 12 others. Bearded or lammergeier vultures live in groups of up to 25 birds. 
Ruppell’s griffons are social creatures that live in colonies that may have as many as 1,000 pairs; the colony is noisy in the evening with its members’ loud, harsh calls. Turkey vultures are only social when roosting; their roost may be a single, large tree used year after year. Vultures try to attract a mate by soaring in the sky around each other. The male shows off his flying skills by almost touching the female’s wing tip as he flies by to impress her. Normally, vultures are social birds that hunt in flocks, but when it comes to starting a family, they pair for life. Old World vultures make a nest in an overhung ledge, rocky crag, or tree, constructed with sticks and perhaps lined with grass. New World vultures don’t bother with nest making at all and merely lay their egg(s) in a cave or under a rocky overhang, on the ground under bushes or rock piles, or in tree hollows or fallen logs. The mother vulture lays one egg, typically, if she is one of the larger species, two eggs if she is one of the smaller vulture species. Both parents work as a team to incubate and feed the chicks. Of course, chicks don’t drink milk; instead, the parent regurgitates meat into the chick’s mouth to feed it.  When the chick is young the parents pick up small pieces of food and feed them to the chick, but as it gets older, it picks up these regurgitated food items on its own. Chicks remain in the nest for two to three months and continue to depend on their parents for a period of time after fledging, until they learn how to find and compete for food on their own. By the time they are ready to fledge, at three to six months old, the chicks are nearly the same size as their parents and are fully feathered, but their coloration is different. AT THE ZOO Vultures of all types have been in San Diego Zoo Global’s collection since our earliest days, starting with turkey and king vultures in the early 1920s and black vultures in 1926. Two Egyptian vultures arrived at the Zoo in 1947. In the 1930s, a Southern California resident found a vulture nest and took its two chicks, which he believed to be California condors. The youngsters were confiscated by authorities and brought to our Zoo when they were a bit older, where it was determined that they were turkey vultures. We released them back into the wild when we felt they were old enough to venture out on their own. However, a few days later, we noticed that there were a lot of vultures hanging out in our trees, and one of them kept trying to get in our vulture aviary—it was one of the youngsters, and it had alerted its friends that there was free food to be had!  In 1982, as we geared up to help save the highly endangered California condor, we honed our husbandry techniques on Andean condors as well as king, turkey, and black vultures in the San Diego Zoo Safari Park’s newly built “Condorminium” complex. One local wild turkey vulture squeezed under and around fencing and gates to be inside with the Andean condors! He was soon placed in a room of his own that featured a specially prepared diet brought to him, the company of kin, and the opportunity, whenever he wanted, to exercise his wings. Today, both the San Diego Zoo and San Diego Zoo Safari Park have Andean and California condors. In addition, the Safari Park is home to several Old World vulture species: Ruppell’s and lappet-faced vultures can be seen from the Africa Tram tour; hooded and Egyptian vultures are on exhibit in Africa Woods. 
The Park has been successful in breeding many vulture species over the years: king, Ruppell's, and hooded vultures, and Andean and California condors. Our Condor Cam gives viewers a look at a California condor family at the Park. Despite the vulture's seemingly messy eating habits, it is a clean bird and bathes frequently. You may see our vultures bathing in their exhibit pool. Some of them even like getting doused with the hose by their keepers! You may also see wild turkey vultures that hang around the Safari Park. They have excellent eyesight and can easily spot a keeper walking to an exhibit with a "free lunch." CONSERVATION Things we humans put in our environment seem to be causing rapid declines in vulture populations. For example, in India and other parts of South Asia in the 1990s, huge numbers of vultures died. In 2000, San Diego Zoo Global joined the investigation to determine what was killing the birds. The outcome surprised everyone: an anti-inflammatory drug used by veterinarians and ranchers to help livestock. The vultures were eating livestock that had been treated with the drug, became sick, and died. Efforts were made to restrict the use of the drug in livestock. Poachers in Africa and Asia are known to poison the carcasses of elephants they have killed, after they take the tusks, to kill any vultures that come to clean up the mess, believing the gathering birds might reveal the scene of the crime. Ranchers frustrated by large cat attacks on their livestock poison cattle carcasses to kill the carnivores, but the poison also kills the vultures that come to pick the carcass clean. Vultures also die from eating meat from animals killed by hunters using lead bullets; the birds slowly succumb to lead poisoning. A study done in southern Africa revealed that about 25 percent of the white-backed vultures Gyps africanus tested had high levels of lead in their blood. The California condor Gymnogyps californianus, white-rumped vulture Gyps bengalensis, Indian vulture Gyps indicus, slender-billed vulture Gyps tenuirostris, and red-headed vulture Sarcogyps calvus are at critical risk. To conserve these graceful scavengers, breeding programs, education, and awareness programs have been started for endangered vultures by organizations like The Peregrine Fund and Vulture Rescue. San Diego Zoo Global is significantly involved in the California Condor Recovery Program. There are people fighting for these birds, and so can you! Place trash in the right bin, don't use dangerous chemicals, dispose of harmful substances responsibly, and recycle. If you are a hunter, please use non-lead bullets. These are all ways that you can help wildlife, including those misunderstood vultures. You can also help us bring vulture species back from the brink by supporting the San Diego Zoo Global Wildlife Conservancy. Together we can save and protect wildlife around the globe.
Condor
Which part of a beetle's body is a skeleton?
Fast Facts of the most deadly, highest, slowest, tallest, rarest Largest Largest Sea Cucumber Members of the genus Stichopus have been measured up to 40 inches long and 8 inches in diameter. Largest Living Organism Humongous Fungus . Estimated to cover over 2,200 acres and at least 2400 years old. (Oregon) Thanks to Michael Rogers for this fast fact.  Largest Living Organism (second) Grove of Aspen Trees in Utah. Named "Pando". 660 Tons, 200 acres Largest Insect Egg Malaysian stick insect (Heteropteryx dilitata) at .05 inches.  Largest Invertebrate Architeuthis dux (Giant Squid) Up to 59 foot and over 1 ton (so far) Largest Gastropod Syrinx aruanus. Australia. In 1979, a 40 pound animal was found with a shell that measured 30.4 inches in length and 39.75 inches in girth. Largest Clam Giant Clam is 4ft. 6in. across  Largest Sea Urchin Sperosoma giganteum Test diameter of 38 cm (13 inches) Largest Wave (Ocean) 524m high hit Lituya Bay on July 9, 1958 caused by a 8.3 Earthquake  Largest Octopus Pacific Giant Octopus with an average arm span of 8ft. 2in. Largest Carnivore African Bush Elephant reaches 13 feet high and weights 8 tons. Largest Frog African Goliath Frog is about 11 inches and about the size of a rabbit. Largest Amphibian  Chinese Giant Salamander grows about 6ft.in length weighs 132  Largest Bear Kodiak Bear and weighed 1600lbs.  Largest Snake Reticulated Python  averages between 26-32 feet long  Largest Bug Stick Insect averages 15 inches long  Largest Reptile Saltwater Crocodile averages 16 feet long and 1,150 pounds Largest Lizard  Komodo Dragon  at 9-10 feet long and up to 300 pounds Largest Dolphin Killer whale (Orca) at 32 feet long and  21,000 pounds Largest Shark Great White Shark  reaches up to 21 feet and over 2 tons Largest Shark on record Whale Shark. 59 feet long, captured in Thailand in 1919. Largest Invertebrate Giant Squid reaches up to at least 60 feet long Largest Rodent (ever) Phoberomys Pattersoni. Now extinct weighed over 1500 lb !!!.  Largest Rodent (current) Capybaras  in South America weighs about 100 lbs.   Largest Sea Star Evasterias echinosomo 96 cm (37.79 inches) in diameter, weight 5 kg (11 pounds), collected in the North Pacific Largest Turtle  Trumpeter Swan. Wingspan up to 9 feet.  Largest Seaweed Macrocystis pyrifera, a brown algae called the giant kelp. The longest recorded length is 54 metres long! M. pyrifera is the type of kelp that makes up the majority of the giant kelp forests off the California coast. Largest Sponge Xestospongia muta, the barrel sponge, found in tropical coastal waters. Some individuals in the Caribbean measure 6-8 feet tall, and 6-8 feet across. Largest Jellyfish Lion's Mane Jellyfish. Their Bells can reach 8 feet across and their tentacles can be more than 200 feet long Largest Mouth on Land Hippo. They can open their mouth 180 degrees. Largest Mouth anywhere Bowhead Whale Average up to 60 feet long and 20 foot mouth. Largest Bird Egg Flemish Giant. 11-14 Lbs. Largest Gastropod Tridacna derasa, found on coral reefs in the South Pacific. One was collected on the Great Barrier Reef in 1917 that measured 49 inches by 29 inches, and weighed 579.5 pounds. The shell of a close relative, Tridacna gigas, was found off Ishigaki Island, Okinawa, Japan in 1956 measuring 45.25 inches in length and weighing 734 pounds Largest Flower Amorphophallus titanum height of 6 feet or more and opens to a diameter of three to four feet.  Largest Tree (USA) Giant Sequoia in California's Sequoia National Park, 2000 to 2100 years old. 
Measures 275 feet tall and 30 feet across. Largest Tree (world) Australian Eucalyptus at 435 ft tall Largest Little Cat  Goliath Bird eater. As large as 11 inches in leg span Largest Moth Atlas Moth. Wingspan of 12 inches. Largest Invertebrate The Atlantic Giant Squid. Can weigh as much as 2 metric tons. Largest Crustacean Giant spider crab Macrocheira kaempferi Individuals can measure 12-14 inches across the body, with a claw span of 8-9 feet.  Largest Bat Species Giant Fox bat is the largest bat in the world with its wingspan reaching more than six feet. Largest Bat Colony Bracken Cave in central Texas holds about 20 million individuals. Largest Appetite Blue Whale.  A blue whale eats up to 4 tons of krill everyday. Largest Community Nest (bird) African Social Weavers. 100 chamber nest. Measured 27 ft long x 6 ft high Largest Tree Nest (bird) Bald Eagle (Florida). 20 ft deep & 9.5 ft wide. Weighing almost 3 tons. Largest Bird Nest (Ground)  Dusky Scrubfowl. 36 ft wide and 16 ft high. Over 300 tons of forest floor litter Largest Vine Great Vine at Hampton Court Palace, Greater London at 7 ft 1 in high and 114 ft long. Largest Weed Giant Hogweed. 12 feet tall with leaves as much as 36 inches long. Largest Tree Girth (ever) European Chestnut at 190 foot.  Largest Crocodilian Estuarine or Saltwater Crocodile Crocodylus porosus at 23 ft long and larger. Largest Eye Atlantic Giant Squid at est. diameter of 20 inches. Largest Insect Egg Malaysian Stick Insect at 0.5 which is about the size of a peanut. Largest Dinosaur Egg Hypselosaurus priscus had eggs about 12 inches. Largest Claws (Ever) Therizinosaurus cheloniformis at 36 inches. Largest Prehistoric Fish Carcharodon megalodon is estimated at 45 foot Largest Trilobite Fossil trilobite 445 million-year-old specimen measures over 27.5 in in length. Largest Marine Reserve Heard Island and McDonald Islands Marine Reserve, Australia.  25,096 square mile. Largest Freshwater Fish Arapaima gigas (Catfish)  in the Amazon River. Up to 15 feet and 440 lbs Largest Carnivorous Plant Nepenthes, Vines grow as long as 10 meters.  Largest seaweed (world) Brown Algae called kelp. Macrocystis pyrifera called giant kelp. up to 54 meters long Largest Coral Reef composed of roughly 3,000 individual reefs and 900 islands, 161 miles and can seen from outer space.   Shire called Samson. He stood 21.2 and a half hands ( 7 ft 2 inches) Tallest Dog Irish Wolfhound can reach an average of 7 feet when standing. Tallest & Biggest Bird Ostrich  averages 9 feet tall and weighs 345 pounds Largest Cow Breed Chianina. Bulls can reach 6 feet high. Tallest Land Animal Giraffe averages 19 feet tall Tallest Tree Nest (bird) Marbled Murrelet at up to 148 foot. Tallest Mammal Giraffe (Giraffa camelopardalis). 19 feet tall but average 18 feet. Tallest Dinosaur Sauroposeidon. About 60 Feet Tall. Tallest & Biggest Bird Ostrich  averages 9 feet tall and weighs 345 pounds Tallest Tree Nest (bird) Marbled Murrelet at up to 148 foot. Tallest Jumper (land animal) Pumas.  15 feet high. Or 5 times it's own height Tallest Jumper (body size considered) Flea. They can jump 100 times their own height Tallest Volcano (world) Hawaii's Mauna Loa,  volcano is the world's largest mountain in cubic content with an estimated volume of 9,600 cubic miles and is 13,681 feet high. 
Tallest Mountain (world) Mt Everest on the border of Nepal, Tibet, and China is the world's tallest mountain and highest elevation with a peak at 29,035 feet   Rhabdomolgus ruber, North Sea 10 mm (0.39 inches) in length Smallest Wild Cat Breed Rusty-Spotted cat from Sri Lanka.  Smallest Rabbit (domestic) Netherland Dwarf at only 1 1/2 to 3 lbs.  Smallest Rodent Pygmy Jerboa at only a couple inches.  Smallest Bat  Kitti�s hog-nosed bat Craseonycteridae thonglongyai  has a wingspan of less than 6 inches. Smallest Incubation Period (bird) Small Passerines. Only 11 Days  Smallest Crab Pea crabs in the family Pinnotheridae are about .25 inches across the shell Shortest Marine Fish Schindleria praematurus, 12-19 mm in length, weight 2 mg Smallest Turtle Speckled Cape Tortoise. Shell length of between six and 9.6. Smallest Breed Horse Falebella of Argentina. The tallest one stands only 30inches at the shoulder. Smallest Pony "Little Pumpkin." He stood 14 inches and weighed only 20 lbs! Smallest Cow Breed Tanzanian parasitic Wasp it has a wingspan of 0.2mm. Smallest Sea Urchin Echinocyamus scaber Test diameter of 5.5 mm (0.21 inches) Smallest Pinniped Baikal Seal. Adults are 4 feet 6 inches and 140 pounds Smallest Fish Dwarfgoby.  Mature females reach only 8-10mm Smallest Crab Pea crabs in the family Pinnotheridae are about .25 inches across the shell Smallest Egg Clutch Albatrosses. They only lay an egg every 2 years. Smallest Egg West Indian Vervain Humming Bird. Only 0.39 in in length and 0.0132 oz Smallest Reptile British Virgin Islands gecko is only 7/10 in long Smallest Nest (bird) Cuban Bee and Vervain Hummingbirds. 0.78 in wide and  1.2 in deep. Smallest Brained Dinosaurs Stegosaurus at about 2.5 oz   Sooty Terns. They can go 3-10 years without landing.  Longest Fish Whale Shark  averages 41 1/2  feet long. Longest Worm Boot Lace Worm. More than 180 feet long. (1864)  Longest Pregnancy 38 Months. Alpine Salamander of Southern Europe. Longest Incubation (egg) Australian Lyrebird takes 50 days. Longest Flying (land bird) Common Swift. Can go up to 3 years without landing. Longest Time Underwater (bird) Emperor Penguin can stay underwater for 18 minutes. Longest wing span  Wondering Albatross . Measuring 10ft. 6in. from wingtip to wingtip   Longest Hibernation   Marmots can Hibernate for 9 months a year  Longest Migration (Bird) Artic Tern fly's over 25,000 miles to the southern ocean.  Longest Migration (Mammal) Gray Whale migrates about 12,500 miles per year. Longest Migration (Insect) Desert Locust travels about 2,800 miles yearly. Longest lived bird (captivity) Sulfur-Crested Cockatoo at over 80 years old. Longest Living Animal Giant Tortoise lives up to 177 years Longest Migration (Land animal) Caribou travels about 3,700 miles yearly. Thanks Ed Pottie for correction Link for Fact Here Longest Migration (Butterfly) Monarch at about 2,000 miles. Longest recorded tail feathers Japanese Phoenix. With proper care my grown to 15-20 feet Thanks Micheal Leishman Longest Tail Feathers Crested Argus Pheasant at 5 feet 7 inches Longest Snake Reticulated Python. Averages longer than 20 ft 6 in. Record 32 ft 9 1/2 in  Longest Fangs (Snake) Gaboon Viper. The snake was 6 ft long with 2 inch long fangs. Longest Venomous Snake King Cobra. Averages 12-15 foot long. Record was 18 ft 9 inches. Longest Tail Dinosaur Diplodocid Diplodocus at 43-45 ft long. Heaviest Living Organism (world) Pando aka The Trembling Giant in Utah, an Aspen tree. 
Est at 6,000 tons   Anaconda  was weighed in at 27 feet, 9 inches long and a big 500 pounds Heaviest Reptile  Marine Leather Back Turtle weighing at 700kmg. (0.8 metric tons) Heaviest Turkey The largest domesticated turkey weighed 81 Lbs.  Heaviest Bear Polar Bear at 2200 something pounds Heaviest Cat (on record) Heaviest Mollusc (and heaviest invertebrate) The giant squid (Architeuthis sp.) The largest giant squid ever recorded (Architeuthis princeps) was captured in 1878. One of the tentacle measured 35 feet long. It is estimated that the animal weighed in the neighborhood of 4000 pounds. Heaviest Fish (bony fish) Ocean Sunfish. Killed in 1908. 10 feet in length, 14 feet between dorsal and anal fins, 4,928 pounds. Struck by a boat. Heaviest Bug Goliath Beetle. About 4.5 inches and 3.5 oz. Heaviest Bird Of Prey Andean Condor at about 20-27 lbs. Heaviest Crustacean American or North Atlantic Lobster at 44 lbs 6 oz. Heaviest Living Organism (world) Pando aka The Trembling Giant in Utah, an Aspen tree. Est at 6,000 tons   Chimaera. These fishes evolved 400 million years ago during the Devonian Period Oldest Pinniped Ringed Seal at 43 years old. Oldest Living Tree (NEW) Pine Tree in California named Methuselah.Over 4700 years old. Kept secret for it's protection.  Oldest Tree Giant Sequoia in California's Sequoia National Park, 2000 to 2100 years old. Measures 275 feet tall and 30 feet across. Oldest Dog Breed Saluki. of Egypt dating back to 7000 - 6000 BC Oldest Cat (on record) Tabby Cat. 34 years old. Oldest Fish The Common Eel at 88 years. Oldest Chelonian Madagascar radiated Tortoise lived to at least 188 years. Oldest Living Organism (world) Pando aka The Trembling Giant in Utah, an Aspen tree. Est. 80,000 years old.  
i don't know
What is the name of the structures which allow leaves to breathe?
How Plants Breathe: The Differences in the Exchange of Gases between Plant Respiration and Photosynthesis. Remember that a green plant respires all the time, day and night. A green plant photosynthesizes only in the presence of sunlight. All parts of the plant respire, the leaves, the stem, the roots and even the flowers. The parts above the soil get their oxygen directly from the air through pores. The pores in the leaves are called stomata (singular: stoma). The pores in the branches of trees are called lenticels. The drawing shows a leaf of a ficus plant. A small part of the underside of the leaf has been magnified to show the stomata. The average number of stomata per mm2 of leaf is around 300. The smallest number is found on Tradescantia leaves which have 14 per mm2. The highest number of stomata is found on the leaves of the Spanish oak tree. Here there are around 1200 per mm2. The roots of a plant also need oxygen which they obtain from the air spaces in the soil. If you give too much water to a plant in a pot you could kill the roots by drowning them! Plants, such as rice, which normally grow in wet soil often have air spaces in their roots. This is so that they can carry air from the atmosphere down to the root tips to be able to respire under water.
Stoma
Which sub-division of plants is named after their practice of forming 'naked seeds'?
Leaf Structure, Function, and Adaptation Leaves have many structures that prevent water loss, transport compounds, aid in gas exchange, and protect the plant as a whole. Learning Objective Describe the internal structure and function of a leaf Key Points The epidermis consists of the upper and lower epidermis; it aids in the regulation of gas exchange via stomata. The epidermis is one layer thick, but may have more layers to prevent transpiration. The cuticle is located outside the epidermis and protects against water loss; trichomes discourage predation. The mesophyll is found between the upper and lower epidermis; it aids in gas exchange and photosynthesis via chloroplasts. The xylem transports water and minerals to the leaves; the phloem transports the photosynthetic products to the other parts of the plant. Plants in cold climates have needle-like leaves that are reduced in size; plants in hot climates have succulent leaves that help to conserve water. Leaf Structure and Function The outermost layer of the leaf is the epidermis. It consists of the upper and lower epidermis, which are present on either side of the leaf. Botanists call the upper side the adaxial surface (or adaxis) and the lower side the abaxial surface (or abaxis). The epidermis aids in the regulation of gas exchange. It contains stomata, which are openings through which the exchange of gases takes place. Two guard cells surround each stoma, regulating its opening and closing. Guard cells are the only epidermal cells to contain chloroplasts. The epidermis is usually one cell layer thick. However, in plants that grow in very hot or very cold conditions, the epidermis may be several layers thick to protect against excessive water loss from transpiration. A waxy layer known as the cuticle covers the leaves of all plant species. The cuticle reduces the rate of water loss from the leaf surface. Other leaves may have small hairs (trichomes) on the leaf surface. Trichomes help to avert herbivory by restricting insect movements or by storing toxic or bad-tasting compounds. They can also reduce the rate of transpiration by blocking air flow across the leaf surface. Trichomes give leaves a fuzzy appearance as in this (a) sundew (Drosera sp.). Leaf trichomes include (b) branched trichomes on the leaf of Arabidopsis lyrata and (c) multibranched trichomes on a mature Quercus marilandica leaf. Below the epidermis of dicot leaves are layers of cells known as the mesophyll, or "middle leaf." The mesophyll of most leaves typically contains two arrangements of parenchyma cells: the palisade parenchyma and spongy parenchyma. The palisade parenchyma (also called the palisade mesophyll) aids in photosynthesis and has column-shaped, tightly-packed cells. It may be present in one, two, or three layers. Below the palisade parenchyma are loosely-arranged cells of an irregular shape. These are the cells of the spongy parenchyma (or spongy mesophyll). The air space found between the spongy parenchyma cells allows gaseous exchange between the leaf and the outside atmosphere through the stomata.
In aquatic plants, the intercellular spaces in the spongy parenchyma help the leaf float. Both layers of the mesophyll contain many chloroplasts. Figure caption: (a) (top) The central mesophyll is sandwiched between an upper and lower epidermis. The mesophyll has two layers: an upper palisade layer and a lower spongy layer. Stomata on the leaf underside allow gas exchange. A waxy cuticle covers all aerial surfaces of land plants to minimize water loss. (b) (bottom) These leaf layers are clearly visible in the scanning electron micrograph. The numerous small bumps in the palisade parenchyma cells are chloroplasts. The bumps protruding from the lower surface of the leaf are glandular trichomes. Similar to the stem, the leaf contains vascular bundles composed of xylem and phloem. The xylem consists of tracheids and vessels, which transport water and minerals to the leaves. The phloem transports the photosynthetic products from the leaf to the other parts of the plant. A single vascular bundle, no matter how large or small, always contains both xylem and phloem tissues.
i don't know
What is the state of inactivity through the dry, summer season, as hibernation is the dormancy of the winter months?
Hibernation | Article about hibernation by The Free Dictionary http://encyclopedia2.thefreedictionary.com/hibernation hibernation (hī'bərnā`shən) [Lat.,= wintering], practice, among certain animals, of spending part of the cold season in a more or less dormant state, apparently as protection from cold when normal body temperature cannot be maintained and food is scarce. Hibernating animals are able to store enough food in their bodies to carry them over until food is again obtainable. They do not grow during hibernation, and all body activities are reduced to a minimum: there may be as few as one or two heartbeats a minute. Cold-blooded animals (e.g., insects, reptiles, amphibians, and fish) must hibernate if they live in environments where the temperature—and hence their own body temperature—drops below freezing. Some insects pass their larval stage in a state of hibernation; in such cases hibernation is closely associated with the reproductive cycle (see larva, the independent, immature animal that undergoes a profound change, or metamorphosis, to assume the typical adult form; and pupa, the third stage in the life of an insect that undergoes complete metamorphosis, i.e., develops from the egg through the larva and the pupa stages to the adult). However, most warm-blooded animals, i.e., birds and mammals, can survive freezing environments because their metabolism controls their body temperatures. Many hibernating animals seek insulation from excessive cold; bears and bats retire to caves, and frogs and fish bury themselves in pond bottoms below the frost line. Analogous to hibernation is aestivation, a dormant period of escape from heat and drought. Other methods of avoiding excessively high or low temperatures and destructive increases or decreases in the water supply are encystment and ensuing dormancy, e.g., in plant seeds and bacteria, and migration. Some animals, such as rabbits, raccoons, and squirrels, store food against scarcity and spend cold periods asleep in their burrows, though they may emerge on warm days. Hibernation A term generally applied to a condition of dormancy and torpor found in cold-blooded (poikilotherm) vertebrates and invertebrates. (The term is also applied to relatively few species of mammals and birds, which are warm-blooded vertebrates.) This rather universal phenomenon can be readily seen when body temperatures of poikilotherm animals drop in a parallel relation to ambient environmental temperatures. Poikilotherm animals Hibernation occurs with exposure to low temperatures and, under normal conditions, occurs principally during winter seasons when there are lengthy periods of low environmental temperatures. A related form of dormancy is known as estivation. Many animals estivate when they are exposed to prolonged periods of drought or during hot, dry summers. For all practical purposes, hibernation and estivation in animals are indistinguishable, except for the nature of the stimulus, which is either cold or an arid environment. There is no complete list of animals that hibernate; however, many examples can be found among the poikilotherms, both vertebrate and invertebrate.
The poikilotherms are sometimes referred to as ectothermic, because their body temperatures are not internally regulated but follow the rise and fall of environmental temperatures. During hibernation and winter torpor, body temperatures reflect the environmental temperature, often to within a fraction of a degree. Among the classic examples of hibernators or estivators are reptiles, amphibians, and fishes among the vertebrates, and insects, mollusks, and many other invertebrates. For many ectothermic vertebrates (fishes, amphibians, and reptiles) the ability to avoid seasonal and periodic environmental rigors by entering a state of metabolic inactivity is a crucial element in their survival. Specifically, winter dormancy and summer estivation—the usual context in which these terms are applied to ectotherms—permit these animals to survive and flourish, first, by reducing the impact of seasonal extremes and, second, by significantly lowering the ectotherm's energetic costs during times that would not be favorable for activity (that is, when food is available). Many terrestrial reptiles, such as lizards, snakes, and turtles, become dormant and hibernate by burrowing in crevices under rocks, logs, and in the ground below the frost line. Terraqueous turtles also become cold-torpid and may often be found completely submerged in mud and in ponds under ice. Since the hibernating reptile is subject to the caprices of duration of seasonal low temperature, there is no well-defined period of dormancy. The period of hibernation may often be related to latitudinal positions as evidenced by the turtle family Emydae. Species that inhabit the northern climes will hibernate longer than their southern relatives, thus showing hibernation periods which are proportional to the length of the winter period. Hibernating reptiles show a loss of appetite and discontinue the ingestion of food. Although the metabolic rate is reduced as much as 95% in hibernating turtles, there is some utilization of stored food products. There are two principal types of reserve food: lipids and glycogen, the animal starch, which is less stable and more rapidly used than fats. Glycogen is generally localized in tissues such as liver and muscle. There is evidence that these reserve foods are selectively utilized. In hibernating turtles, the tissue glycogen is used during the initial days and weeks of hibernation; later, the lipids are utilized. A major hazard to hibernating poikilotherms is death from freezing; ice crystals form in free protoplasmic water and ultimately destroy the cells and tissues, causing the death of the animal. Frogs, salamanders, and turtles are able to survive, despite the reduction in body temperatures to about 32 to 31°F (0 to -1°C). As winter approaches, the water content of the tissues becomes reduced and the blood more concentrated. Hibernation in fishes does not occur. Many fishes do, however, spend much of the winter in a state of quiescence while partially frozen in mud and ice. The phenomenon of estivation is best known in the dipnoans, that is, the lungfishes. These fishes are restricted to tropical regions marked by repeated seasons of drought. They survive the dry seasons by becoming dormant and torpid. The lungfishes are among the more primitive air-breathing animals possessing a lung which utilizes atmospheric oxygen. This lung becomes the primary organ of respiration during the torpidity of estivation. In general, the lungfishes follow a similar behavioral pattern as the dry seasons approach. 
Protopterus, for example, burrows in the bottom mud as the water begins to diminish during the dry season. A lifeline of air is provided by the tunnel from the burrow to the surface. In preparation for estivation, Protopterus secretes a slimy mucus around itself which hardens into a tight cocoonlike chamber, preventing the desiccation of the fish. There is but one opening, formed around the mouth. Thus the air from the tunnel enters the mouth and passes to the lung apparatus. At the termination of the dry season, water slowly enters the burrow, softens the contents, and awakens the lungfish. The metabolism of the lungfish is at a low ebb during estivation, with the energy for its modest life processes provided by the utilization of tissue protein. In some snails estivation may be extended for years at a time, and among the insects and spiders the period of hibernation becomes intimately associated with a phase in the life cycle. During the winter months and during a hot dry summer, the soil contains a remarkable variety of torpid invertebrates, for example, earthworms, snails and slugs, nematodes, insects and spiders, grubs, larvae, and pupae of many insects, egg cases, and cocoons. Insects overwinter, for the most part, in the egg or larval stage of metamorphosis. Hibernation frequently becomes integrated with the diapause, or arrested development, of the egg or larva which occurs during the winter. The familiar cocoon of the butterfly is the hibernaculum of the larva and pupa. See Insect physiology The phenomenon of encystment is commonplace in the protozoa, or single-celled animals. Encystment is remarkably similar to estivation and hibernation, and an encysted protozoon is extremely quiescent and almost nonmetabolizing. See Protozoa The hibernacula of poikilotherm vertebrates and invertebrates are as varied as the animals themselves (see illustration). The minute cysts in protozoa, the cocoon and egg case of insects and spiders, the burrows and crevices of reptiles, and the dried mucous case of the lungfish, in all instances, protect the animal from evaporation or desiccation and freezing. (Illustration: Hibernacula of various cold-blooded vertebrates.) Warm-blooded vertebrates Many mammals and some birds spend at least part of the winter in hiding, but remain no more drowsy than in normal sleep. On the other hand, some mammals undergo a profound decrease in metabolic rate and physiological function during the winter, with a body temperature near 32°F (0°C). This condition, sometimes known as deep hibernation, is the only state in which the warm-blooded vertebrate, with its complex mechanisms for temperature control, abandons its warm-blooded state and chills to the temperature of the environment. Between the drowsy condition and deep hibernation are gradations about which little is known. The bear, skunk, raccoon, and badger are animals which become drowsy in winter. Although usually considered the typical hibernator, the bear's body temperature does not drop more than a few degrees. The deep hibernators are confined to five orders of mammals: the marsupials, the Chiroptera or bats, the insectivores, the rodents, and, probably, the primates. Most, if not all, of the insect-eating bats of temperate climates not only hibernate in the winter, but also drop their body temperature when they roost and sleep. The advantage of this for a small mammal with a disproportionately large heat-losing surface is obvious when conservation of energy is considered.
Many rodents are deep hibernators, including ground squirrels, woodchucks, dormice, and hamsters. The fat-tailed and mouse lemurs are primates that hibernate or estivate. Among birds, the poorwill (Phalaenoptilus) and some hummingbirds and swifts undergo a lowering of body temperature and metabolic rate in cold periods. In all deep hibernators except the bats, hibernation is seasonal, usually occurring during the cold winter months. In all cases, it occurs in animals which would face extremely difficult conditions if they had to remain active and search for food. During a preparation period for hibernation, the animals either become fat, like the woodchuck, or store food in their winter quarters, like the chipmunk and hamster. Prior to hibernation, there is a general involution of the endocrine glands, but at least part of this occurs soon after the breeding season and is not directly concerned with hibernation. Animals such as ground squirrels become more torpid during the fall, even when kept in a warm environment, indicating a profound metabolic change which may be controlled by the endocrine glands. In most hibernators lack of food has little if any effect, and the stimulus for hibernation is not known. It has been reported that an extract from the blood of an animal in hibernation will induce hibernation when infused into an active potential hibernator, indicating that the factor which produces hibernation may be bloodborne. Hibernation in mammals is not caused by an inability to remain warm when exposed to cold, for hibernators are capable of very high metabolic rates and sometimes do not enter hibernation even if exposed to cold for months at a time. When the animal is entering hibernation, heart rate and oxygen consumption decline before body temperature, indicating that the animal is actively damping its heat-generating mechanisms. The autonomic nervous system is involved in this process. As normal hibernation deepens, the heart rate, blood pressure, metabolic rate, and body temperature slowly drop, but in some animals periodic bouts of shivering and increased oxygen consumption occur, elevating the body temperature temporarily and causing a stepwise entrance into hibernation. See Autonomic nervous system. In deep hibernation at a steady state the body temperature is 0.9–3.6°F (0.5–2°C) above that of the environment, and it is a peculiarity of hibernators that the vital processes can function at lower temperatures than those of nonhibernators. The heart rate varies between 3 and 15 beats per minute. The metabolic rate is less than one-thirtieth of the warm-blooded rate at rest, and the main source of energy is fat. In spite of its low body temperature, the hibernating animal retains remarkably rigid control of its internal environment. If the environmental temperature drops to 32°F (0°C), the hibernating animal may respond either by increasing its metabolic rate and remaining in hibernation or by a complete arousal from the hibernating state. A hibernating mammal reduces its metabolic rate by nearly 30-fold and shifts from glycogen to lipid (that is, fat stores) as the major fuel source for metabolism. The magnitude of this metabolic rate reduction is far in excess of what would be expected solely as a result of the hibernator's lowered body temperature. Moreover, suppression of glycogen metabolism during hibernation must be poised for regular and rapid relaxation during periods of arousal (which are fueled by glycolysis) as well as at the end of the hibernation period.
The mechanisms controlling these aspects of hibernation metabolism appear to involve a relative acidification of the hibernator's intracellular fluids. This is a consequence of the hibernator's tendency to continuously regulate its blood pH (at about pH 7.4, termed pH-stat regulation), and of the adoption of a modified breathing pattern that, although variable among species, is typified by periods of apnea lasting up to 2 h interspersed with 3–30-min intervals of rapid ventilation. The hibernator is capable of waking at any time, using self-generated heat, and this characteristic clearly separates the hibernating state from any condition of induced hypothermia. During the total period of hibernation, the hibernator spontaneously wakes from time to time, usually at least once a week. In these periods of wakefulness the stored food is evidently eaten, but animals which do not store food rely on their fat for the extra energy throughout the winter. The cause of the periodic arousals has not been definitely determined, but it is theorized that arousal is triggered by the accumulation of a metabolite or other substance which can be neutralized only in the warm-blooded state. As in hibernating endotherms (birds and mammals), a key factor regulating seasonal torpor in ectotherms is the continuous internal monitoring of environmental cues, such as day length, which in turn triggers temporally precise, seasonally adaptive changes in systemic function, metabolism, and behavior. A second important factor is the presence in ectotherms of a bioenergetic metabolic system that, compared with that of mammals and birds, operates at a much lower intensity and has less absolute dependence on molecular oxygen. The metabolic energy adaptations for seasonal torpor in ectothermic vertebrates are to a large extent similar to those required by vigorous activity or prolonged diving, and thus involve the processing or storage of intermediate metabolites such as lactic acid, the regulation of intra- and extracellular pH, and the endurance of periods without access to oxygen. See Energy metabolism, Metabolism.
Aestivation
What kind of a creature is a scorpion?
Christmas in Yellowstone | Animals That Hibernate | Nature | PBS. A precise, fixed definition of hibernation is somewhat elusive. A common way of explaining hibernation is to call it a state of winter dormancy, a period of inactivity in which an animal will conserve energy by maintaining a lowered body temperature for most of the winter. Yet there are also creatures, such as the North American desert tortoise, that employ a comparable state of dormancy and body-temperature regulation to protect themselves against extreme heat; this may be called aestivation. There is another term, brumation, that refers to a winter state of sluggishness in certain reptiles and amphibians, which don't maintain a high body temperature. Body temperature, then, may be somewhat misleading as a telltale indicator of hibernation. Perhaps the best way to look at hibernation, given the circumstances, is as a set of seasonal adaptations that animals employ to survive regularly occurring periods of famine. Here is a closer look at the adaptations of five animals that face the challenges of a winter climate. Bats: Northern bat species overwinter in caves where they can maintain a lowered body temperature that will allow them to pass the winter months, when their prey will disappear, without feeding. The choice of a cave is paramount: too warm, and the bats' level of activity will rise and they'll starve; too cold, and the bats will freeze or exhaust themselves shivering to keep warm. To find the right cave, the bats must migrate, sometimes hundreds of miles. Bats from the north may fly south, while bats from the south may fly north. With experience, the bats learn which caves are best suited to winter survival, and they will return to a good cave year after year, thousands upon thousands of bats in a single cave. Bears: Bears are often thought of as a very typical hibernating animal when, in fact, their winter behavior is quite extraordinary. In the late summer and early autumn, bears will begin to gorge on food, eating five times their usual diet to accumulate a five-inch layer of fat that will sustain them the entire winter. In the late autumn the bears will lose their appetites and seek out a potential den. Black bears make their dens in a hollow tree, under roots, or beneath a pile of brush or branches. Grizzlies dig their dens into the side of a hill, covering the floor with branches and grass. Once in its den, a bear's body temperature will never drop very far below normal and, though it becomes groggy, a bear is easily roused by outside disturbances. What is so unusual is that the bears' metabolism is not nearly normal but quite extraordinary: bears don't need to drink or urinate all winter long. They survive on the thickness of their fat alone, while nitrogen, a potentially harmful metabolic waste product that humans excrete through urination, is actually recycled into protein, which helps maintain the bears' muscle tissue even as they are inactive all winter long. Meanwhile, the bears stay hydrated because water is not lost through urination. These and other physiological processes, such as how the bears keep their bone strength during these prolonged periods of inactivity, are currently being studied for their potential to help humans offset the negative effects of aging, osteoporosis, space travel, and the lethargy of modern-day office work.
Chipmunks: Chipmunks are a kind of ground squirrel known for their enormous, expanding cheeks, which, each fall, they will fill to capacity with load upon load of seeds. The seeds are carried back to the chipmunk's underground system of hibernation burrows, where there is a special chamber reserved for foodstuffs. Chipmunks spend most of the winter underground in a state of dormancy, their body temperatures lowered to conserve energy. Yet the chipmunks won't hibernate all winter. By March they will be awake again for mating season, in spite of the often still-deep snow, and it is then that the stash of seeds proves useful. Deer Mice: Deer mice range from the northern tree line in Alaska and Canada south to central Mexico, covering much of North America. They are a small species, 10 to 24 grams, roughly the size of house mice, yet they do not hibernate, which is somewhat surprising for a creature so tiny. Instead, they employ a series of adaptations to conserve energy. First, they build deep nests in tree holes, stumps, logs, or even cabins or other outbuildings. They also huddle together, as many as ten or more mice, to conserve heat. Deer mice are nocturnal, so during the day they may go into torpor, a state of inactivity in which their body temperatures drop; usually the torpor ends by late afternoon. Altogether, these three adaptations will allow a deer mouse to save up to 2.5 times the energy of a mouse that doesn't employ these techniques. Wood Frogs: Wood frogs can be found farther north than any other reptile or amphibian in North America, from the northeastern United States, across Canada, and into Alaska. In the winter, one of these frogs may be found underneath a pile of leaves, rock hard and cold, apparently frozen. The frog isn't dead, however, but engaged in a physiological process known as frost tolerance, in which as much as 65 percent of its total body water will freeze. The frog survives by not allowing ice to form in the cells themselves, which could slash and permanently damage the membranes. The frost-tolerance process is triggered by the onset of cold weather, when the frog's fight-or-flight response releases adrenaline into the blood, which, in turn, sets off a response in the frog's liver that converts glycogen to glucose. This glucose is what prevents the actual cells from freezing, while special proteins allow only the liquid within the frog to be concentrated together and frozen. Meanwhile, the frog's breathing and heartbeat will stop until the spring, when the frog will thaw out and resume breathing and pumping blood again. Although many amphibians survive the winter by digging down into the ground, below the frost line, three other North American frog species employ the wood frog's method of frost tolerance: the gray tree frog, spring peeper, and chorus frog.
i don't know
Which part of the common valerian is used to make a sedative?
Valerian — Health Professional Fact Sheet Disclaimer Key points This fact sheet provides an overview of the use of valerian for insomnia and other sleep disorders and contains the following key information: Valerian is an herb sold as a dietary supplement in the United States. Valerian is a common ingredient in products promoted as mild sedatives and sleep aids for nervous tension and insomnia. Evidence from clinical studies of the efficacy of valerian in treating sleep disorders such as insomnia is inconclusive. Constituents of valerian have been shown to have sedative effects in animals, but there is no scientific agreement on valerian's mechanisms of action . Although few adverse events have been reported, long-term safety data are not available. What is valerian? Valerian (Valeriana officinalis), a member of the Valerianaceae family, is a perennial plant native to Europe and Asia and naturalized in North America [ 1 ]. It has a distinctive odor that many find unpleasant [ 2 , 3 ]. Other names include setwall (English), Valerianae radix (Latin), Baldrianwurzel (German), and phu (Greek). The genus Valerian includes over 250 species , but V. officinalis is the species most often used in the United States and Europe and is the only species discussed in this fact sheet [ 3 , 4 ]. What are common valerian preparations? Preparations of valerian marketed as dietary supplements are made from its roots, rhizomes (underground stems), and stolons (horizontal stems). Dried roots are prepared as teas or tinctures , and dried plant materials and extracts are put into capsules or incorporated into tablets [ 5 ]. There is no scientific agreement as to the active constituents of valerian, and its activity may result from interactions among multiple constituents rather than any one compound or class of compounds [ 6 ]. The content of volatile oils , including valerenic acids ; the less volatile sesquiterpenes ; or the valepotriates ( esters of short-chain fatty acids ) is sometimes used to standardize valerian extracts. As with most herbal preparations, many other compounds are also present. Valerian is sometimes combined with other botanicals [ 5 ]. Because this fact sheet focuses on valerian as a single ingredient, only clinical studies evaluating valerian as a single agent are included. What are the historical uses of valerian? Valerian has been used as a medicinal herb since at least the time of ancient Greece and Rome. Its therapeutic uses were described by Hippocrates, and in the 2nd century, Galen prescribed valerian for insomnia [ 5 , 7 ]. In the 16th century, it was used to treat nervousness, trembling, headaches, and heart palpitations [ 8 ]. In the mid-19th century, valerian was considered a stimulant that caused some of the same complaints it is thought to treat and was generally held in low esteem as a medicinal herb [ 2 ]. During World War II, it was used in England to relieve the stress of air raids [ 9 ]. In addition to sleep disorders, valerian has been used for gastrointestinal spasms and distress , epileptic seizures, and attention deficit hyperactivity disorder . However, scientific evidence is not sufficient to support the use of valerian for these conditions [ 10 ]. What clinical studies have been done on valerian and sleep disorders? 
In a systematic review of the scientific literature , nine randomized , placebo-controlled , double-blind clinical trials of valerian and sleep disorders were identified and evaluated for evidence of efficacy of valerian as a treatment for insomnia [ 11 ]. Reviewers rated the studies with a standard scoring system to quantify the likelihood of bias inherent in the study design [ 12 ]. Although all nine trials had flaws, three earned the highest rating (5 on a scale of 1 to 5) and are described below. Unlike the six lower-rated studies, these three studies described the randomization procedure and blinding method that were used and reported rates of participant withdrawal . The first study used a repeated-measures design; 128 volunteers were given 400 mg of an aqueous extract of valerian, a commercial preparation containing 60 mg valerian and 30 mg hops, and a placebo [ 13 ]. Participants took each one of the three preparations three times in random order on nine nonconsecutive nights and filled out a questionnaire the morning after each treatment. Compared with the placebo, the valerian extract resulted in a statistically significant subjective improvement in time required to fall asleep (more or less difficult than usual), sleep quality (better or worse than usual), and number of nighttime awakenings (more or less than usual).This result was more pronounced in a subgroup of 61 participants who identified themselves as poor sleepers on a questionnaire administered at the beginning of the study. The commercial preparation did not produce a statistically significant improvement in these three measures. The clinical significance of the use of valerian for insomnia cannot be determined from the results of this study because having insomnia was not a requirement for participation. In addition, the study had a participant withdrawal rate of 22.9%, which may have influenced the results. In the second study, eight volunteers with mild insomnia (usually had problems falling asleep) were evaluated for the effect of valerian on sleep latency (defined as the first 5-minute period without movement) [ 14 ]. Results were based on nighttime motion measured by activity meters worn on the wrist and on responses to questionnaires about sleep quality, latency, depth, and morning sleepiness filled out the morning after each treatment. The test samples were 450 or 900 mg of an aqueous valerian extract and a placebo. Each volunteer was randomly assigned to receive one test sample each night, Monday through Thursday, for 3 weeks for a total of 12 nights of evaluation. The 450-mg test sample of valerian extract reduced average sleep latency from about 16 to 9 minutes, which is similar to the activity of prescription benzodiazepine medication (used as a sedative or tranquilizer ). No statistically significant shortening of sleep latency was seen with the 900-mg test sample. Evaluation of the questionnaires showed a statistically significant improvement in subjectively measured sleep. On a 9-point scale, participants rated sleep latency as 4.3 after the 450-mg test sample and 4.9 after the placebo. The 900-mg test sample increased the sleep improvement but participants noted an increase in sleepiness the next morning. Although statistically significant, this 7-minute reduction in sleep latency and the improvement in subjective sleep rating are probably not clinically significant. The small sample size makes it difficult to generalize the results to a broader population . 
The third study examined longer-term effects in 121 participants with documented nonorganic insomnia [ 15 ]. Participants received either 600 mg of a standardized commercial preparation of dried valerian root (LI 156, Sedonium? * ) or placebo for 28 days. Several assessment tools were used to evaluate the effectiveness and tolerance of the interventions, including questionnaires on therapeutic effect (given on days 14 and 28), change in sleep patterns (given on day 28), and changes in sleep quality and well-being (given on days 0 , 14, and 28). After 28 days, the group receiving the valerian extract showed a decrease in insomnia symptoms on all the assessment tools compared with the placebo group. The differences in improvement between valerian and placebo increased between the assessments done on days 14 and 28. * The mention of a specific brand name is not an endorsement of the product. The reviewers concluded that these nine studies are not sufficient for determining the effectiveness of valerian to treat sleep disorders [ 11 ]. For example, none of the studies checked the success of the blinding, none calculated the sample size necessary for seeing a statistical effect , only one partially controlled prebedtime variables [ 15 ], and only one validated outcome measures [ 13 ]. Two other randomized, controlled trials published after the systematic review described above [ 11 ] are presented below: In a randomized, double-blind study, 75 participants with documented nonorganic insomnia were randomly assigned to receive 600 mg of a standardized commercial valerian extract (LI 156) or 10 mg oxazepam (a benzodiazepine medication) for 28 days [ 16 ]. Assessment tools used to evaluate the effectiveness and tolerance of the interventions included validated sleep, mood scale, and anxiety questionnaires as well as sleep rating by a physician (on days 0, 14, and 28). Treatment result was determined via a 4-step rating scale at the end of the study (day 28). Both groups had the same improvement in sleep quality but the valerian group reported fewer side effects than did the oxazepam group. However, this study was designed to show superiority, if any, of valerian over oxazepam and its results cannot be used to show equivalence. In a randomized, double-blind, placebo-controlled crossover study, researchers evaluated sleep parameters with polysomnographic techniques that monitored sleep stages, sleep latency, and total sleep time to objectively measure sleep quality and stages [ 17 ]. Questionnaires were used for subjective measurement of sleep parameters. Sixteen participants with medically documented nonorganic insomnia were randomly assigned to receive either a single dose and a 14-day administration of 600 mg of a standardized commercial preparation of valerian (LI 156) or placebo. Valerian had no effect on any of the 15 objective or subjective measurements except for a decrease in slow-wave sleep onset (13.5 minutes) compared with placebo (21.3 minutes). During slow-wave sleep, arousability, skeletal muscle tone, heart rate, blood pressure, and respiratory frequency decreased. Increased time spent in slow-wave sleep may decrease insomnia symptoms. However, because all but 1 of the 15 endpoints showed no difference between placebo and valerian, the possibility that the single endpoint showing a difference was the result of chance must be considered. The valerian group reported fewer adverse events than did the placebo group. 
Although the results of some studies suggest that valerian may be useful for insomnia and other sleep disorders, results of other studies do not. Interpretation of these studies is complicated by the fact the studies had small sample sizes, used different amounts and sources of valerian, measured different outcomes, or did not consider potential bias resulting from high participant withdrawal rates. Overall, the evidence from these trials for the sleep-promoting effects of valerian is inconclusive. How does valerian work? Many chemical constituents of valerian have been identified, but it is not known which may be responsible for its sleep-promoting effects in animals and in in vitro studies. It is likely that there is no single active compound and that valerian's effects result from multiple constituents acting independently or synergistically [18, reviewed in 19]. Two categories of constituents have been proposed as the major source of valerian's sedative effects. The first category comprises the major constituents of its volatile oil including valerenic acid and its derivatives , which have demonstrated sedative properties in animal studies [ 6 , 20 ]. However, valerian extracts with very little of these components also have sedative properties, making it probable that other components are responsible for these effects or that multiple constituents contribute to them [ 21 ]. The second category comprises the iridoids , which include the valepotriates. Valepotriates and their derivatives are active as sedatives in vivo but are unstable and break down during storage or in an aqueous environment, making their activity difficult to assess [ 6 , 20 , 22 ]. A possible mechanism by which a valerian extract may cause sedation is by increasing the amount of gamma aminobutyric acid (GABA, an inhibitory neurotransmitter ) available in the synaptic cleft . Results from an in vitro study using synaptosomes suggest that a valerian extract may cause GABA to be released from brain nerve endings and then block GABA from being taken back into nerve cells [ 23 ]. In addition, valerenic acid inhibits an enzyme that destroys GABA [reviewed in 24]. Valerian extracts contain GABA in quantities sufficient to cause a sedative effect, but whether GABA can cross the blood-brain barrier to contribute to valerian's sedative effects is not known. Glutamine is present in aqueous but not in alcohol extracts and may cross the blood-brain barrier and be converted to GABA [ 25 ]. Levels of these constituents vary significantly among plants depending on when the plants are harvested, resulting in marked variability in the amounts found in valerian preparations [ 26 ]. What is the regulatory status of valerian in the United States? In the United States, valerian is sold as a dietary supplement, and dietary supplements are regulated as foods, not drugs. Therefore, premarket evaluation and approval by the Food and Drug Administration are not required unless claims are made for specific disease prevention or treatment. Because dietary supplements are not always tested for manufacturing consistency, the composition may vary considerably between manufacturing lots . Can valerian be harmful? Few adverse events attributable to valerian have been reported for clinical study participants. Headaches, dizziness, pruritus , and gastrointestinal disturbances are the most common effects reported in clinical trials but similar effects were also reported for the placebo [ 14-17 ]. 
In one study an increase in sleepiness was noted the morning after 900 mg of valerian was taken [ 14 ]. Investigators from another study concluded that 600 mg of valerian (LI 156) did not have a clinically significant effect on reaction time, alertness, and concentration the morning after ingestion [ 27 ]. Several case reports described adverse effects , but in one case where suicide was attempted with a massive overdose it is not possible to clearly attribute the symptoms to valerian [ 28-31 ]. Valepotriates, which are a component of valerian but are not necessarily present in commercial preparations, had cytotoxic activity in vitro but were not carcinogenic in animal studies [ 32-35 ]. Who should not take valerian? Women who are pregnant or nursing should not take valerian without medical advice because the possible risks to the fetus or infant have not been evaluated [ 36 ]. Children younger than 3 years old should not take valerian because the possible risks to children of this age have not been evaluated [ 36 ]. Individuals taking valerian should be aware of the theoretical possibility of additive sedative effects from alcohol or sedative drugs, such as barbiturates and benzodiazepines [ 10 , 37 , 38 ]. Does valerian interact with any drugs or supplements or affect laboratory tests? Valerian might have additive therapeutic and adverse effects if taken with sedatives, other medications, or certain herbs and dietary supplements with sedative properties [ 39 ]. These include the following: Benzodiazepines such as Xanax®, Valium®, Ativan®, and Halcion®. Barbiturates or central nervous system (CNS) depressants such as phenobarbital (Luminal®), morphine, and propofol (Diprivan®). Dietary supplements such as St. John’s wort, kava, and melatonin. Individuals taking these medications or supplements should discuss the use of valerian with their healthcare providers. Although valerian has not been reported to influence laboratory tests , this has not been rigorously studied [ 5 , 36 , 39 ]. What are some additional sources of scientific information on valerian? Medical libraries are a source of information about medicinal herbs. Other sources include Web-based resources such as PubMed . For general information on botanicals and their use as dietary supplements, please see Background Information About Botanical Dietary Supplements and Background Information About Dietary Supplements from the Office of Dietary Supplements (ODS). References Wichtl M, ed.: Valerianae radix. In: Bisset NG, trans-ed. Herbal Drugs and Phytopharmaceuticals: A Handbook for Practice on a Scientific Basis. Boca Raton, FL: CRC Press, 1994: 513-516. Pereira J: Valeriana officinalis: common valerian. In: Carson J, ed. The Elements of Materia Medica and Therapeutics. 3rd ed. Philadelphia: Blanchard and Lea, 1854: 609-616. Schulz V, Hansel R, Tyler VE: Valerian. In: Rational Phytotherapy. 3rd ed. Berlin: Springer, 1998: 73-81. Davidson JRT, Connor KM: Valerian. In: Herbs for the Mind: Depression, Stress, Memory Loss, and Insomnia. New York: Guilford Press, 2000: 214-233. Blumenthal M, Goldberg A, Brinckmann J, eds.: Valerian root. In: Herbal Medicine: Expanded Commission E Monographs. Newton, MA: Integrative Medicine Communications, 2000: 394-400. Hendriks H, Bos R, Allersma DP, Malingre M, Koster AS: Pharmacological screening of valerenal and some other components of essential oil of Valeriana officinalis. Planta Medica 42: 62-68, 1981 [ PubMed abstract ] Turner W: Of Valerianae. In: Chapman GTL, McCombie F, Wesencraft A, eds. 
A New Herbal, Parts II and III. Cambridge: Cambridge University Press, 1995: 464-466, 499-500, 764-765. [Republication of parts II and III of A New Herbal, by William Turner, originally published in 1562 and 1568, respectively.] Culpeper N: Garden valerian. In: Culpeper's Complete Herbal. New York: W. Foulsham, 1994: 295-297. [Republication of The English Physitian, by Nicholas Culpeper, originally published in 1652.] Grieve M: Valerian. In: A Modern Herbal. New York: Hafner Press, 1974: 824-830. Jellin JM, Gregory P, Batz F, et al.: Valerian In: Pharmacist’s Letter/Prescriber’s Letter Natural Medicines Comprehensive Database. 3rd ed. Stockton, CA: Therapeutic Research Faculty, 2000: 1052-1054. Stevinson C, Ernst E: Valerian for insomnia: a systematic review of randomized clinical trials. Sleep Medicine 1: 91-99, 2000. [ PubMed abstract ] Jadad AR, Moore RA, Carroll D, et al.: Assessing the quality of reports of randomized clinical trials: is blinding necessary? Controlled Clinical Trials 17: 1-12, 1996. [ PubMed abstract ] Leathwood PD, Chauffard F, Heck E, Munoz-Box R: Aqueous extract of valerian root (Valeriana officinalis L.) improves sleep quality in man. Pharmacology, Biochemistry and Behavior 17: 65-71, 1982. [ PubMed abstract ] Leathwood PD, Chauffard F: Aqueous extract of valerian reduces latency to fall asleep in man. Planta Medica 2: 144-148, 1985. [ PubMed abstract ] Vorbach EU, Gortelmeyer R, Bruning J: Treatment of insomnia: effectiveness and tolerance of a valerian extract [in German]. Psychopharmakotherapie 3: 109-115, 1996. Dorn M: Valerian versus oxazepam: efficacy and tolerability in nonorganic and nonpsychiatric insomniacs: a randomized, double-blind, clinical comparative study [in German]. Forschende Komplementärmedizin und Klassische Naturheilkunde 7: 79-84, 2000. [ PubMed abstract ] Donath F, Quispe S, Diefenbach K, Maurer A, Fietze I, Roots I: Critical evaluation of the effect of valerian extract on sleep structure and sleep quality. Pharmacopsychiatry 33: 47-53, 2000. [ PubMed abstract ] Russo EB: Valerian. In: Handbook of Psychotropic Herbs: A Scientific Analysis of Herbal Remedies in Psychiatric Conditions. Binghamton, NY: Haworth Press, 2001: 95-106. Houghton PJ: The scientific basis for the reputed activity of valerian. Journal of Pharmacy and Pharmacology 51: 505-512, 1999. Hendriks H, Bos R, Woerdenbag HJ, Koster AS. Central nervous depressant activity of valerenic acid in the mouse. Planta Medica 1: 28-31, 1985. [ PubMed abstract ] Krieglstein VJ, Grusla D. Central depressing components in Valerian: Valeportriates, valeric acid, valerone, and essential oil are inactive, however [in German]. Deutsche Apotheker Zeitung 128:2041-2046, 1988. Bos R, Woerdenbag HJ, Hendriks H, et al.: Analytical aspects of phytotherapeutic valerian preparations. Phytochemical Analysis 7: 143-151, 1996. Santos MS, Ferreira F, Cunha AP, Carvalho AP, Macedo T: An aqueous extract of valerian influences the transport of GABA in synaptosomes. Planta Medica 60: 278-279, 1994. [ PubMed abstract ] Morazzoni P, Bombardelli E: Valeriana officinalis: traditional use and recent evaluation of activity. Fitoterapia 66: 99-112, 1995. Cavadas C, Araujo I, Cotrim MD, et al.: In vitro study on the interaction of Valeriana officinalis L. extracts and their amino acids on GABAA receptor in rat brain. Arzneimittel-Forschung Drug Research 45: 753-755, 1995. 
[ PubMed abstract ] Bos R, Woerdenbag HJ, van Putten FMS, Hendriks H, Scheffer JJC: Seasonal variation of the essential oil, valerenic acid and derivatives, and valepotriates in Valeriana officinalis roots and rhizomes, and the selection of plants suitable for phytomedicines. Planta Medica 64:143-147, 1998. [ PubMed abstract ] Kuhlmann J, Berger W, Podzuweit H, Schmidt U: The influence of valerian treatment on “reaction time, alertness and concentration” in volunteers. Pharmacopsychiatry 32: 235-241, 1999. [ PubMed abstract ] MacGregor FB, Abernethy VE, Dahabra S, Cobden I, Hayes PC: Hepatotoxicity of herbal remedies. British Medical Journal 299: 1156-1157, 1989. [ PubMed abstract ] Mullins ME, Horowitz BZ: The case of the salad shooters: intravenous injection of wild lettuce extract. Veterinary and Human Toxicology 40: 290-291, 1998. [ PubMed abstract ] Garges HP, Varia I, Doraiswamy PM: Cardiac complications and delirium associated with valerian root withdrawal. Journal of the American Medical Association 280: 1566-1567, 1998. [ PubMed abstract ] Willey LB, Mady SP, Cobaugh DJ, Wax PM: Valerian overdose: a case report. Veterinary and Human Toxicology 37: 364-365, 1995. [ PubMed abstract ] Bounthanh, C, Bergmann C, Beck JP, Haag-Berrurier M, Anton R. Valepotriates, a new class of cytotoxic and antitumor agents. Planta Medica 41: 21-28, 1981. [ PubMed abstract ] Bounthanh, C, Richert L, Beck JP, Haag-Berrurier M, Anton R: The action of valepotriates on the synthesis of DNA and proteins of cultured hepatoma cells. Journal of Medicinal Plant Research 49: 138-142, 1983. [ PubMed abstract ] Tufik S, Fuhita K, Seabra ML, Lobo LL: Effects of a prolonged administration of valepotriates in rats on the mothers and their offspring. Journal of Ethnopharmacology 41: 39-44, 1996. [ PubMed abstract ] Bos R, Hendriks H, Scheffer JJC, Woerdenbag HJ: Cytotoxic potential of valerian constituents and valerian tinctures. Phytomedicine 5: 219-225, 1998. European Scientific Cooperative on Phytotherapy: Valerianae radix: valerian root. In: Monographs on the Medicinal Uses of Plant Drugs. Exeter, UK: ESCOP, 1997: 1-10. Rotblatt M, Ziment I. Valerian (Valeriana officinalis). In: Evidence-Based Herbal Medicine. Philadelphia: Hanley & Belfus, Inc., 2002: 355-359. Givens M, Cupp MJ: Valerian. In: Cupps MJ, ed. Toxicology and Clinical Pharmacology of Herbal Products. Totowa, NJ: Humana Press, 2000: 53-66. . Valerian. 2013. Disclaimer This fact sheet by the Office of Dietary Supplements provides information that should not take the place of medical advice. We encourage you to talk to your healthcare providers (doctor, registered dietitian, pharmacist, etc.) about your interest in, questions about, or use of dietary supplements and what may be best for your overall health. Any mention in this publication of a specific brand name is not an endorsement of the product. Updated: March 15, 2013
Root
What is the name of the structures which allow stems to breathe?
Valerian - American Family Physician Valerian SUSAN HADLEY, M.D., Middlesex Hospital, Middletown, Connecticut JUDITH J. PETRY, M.D., Vermont Healing Tools Project, Brattleboro, Vermont Am Fam Physician. 2003 Apr 15;67(8):1755-1758. References Valerian is a traditional herbal sleep remedy that has been studied with a variety of methodologic designs using multiple dosages and preparations. Research has focused on subjective evaluations of sleep patterns, particularly sleep latency, and study populations have primarily consisted of self-described poor sleepers. Valerian improves subjective experiences of sleep when taken nightly over one- to two-week periods, and it appears to be a safe sedative/hypnotic choice in patients with mild to moderate insomnia. The evidence for single-dose effect is contradictory. Valerian is also used in patients with mild anxiety, but the data supporting this indication are limited. Although the adverse effect profile and tolerability of this herb are excellent, long-term safety studies are lacking. The root of valerian, a perennial herb native to North America, Asia, and Europe, is used most commonly for its sedative and hypnotic properties in patients with insomnia, and less commonly as an anxiolytic. Multiple preparations are available, and the herb is commonly combined with other herbal medications. This review addresses only studies that used valerian root as an isolated herb. As with most herbal products available in the United States, valerian root extracts are not regulated for quality or consistency. Independent testing laboratories (such aswww.consumerlab.com) generally use valeric acid content as a marker for pharmacologic activity and represent one source for reliable information to support product choice. Pharmacology References The chemical composition of valerian includes sesquiterpenes of the volatile oil (including valeric acid), iridoids (valepotriates), alkaloids, furanofuran lignans, and free amino acids such as γ-aminobutyric acid (GABA), tyrosine, arginine, and glutamine. Although the sesquiterpene components of the volatile oil are believed to be responsible for most of valerian's biologic effects, it is likely that all of the active constituents of valerian act in a synergistic manner to produce a clinical response. 1 Research into physiologic activity of individual components has demonstrated direct sedative effects (valepotriates, valeric acid) and interaction with neurotransmitters such as GABA (valeric acid and unknown fractions). 2 , 3 Uses and Efficacy References SEDATIVE/HYPNOTIC Several clinical studies have shown that valerian is effective in the treatment of insomnia, most often by reducing sleep latency. A double-blind, placebo-controlled trial 4 compared a 400-mg aqueous extract of valerian and a commercial valerian/hops preparation with placebo of encapsulated brown sugar. A total of 128 volunteers completed a subjective study 4 evaluating the effects of single doses of each test compound taken in random order on sleep latency, sleep quality, sleepiness on awakening, night awakenings, and dream recall. Valerian extract demonstrated statistically significant improvement over placebo in sleep latency and sleep quality. There was no difference between valerian extract and placebo in the other two parameters. The commercial valerian/hops preparation resulted in no changes in sleep latency, sleep quality, or night awakenings, and an increase in sleepiness on awakening. 
No information on the preparation of the commercial product was available, so the reasons for the lack of effect are unknown. Examination of the study subgroups showed that the positive effects of valerian extract on sleep were most significant in older male patients who considered themselves to be poor sleepers, female poor sleepers, younger poor sleepers, smokers, and those who habitually have lengthy sleep latencies. Subjects who rated themselves as habitually good sleepers were largely unaffected by the valerian extract. 4 In a double-blind study, 5 eight subjects who described themselves as having lengthy sleep latency wore a wrist activity meter and provided subjective sleep ratings in a study of the effects of valerian. Participants received either a 450- or 900-mg dose of an aqueous extract of valerian root or placebo. Single-dose (450 and 900 mg) valerian extract resulted in significant decreases in measured and subjective sleep latency and more stable sleep during the first quarter of the night, with no effect on total sleep time. The 900-mg dose produced increased sleepiness on awakening compared with placebo. A randomized, placebo-controlled, double-blind, cross-over study 6 involving 16 patients with insomnia confirmed by polysomnography demonstrated no effects on sleep efficiency after a single 600-mg dose of the valerian extract Sedonium, while multiple doses over 14 days resulted in significant improvement in parameters of slow-wave sleep measured by polysomnography. There was a nonsignificant trend toward reduced subjective sleep latency after the long-term valerian treatment. 6 Several studies have shown valerian's efficacy in patients who do not have sleep disturbances. A small study 7 of 10 patients at home and eight patients at a sleep laboratory who received two different dosages (450 and 900 mg) of an aqueous extract of valerian root demonstrated that both groups experienced a greater than 50 percent improvement in sleep latency and wake time after sleep onset. The efficacy results were based on questionnaires, self-rating scales, and nighttime motor activity. Electroencephalographic recordings in the laboratory section of the study showed no differences in efficacy between valerian and placebo, and data indicated a dose-dependent mild hypnotic effect of the valerian extract. 7 A recent systematic review 8 of randomized trials of the effect of valerian on patients with insomnia included reports in all languages. 8 [Evidence level B, systematic review of studies other than randomized controlled trials (RCTs)] The authors found nine randomized, double-blind, placebo-controlled trials that met the inclusion criteria. Two studies 9 , 10 showed improvement in sleep-related parameters in patients with insomnia who received repeated administration over two to four weeks. Another study 11 demonstrated effects after days 1 and 8 in slow-wave sleep, but no effect on subjective measures of sleep. Results were contradictory in six acute-dose studies. 4 , 5 , 7 , 12 , 13 The authors pointed out the wide variety of methodologies used in the studies, and the lack of attention to factors such as randomization, blinding, compliance, withdrawal, confounding variables, diagnostic criteria, and statistical analysis. They concluded that evidence for valerian in the treatment of insomnia is inconclusive, and that more rigorous trials are necessary. 
A recent multicenter 14 (RCT) compared a 600-mg dose of the valerian extract Sedonium with 10 mg of oxazepam over a six-week period in 202 patients who were diagnosed with non-organic insomnia. The two agents were equally effective in increasing sleep quality as measured by the Sleep Questionnaire B (SF-B), and these results were confirmed by subscales of the SF-B, the Clinical Global Impression Scale, and the Global Assessment of Efficacy. Mild to moderate adverse events occurred in 28.4 percent of patients receiving the valerian extract and 36.0 percent of patients taking oxazepam. Anxiolytic References Traditional herbalists have used valerian as an anxiolytic, frequently in combination with other herbal preparations such as passion flower and St. John's wort. There is a minimal amount of scientific data confirming this indication for valerian. One randomized, double-blind, placebo-controlled trial 15 compared valerian (100 mg) with propranolol (20 mg), a valerian-propranolol combination, and placebo in an experimental stress situation in 48 healthy subjects. Unlike propranolol, valerian had no effect on physiologic arousal but significantly decreased subjective feelings of somatic arousal. In a recent preliminary, randomized, double-blind, placebo-controlled trial, 16 36 patients with a diagnosis of generalized anxiety disorder were treated with placebo, diazepam in a dosage of 2.5 mg three times daily, or valerian extract in a dosage of 50 mg three times daily (80 percent dihydrovaltrate, 15 percent valtrate, and 5 percent acevaltrate; BYK-Gulden, Lomberg, Germany) for four weeks. Dosage was regulated at one week if an interviewing psychiatrist deemed an increase or decrease necessary. Although the study was limited by a small number of patients in each group, relatively low dosages of the active agents, and a short duration of treatment, the authors found a significant reduction in the psychic factor of the Hamilton Anxiety Scale (HAMA) with diazepam and valerian. Another RCT 17 compared 120 mg of kava (LI 150), 600 mg of valerian (LI 156), and placebo taken daily for seven days in relieving physiologic measures of stress induced under laboratory conditions in 54 healthy volunteers. Valerian and kava, but not placebo, significantly decreased systolic blood pressure responsivity, heart rate reaction, and self-reported stress. (NOTE: “LI 156” is an identification number referring to the specific herb and the manufacturer; in this case, Lichtwer Pharma UK, Ltd.) Contraindications, Adverse Effects, Interactions References Valerian is listed by the U.S. Food and Drug Administration as a food supplement and is, therefore, not subject to regulatory control beyond labeling requirements. According to Commission E monographs, 18 there are no contraindications to valerian. Reported adverse effects of valerian are rare. In a 14-day, multiple-dose study 6 of 16 patients, there were only two adverse events (migraine and gastrointestinal effects) in patients receiving valerian compared with 18 events in patients receiving placebo. A randomized, controlled, double-blind study 19 of 102 subjects evaluated reaction time, alertness, and concentration the morning after using valerian root extract (600 mg, LI 156) and found no negative effect in single- or repeated-dose administrations of valerian. Only one adverse effect (dizziness) was attributed to the valerian extract. 
No evidence of potentiation of valerian effects by concomitant ingestion of alcohol has been found in animal and human studies, but the combination should still be avoided. 1 , 20 Valerian may potentiate the sedative effects of barbiturates, anesthetics, and other central nervous system depressants. 21 One case report 22 suggests that sudden cessation of long-term high dose valerian therapy (530 mg to 2 g, five times daily) may result in withdrawal symptoms similar to those occurring with benzodiazepine use. Perhaps because of the poorly defined effects of valerian on GABA neurotransmission, valerian appears to attenuate benzodiazepine withdrawal symptoms in animals and humans. 23 , 24 Dosage References Valerian is a safe herbal choice for the treatment of mild insomnia and has good tolerability. Most studies suggest that it is more effective when used continuously rather than as an acute sleep aid; more rigorous studies are needed to confirm these results. A potential advantage of valerian over benzodiazepines is the lack of sleepiness on awakening when used at the recommended dosages. Valerian also may be helpful in weaning patients with insomnia from benzodiazepines. The use of valerian as an anxiolytic requires further study. Long-term safety studies are lacking. Table 1 discusses the efficacy, safety, tolerability, and cost of valerian.
i don't know
Which acid is contained in rhubarb leaves, making them poisonous to eat?
Are Rhubarb Leaves Poisonous? Make a Natural Insecticide with the Leaves. Many visitors ask, "Can rhubarb leaves be composted?" It would stand to reason that if the leaves are poisonous, then adding them to compost would be a concern. However, since the oxalic acid is broken down, diluted, and pH-balanced quite quickly, this is not a concern. Since humans and animals do not normally ingest matter from a compost, rhubarb leaves should be safe to add to the compost. Interestingly, the leaves of rhubarb can be used to make a natural insecticide. If you have a large rhubarb patch, you may be interested in making this natural insecticide using the leaves after picking your rhubarb.
A Recipe for Natural Insecticide Using Rhubarb Leaves:
1. Boil 500 grams of rhubarb leaves in a few pints of water for about 20 minutes.
2. Allow the leaf mixture to cool.
3. Strain the liquid into a suitable childproof container.
4. Add a tiny bit of dish detergent or soap flakes (not laundry detergent).
5. Using a spray bottle, spray on plant leaves to kill off pests such as aphids, spider mites, and June bugs, and to control fungus diseases.
NOTE: DO NOT spray this product on ANYTHING edible. Rhubarb leaves contain high amounts of oxalic acid, are poisonous, and could cause death.
Oxalic acid
What kind of an organism causes a 'rust' attack on plants?
Chop and Drop Rhubarb Mulch: Rhubarb leaves are poisonous. (National Gardening Association) NEILMUIR1 Aug 15, 2013 10:39 PM CST. All parts of rhubarb are toxic with oxalic acid; however, it is the green leaves that should never be eaten. The leaves make a wonderful organic insecticide that wipes out aphids as it suffocates them. Please see here: http://www.wikihow.com/Make-Rhubarb-Garden-Spray The poison in rhubarb leaves is as follows: "Oxalates are contained in all parts of rhubarb plants, especially in the green leaves. There is some evidence that anthraquinone glycosides are also present and may be partly responsible. It is not clear as to the exact source of poisoning from rhubarb, possibly a result of both compounds. The stalks contain low levels of oxalates, so this does not cause problems." However, I love rhubarb to eat normally, especially with a sprinkle of ginger and nutmeg. Nutmeg is known to reduce any oxalic acid in spinach and rhubarb. Don't stop eating the stuff, as it is wonderful, and the leaves are used by some gardeners to put under potatoes, as they claim it stops potato blight. That has yet to be proved, but they swear by it. Kindest regards from across the Atlantic. Neil.
i don't know
Which is the dominant generation in the ferns?
moss & fern notes b1
Mosses:
- The haploid gametophyte stage contains half the chromosome number & produces gametes (egg & sperm)
- Gametophyte stage is dominant in the moss's life cycle
- Gametophytes are photosynthetic & have root-like rhizoids
- The diploid sporophyte has a complete set of chromosomes & produces spores by meiosis
- Sporophyte of a moss is smaller than, & attached to, the gametophyte
- Sporophytes lack chlorophyll & depend on the photosynthetic gametophyte for food
- Sporophyte has a long, slender stalk topped with a capsule
- Capsule forms haploid (n) spores
Sexual reproduction in moss:
- Mosses produce 2 kinds of gametes (egg & sperm)
- Gametes of bryophytes are surrounded by a jacket of sterile cells that keep the cells from drying out
- Female gametes, or eggs, are larger, with more cytoplasm, & are immobile
- Flagellated sperm must swim to the egg through water droplets for fertilization
- Moss gametes form in separate reproductive structures on the gametophyte: the archegonium & the antheridium
- Each archegonium forms one egg, but each antheridium forms many sperm
- Fertilization can occur only after rain, when the gametophyte is covered with water
- Sperm swim to the egg by following a chemical trail released by the egg
- A zygote (fertilized egg) forms, undergoes mitosis, and becomes a sporophyte
- Cells inside the mature sporophyte capsule undergo meiosis and form haploid spores
- Haploid spores germinate into juvenile plants called protonema, which begin the gametophyte generation
- Spores are carried by wind & sprout on moist soil, forming a new gametophyte
Asexual reproduction in mosses:
- Asexual reproduction in moss may occur by fragmentation or gemmae
- Pieces of a gametophyte can break off & form new moss plants (fragmentation)
- Gemmae are tiny, cup-shaped structures (gemmae cups) on the gametophytes
- Raindrops separate gemmae from the parent plant so they can spread & form new gametophytes
- Mosses serve as pioneer plants on bare rock or ground & help prevent erosion
Ferns:
- Alternate between a dominant sporophyte stage & a gametophyte stage
- Sporophyte stage has true roots, stems, & leaves
- Produce spores on the underside of the leaves
- Leaves are called fronds & are attached by a stem-like petiole
- Spores are produced on the underside of fronds in clusters of sporangia called sori
- Spores are produced by meiosis, spread by wind, & germinate on moist soil to form a prothallus
- Prothallus begins the gametophyte stage
- Mature gametophytes are small, heart-shaped structures that live only a short time
- Male antheridia & female archegonia grow on the prothalli
- Sperm must swim to the egg to fertilize it, & the developing embryo becomes the sporophyte generation
- Newly forming fronds are called fiddleheads & uncurl as they grow
- Uses for ferns:
Sporophyte
What is the name of the lustrous substance that forms pearl and mother-of-pearl?
Ferns and Other Seedless Vascular Plants. Ferns, club mosses, horsetails, and whisk ferns are seedless vascular plants that reproduce with spores and are found in moist environments. Learning Objective: Identify types of seedless vascular plants. Key Points:
- Club mosses, which are the earliest form of seedless vascular plants, are lycophytes that contain a stem and microphylls.
- Horsetails are often found in marshes and are characterized by jointed stems with whorled leaves.
- Photosynthesis occurs in the stems of whisk ferns, which lack roots and leaves.
- Most ferns have branching roots and form large compound leaves, or fronds, that perform photosynthesis and carry the reproductive organs of the plant.
Water is required for fertilization of seedless vascular plants; most favor a moist environment. Modern-day seedless tracheophytes include club mosses, horsetails, ferns, and whisk ferns. Phylum Lycopodiophyta: Club Mosses. The club mosses, or phylum Lycopodiophyta, are the earliest group of seedless vascular plants. They dominated the landscape of the Carboniferous, growing into tall trees and forming large swamp forests. Today's club mosses are diminutive, evergreen plants consisting of a stem (which may be branched) and microphylls (leaves with single unbranched veins). The phylum Lycopodiophyta consists of close to 1,200 species, including the quillworts (Isoetales), the club mosses (Lycopodiales), and spike mosses (Selaginellales), none of which are true mosses or bryophytes. Lycophytes follow the pattern of alternation of generations seen in the bryophytes, except that the sporophyte is the major stage of the life cycle. The gametophytes do not depend on the sporophyte for nutrients. Some gametophytes develop underground and form mycorrhizal associations with fungi. In club mosses, the sporophyte gives rise to sporophylls arranged in strobili, cone-like structures that give the class its name. Lycophytes can be homosporous or heterosporous. In club mosses such as Lycopodium clavatum, sporangia are arranged in clusters called strobili. Phylum Monilophyta: Class Equisetopsida (Horsetails). Horsetails, whisk ferns, and ferns belong to the phylum Monilophyta, with horsetails placed in the Class Equisetopsida. The single genus Equisetum is the survivor of a large group of plants known as Arthrophyta, which produced large trees and entire swamp forests in the Carboniferous. The plants are usually found in damp environments and marshes. The stem of a horsetail is characterized by the presence of joints or nodes, hence the name Arthrophyta (arthro- = "joint"; -phyta = "plant"). Leaves and branches come out as whorls from the evenly-spaced joints. The needle-shaped leaves do not contribute greatly to photosynthesis, the majority of which takes place in the green stem. Silica collects in the epidermal cells, contributing to the stiffness of horsetail plants. Underground stems known as rhizomes anchor the plants to the ground. Modern-day horsetails are homosporous and produce bisexual gametophytes.
Phylum Monilophyta: Class Psilotopsida (Whisk Ferns) While most ferns form large leaves and branching roots, the whisk ferns, Class Psilotopsida, lack both roots and leaves, which were probably lost by reduction. Photosynthesis takes place in their green stems; small yellow knobs form at the tip of the branch stem and contain the sporangia. Whisk ferns were once considered early pterophytes. However, recent comparative DNA analysis suggests that this group may have lost both vascular tissue and roots through evolution and is more closely related to ferns. Phylum Monilophyta: Class Polypodiopsida (Ferns) With their large fronds, ferns are the most-readily recognizable seedless vascular plants. They are considered the most-advanced seedless vascular plants and display characteristics commonly observed in seed plants. More than 20,000 species of ferns live in environments ranging from the tropics to temperate forests. Although some species survive in dry environments, most ferns are restricted to moist, shaded places. Ferns made their appearance in the fossil record during the Devonian period and expanded during the Carboniferous. The dominant stage of the life cycle of a fern is the sporophyte, which consists of large compound leaves called fronds. Fronds fulfill a double role: they are photosynthetic organs that also carry reproductive organs. The stem may be buried underground as a rhizome, from which adventitious roots grow to absorb water and nutrients from the soil, or it may grow above ground as a trunk in tree ferns. Adventitious organs are those that grow in unusual places, such as roots growing from the side of a stem. Most ferns produce the same type of spores and are, therefore, homosporous. The diploid sporophyte is the most conspicuous stage of the life cycle. On the underside of its mature fronds, sori (singular, sorus) form as small clusters where sporangia develop. Inside the sori, spores are produced by meiosis and released into the air. Those that land on a suitable substrate germinate and form a heart-shaped gametophyte, which is attached to the ground by thin filamentous rhizoids. The inconspicuous gametophyte harbors gametangia of both sexes. Flagellated sperm are released and swim across a wet surface to the egg, which is then fertilized. The newly-formed zygote grows into a sporophyte that emerges from the gametophyte, growing by mitosis into the next-generation sporophyte.
i don't know
What is the name of the so-called 'first-bird'?
Archaeopteryx (from CreationWiki, the encyclopedia of creation science). Binomial name: Archaeopteryx lithographica. Archaeopteryx is an extinct bird that evolutionists argue possesses some reptilian-like features, causing it to be classified as an evolutionary transitional form, and is considered the first of the so-called feathered dinosaurs. It has been associated geologically with the late Jurassic and dated by radiometric dating methods at 150 million years. According to the U.S. National Park Service (Dinosaur National Monument): "Fossils of Archaeopteryx, a little animal that lived in the middle of dinosaur times, do show traces of feathers, so it has often been called the first bird. But the skeleton of Archaeopteryx looks almost exactly like that of a small meat-eating dinosaur, right down to its tiny sharp teeth. So what was it--a bird or a dinosaur? Some scientists think that Archaeopteryx was both: a warm-blooded, feathered dinosaur that became the ancestor of the birds. [1]" Morphology Archaeopteryx was a fully flying and perching bird (though it has an unfused spine, no bill, a reptilian skull, adult teeth, a reptilian snout and a bony tail, features seen in no modern bird). Jonathan Sarfati speaks to its bird morphology: "Archaeopteryx had fully-formed flying feathers (including asymmetric vanes and ventral, reinforcing furrows as in modern flying birds), the classical elliptical wings of modern woodland birds, and a large wishbone for attachment of muscles responsible for the down stroke of the wings. Its brain was essentially that of a flying bird, with a large cerebellum and visual cortex. The fact that it had teeth is irrelevant to its alleged transitional status—a number of extinct birds had teeth, while many reptiles do not. Furthermore, like other birds, both its maxilla (upper jaw) and mandible (lower jaw) moved. In most vertebrates, including reptiles, only the mandible moves. Finally, Archaeopteryx skeletons had pneumatized vertebrae and pelvis. This indicates the presence of both a cervical and abdominal air sac, i.e., at least two of the five sacs present in modern birds. This in turn indicates that the unique avian lung design was already present in what most evolutionists claim is the earliest bird. [2]" Avian features Feathers are present. No other modern animals except birds have feathers. Archaeopteryx had an opposable hallux (big toe). It is a character of birds and not dinosaurs. A reversed toe is, however, found in theropod dinosaurs and some other dinosaurs. Furcula (wishbone) formed of two clavicles fused together in the midline. Pubis elongate and directed backwards. Bones are pneumatic. Premaxilla and maxilla are not horn-covered (or bills are not present). The trunk vertebrae are not fused, but in other birds they are always fused. The neck attaches to the skull from the rear as in dinosaurs, not from below as in modern birds. Archaeopteryx had a long bony tail. Archaeopteryx had teeth. The nasal openings are far forward and are separated from the eye by a large preorbital fenestra (hole). This is typical of reptiles, but not of birds. The fenestra, when present in birds, is greatly reduced and is involved in prokinesis (movement of the beak). Recent discoveries seem to have shown that there are enough similarities between Archaeopteryx and Dromaeosaur that they can be considered varieties of the same created kind. This includes evidence from Dromaeosaur's feathers that it could fly.
Archaeopteryx is dated as 20 million years older than Dromaeosaur. Archaeopteryx could not have evolved from Dromaeosaur. In fact, Archaeopteryx is older than most of its alleged ancestors, which is a BIG problem for evolutionists, assuming total and complete replacement (thus extinction) of the original species. [Reference needed]
Archaeopteryx
What is the Latin word for 'liquid' which we use to mean the fluid produced by the tree Ficus elastica?
New contender for first bird (Nature News & Comment): Feathered creature shakes up avian family tree. Artist's impression by Masato Hattori: The half-metre tall Aurornis xui, which lived in China some 150 million to 160 million years ago, is believed to be the earliest known member of the bird family tree. A Jurassic fossil that had been languishing in the archives of a Chinese museum may qualify as the first known bird, researchers say. If they are right, it could mean that flight evolved in dinosaurs only once, in the lineage that led to modern birds. The single specimen of Aurornis xui was unearthed by a farmer in China's Liaoning Province and had been unidentified until palaeontologist Pascal Godefroit found it last year in the museum at the Fossil and Geology Park in Yizhou. The specimen measures about half a metre from the tip of its beak to the end of its tail. The feathered dinosaur, which lived some time between 150 million and 160 million years ago, had small, sharp teeth. It also had long forelimbs that presumably helped it to glide through Jurassic forests. "In my opinion, it's a bird," says Godefroit, who is at the Royal Belgian Institute of Natural Sciences in Brussels. "But these sorts of hypotheses are very controversial. We're at the origins of a group. The differences between birds and [non-avian] dinosaurs are very thin." Godefroit and his colleagues describe the fossil in a paper published on Nature's website today [1]. Godefroit says that Aurornis probably couldn't fly, but that it's hard to be sure because the feathers of the fossil are not well-preserved. Instead, he says, it probably used its wings to glide from tree to tree. But, Godefroit says, several features, including its hip bones, clearly mark it out as a relative of modern birds. (Image: Thierry Hubin/IRSNB. The Aurornis specimen had lain unidentified in a Chinese museum's archives until it was found by a palaeontologist last year.) Evolutionary flight path The once sharp line between dinosaurs and birds has become blurrier in recent years as new feathered fossils have surfaced in China. Godefroit sees a clear continuum from Aurornis to the more advanced Archaeopteryx, whose own place on the avian family tree has long been a matter of controversy. Godefroit and his colleagues contend that Aurornis is the oldest known member of the Avialae, the group that includes every animal that is more closely related to modern birds than to non-avian dinosaurs such as Velociraptor. With Aurornis rooted at the base of the avian tree, the researchers place Archaeopteryx further up the trunk, firmly within the Avialae lineage, and not with the non-avian dinosaurs as other researchers recently suggested [2]. Godefroit notes that putting Archaeopteryx back within the bird lineage means that powered flight need have evolved only once among birds and dinosaurs. If Archaeopteryx, with its relatively well-developed wings, was more closely related to Velociraptor than to birds, powered flight would have had to evolve twice. Not everyone is convinced of Aurornis's primacy. Luis Chiappe, director of the Dinosaur Institute at the Natural History Museum of Los Angeles in California, believes that Archaeopteryx is still the oldest known creature that deserves the title of 'bird'.
Aurornis, he says, "is something that's very close to the origin of birds, but it's not a bird". But, he adds, it is a "great, interesting specimen that pushes our understanding of the evolution of birds back another 10 million years". Godefroit says that institutions such as the museum in Yizhou hold hundreds of yet-to-be-described specimens that could further illuminate the picture of avian evolution. "The biodiversity of these small, bird-like dinosaurs was incredible," he says.
i don't know
What is the main use of the tree Citrus bergamia?
Bergamot oil (Citrus bergamia) - information on the origin, source, extraction method, chemical composition, therapeutic properties and uses. This fresh-smelling essential oil is a favorite in aromatherapy and is great for creating a more relaxed and happy feeling, relieving urinary tract infections, boosting the liver, spleen and stomach, while fighting oily skin, acne, psoriasis, eczema, as well as cold sores. Oil properties The scent of the oil is basically citrus, yet fruity and sweet, with a warm spicy floral quality, and is reminiscent of neroli as well as lavender oil. The color ranges from green to greenish-yellow and the oil has a watery viscosity. Origin of bergamot oil This tree is native to South East Asia, but was introduced to Europe, and particularly Italy, and is also found in the Ivory Coast, Morocco, Tunisia and Algeria. Bergamot oil is made from a tree that can grow up to four meters high, with star-shaped flowers and smooth leaves, bearing citrus fruit resembling a cross between an orange and a grapefruit, but in a pear shape. The fruit ripens from green to yellow. The oil is one of the most widely used in the perfumery and toiletry industry and forms, together with neroli and lavender, the main ingredient for the classical 4711 Eau-de-cologne fragrance. It is used to flavor Earl Grey tea. The name is derived from the city Bergamo in Lombardy, Italy, where the oil was first sold. Summary When you are looking for an oil to help with depression, SAD (Seasonal Affective Disorder) or generally feeling just a bit off, lacking in self-confidence or feeling shy, then consider bergamot oil. It also has superb antiseptic qualities that are useful for skin complaints, such as acne, oily skin conditions, eczema and psoriasis, and can also be used on cold sores, chicken pox and wounds. It has a powerful effect on stimulating the liver, stomach and spleen and has a superb antiseptic effect on urinary tract infections and inflammations such as cystitis. Burners and vaporizers In vapor therapy, bergamot oil can be used for depression, feeling fed-up, respiratory problems, colds and flu, PMS and SAD. Blended massage oil or in the bath It can be used in a blended massage oil, or diluted in a bath, to assist with stress, tension, SAD, PMS, skin problems, compulsive eating, postnatal depression, colds and flu, anxiety, depression, feeling fed-up and anorexia nervosa. Blended in base cream As a constituent in a blended base cream, bergamot oil can be used for wounds and cuts, psoriasis, oily skin, scabies, eczema, acne, cold sores as well as chicken pox. Bergamot blends well with Although essential oils blend well with one another, bergamot oil goes particularly well with other essential oils such as black pepper, clary sage, cypress, frankincense, geranium, jasmine, mandarin, nutmeg, orange, rosemary, sandalwood, vetiver and ylang-ylang.
Perfume
Which physician developed a type of remedy involving wild flowers?
Bergamot (Citrus bergamia) - Akin's Natural Foods Bergamot (Citrus bergamia) Related Terms 4-4'-5'-Trimethylangelicin, 5,7-dimethoxycoumarin, 5-gernaoxy-psoralen, 5-geranoxypsoralen, 5-methoxypsoralen, 5'-trimethylazapsoralen, 6,4,4'-trimethylangelicin, 8-methoxypsoralen, aceite de bergamota (Spanish), ba gan meng (Chinese), bei jia mao cheng (Chinese), bei jia mi gan (Chinese), bergamot fruit, bergamot orange, bergamota (Portuguese), bergamote (French), bergamotier (French), bergamotorange (Danish), bergamotoranje (Dutch), bergamotta (Italian), Bergamotte (German), Bergamottenbaum (German), Bergamottenzitrone (German), bergamotti (Finnish), bergamottier (French), bergamottihedelmae (Finnish), bergamottin, bergamotto (Italian), bergamotto bigarade orange, bergapten (5-methoxypsoralen [5-MOP]), berugamotto (Japanese), bey armudu (Turkish), C-glucosides, chrysoeriol (7-O-neohesperidoside), chrysoeriol (7-O-neohesperidoside-4'-O-glucoside), citron�k bergamot (Czech), citropten, Citrus aurantium L. ssp. bergamia, Citrus aurantium L. subsp. bergamia (Risso & Poit.) Wight & Arn. ex Engl., Citrus aurantium L. var. bergamia Loisel., Citrus bergamia, Citrus bergamia Risso, Citrus bergamia Risso et Poiteau, coumarins, eriodictyol, essential oil, flavonoids, fragrant balm, hesperetin, huile de bergamote (French), isovitexin, laym�n ad�ly� barnat� (Arabic), limettier bergamotte (French), lucenin-2, monosaccharides, monoterpene hydrocarbons, naringenin, naringin, neoeriocitrin, neohesperidin, O-glycosides, oleum bergamotte, oligosaccharides, oranger bergamotte (French), orientin 4'-methyl ether, pectins, polymethoxylated flavones, poirier bergamotte (French), psoralen, rhoifolin (4'-O-glucoside), Rutaceae (family), scoparin and orientin (4'-methyl ether), stellarin-2, Strauchorange (German), sweet orange, terpenes, xiang ning meng (Chinese). Note: This monograph does not cover the North American plant bee balm, which is part of the family Lamiaceae, genus Monarda. Sometimes Monarda species are called bergamot. Background Bergamot orange trees, indigenous to Calabria, Italy, are part of the Rutaceae family and Citrus genus. The peel of the pear-shaped fruit contains essential oils and other bioactive constituents. Bergamot juice is used for nutritional purposes. The bergamot orange is unrelated to North American herbs also known as bergamot, which belong to the genus Monarda (bee balm or Oswego tea). This bottom line exclusively discusses bergamot orange. The essential oil of bergamot produces the pleasing odor that made it popular in cosmetics and aromatherapy. Because bergamot may cause an adverse reaction to sunlight or ultraviolet light, its usefulness in substances that are applied to the skin is limited. However, research continues on its potential beneficial effects for the skin. Research is also continuing on the antibacterial, antifungal, antioxidant, and neuroprotective properties of constituents in bergamot essential oil. Tradition / Theory The below uses are based on tradition, scientific theories, or limited research. They often have not been thoroughly tested in humans, and safety and effectiveness have not always been proven. Some of these conditions are potentially serious, and should be evaluated by a qualified healthcare provider. There may be other proposed uses that are not listed below. 
Alzheimer's disease, amyotrophic lateral sclerosis (ALS), analgesic (pain reliever), antibacterial/antifungal, anti-inflammatory, antioxidant, cancer, cancer pain, cosmetic uses (sun tanning), cystic fibrosis, flavoring agent, fragrance (perfumes), gum disease, hair loss, heart disease, Huntington's chorea/disease, hyperlipidemia (high cholesterol), immune suppressant, insecticide (lice), liver metabolic function (CYP450), malaria, memory, mood disorders, neurologic disorders, neuroprotection (protects against diseases of the nervous system), schizophrenia, skin aging, skin damage caused by the sun, stomach disorders, vitiligo (loss of pigment in the skin). Dosing Adults (18 years and older) To treat psoriasis (inflammatory skin condition), bergamot oil has been applied to the skin 30 minutes before ultraviolet B (UVB) therapy, three times weekly. Children (under 18 years old) There is no proven safe or effective dose for bergamot in children. Safety The U.S. Food and Drug Administration does not strictly regulate herbs and supplements. There is no guarantee of strength, purity or safety of products, and effects may vary. You should always read product labels. If you have a medical condition, or are taking other drugs, herbs, or supplements, you should speak with a qualified healthcare provider before starting a new therapy. Consult a healthcare provider immediately if you experience side effects. Allergies Avoid in individuals with known allergy or sensitivity to bergamot, its parts, or plants in the Rutaceae family. When applied to the skin, bergamot may cause sensitivity to sunlight or ultraviolet light. Side Effects and Warnings Bergamot oil appears to be safe when consumed in amounts usually found in dietary sources, or when used as part of aromatherapy in people who are not allergic or sensitive to bergamot oil. Bergamot essential oils have been studied in humans and may be toxic if taken by mouth. Vapors released during aromatherapy may irritate the eyes. Applying bergamot to the skin may cause erythema (redness), changes in skin coloration, sensitivity to sunlight or ultraviolet light, and skin irritation. Use cautiously with other products that cause sensitivity to light. Use cautiously in people with skin conditions. Excessive consumption of bergamot-containing Earl Grey tea was linked to muscle cramps, involuntary muscle contractions, abnormal sensations, and blurred vision. Bergamot may cause low blood pressure. Caution is advised in people taking agents that lower blood pressure. Bergamot may interfere with the way the body processes certain drugs, herbs, and supplements using the liver's cytochrome P450 enzyme system. As a result, the levels of these substances may be altered in the blood and may cause potentially serious adverse reactions. People using any medications should check the package insert and speak with a qualified healthcare professional, including a pharmacist, about possible interactions. Bergamot may lower blood sugar levels. Caution is advised when using drugs, herbs or supplements that may also lower blood sugar. Blood glucose levels may require monitoring, and doses may need adjustment. Use cautiously in women who are pregnant or breastfeeding, due to a lack of sufficient data. Use cautiously in people taking drugs for anxiety. Drowsiness or sedation may occur. Use caution if driving or operating heavy machinery. Avoid in people with a known allergy or sensitivity to bergamot, its parts, or plants of the family Rutaceae. 
Pregnancy and Breastfeeding Interactions Interactions with Drugs Bergamot may lower blood sugar levels. Caution is advised when using medications that may also lower blood sugar. People taking drugs for diabetes by mouth or insulin should be monitored closely by a qualified healthcare professional, including a pharmacist. Medication adjustments may be necessary. Bergamot may cause low blood pressure. Caution is advised in people taking agents that lower blood pressure. Bergamot may interfere with the way the body processes certain drugs using the liver's cytochrome P450 enzyme system. As a result, the levels of these drugs may be altered in the blood and may cause potentially serious adverse reactions. People using any medications should check the package insert and speak with a qualified healthcare professional, including a pharmacist, about possible interactions. Bergamot may also interact with agents for anxiety, antibiotics, anticancer drugs, antifungals, anti-inflammatory agents, cardiovascular drugs, cholesterol-lowering agents, drugs that affect the immune system, felodipine (a drug that lowers blood pressure), drugs that increase sensitivity to light, or pain relievers. Interactions with Herbs and Dietary Supplements Bergamot may lower blood sugar levels. Caution is advised when using herbs or supplements that may also lower blood sugar. Blood glucose levels may require monitoring, and doses may need adjustment. Bergamot may cause low blood pressure. Caution is advised in people taking herbs or supplements that lower blood pressure. Bergamot may interfere with the way the body processes certain herbs or supplements using the liver's cytochrome P450 enzyme system. As a result, the levels of these herbs or supplements may be altered in the blood and may cause potentially serious adverse reactions. It may also alter the effects that other herbs or supplements possibly have on the P450 system. Bergamot may also interact with antianxiety herbs and supplements, antibacterials, anticancer herbs and supplements, antifungals, anti-inflammatory herbs and supplements, antioxidants, cardiovascular herbs and supplements, cholesterol-lowering herbs and supplements, herbs and supplements that affect the immune system, herbs and supplements that increase sensitivity to light, lavender, orange, pain relievers, or prebiotics.
i don't know
The best longbows were constructed from which wood?
Bow woods (from "Bow building for poor people and apartment dwellers," by Sam Harper). There are bow woods I've tried and bow woods I haven't tried. This is a list of bow woods (and grass in the case of bamboo) I've tried or heard a lot about. I'm only considering limb wood material, not handle wood material. Don't limit yourself, though. People are trying new things all the time. Red oak The great thing about red oak is that it's easy to find and it's cheap. It's ideal for somebody who is just starting out. Just about every Home Depot or Lowes I've been to has it. They sell it in the perfect size, too. It comes in a 72" long board they call a 1x2, which is actually 3/4 x 1-1/2. Red oak is very porous, and most of the pores are in the early growth rings, so it's important to find a piece with thick late growth rings or else it will seem brittle. Those boards will feel heavier. If you find a board with very straight grain, you don't necessarily need to back it, but it's a good idea to back any board bow. Bamboo The wonderful thing about bamboo is that you're guaranteed to have straight grain. Bamboo bows rarely fail if done right. Some people call bamboo nature's fiberglass. It's great bow material, and it's cheap. Bamboo comes in different forms: raw bamboo and bamboo flooring boards. If you get the flooring boards, be sure to get vertical grain. The horizontal grain will come apart. Bamboo flooring makes a great bow if backed with raw bamboo. All bamboo bows are my personal favourite. They're quieter than any other bow I've made, and there's just something about the way they feel when you draw them and shoot them that's hard to describe. There's a smoothness about them. The only bad thing I have to say about bamboo bows is that they take a lot of set. It's a good idea to put a lot of reflex in the bow at glue-up if you want to have any left after tillering. Vertical grain flooring boards can also be cut into laminations. People sometimes refer to it as "action boo." It's ideal in the core of a fiberglass bow, because it's light and strong. Raw bamboo is just a slice of a solid piece of bamboo with the nodes still intact. It makes a great backing to almost any kind of bow. Bamboo is very strong in tensile strength, so it needs to be very thin to avoid overpowering the belly wood. Some woods that are good with raw bamboo backing include osage, yew, ipe, and bamboo, because they can withstand the compression forces. Hickory Hickory is popular for backing bows. Like bamboo, it's very strong in tensile strength, so it needs to be thin. It's not quite as strong in compression strength. It makes a good self bow, too. I haven't made a self bow out of it, but from what I've read, it's almost impossible to break. Some people question its durability, though. Apparently, it takes a lot of set over time and becomes sluggish. I think this may be due to the fact that hickory sucks up a lot of moisture from the atmosphere. It needs to be a tad dryer than other woods to get the best performance out of it. The only problem with using it to make a self bow is that it's almost impossible to get the bark off of it. I've heard several different methods, the most popular being to put it in a hot shower for 20 or 30 minutes before trying to get the bark off with a chisel. Osage Some people consider osage (bois d'arc) to be a weed, but to those of us who make bows, it's gold.
I love everything about osage except for the fact that it's hard to find a straight piece of it without knots. It smells good, it looks good, and it's the ideal bow wood. It lasts forever without taking a set, and it's very strong in compression strength. I can't say enough about osage. I just love it. If only it were easier to come by! Ipe It's pronounced EE-pay. It's the same thing as Brazilian walnut. It's very strong, so you can make thinner and lighter limbs, resulting in a faster bow. It goes well with a bamboo backing. Ipe is used in decks, because it's so resistant to decay. That makes it a good bow wood, too. Some people have allergic reactions to it, so beware. Ash I hate ash. I don't know why anybody bothers with it. It breaks too easily. Cedar (or Juniper) There are different kinds of cedar. That cedar you find at Lowes and Home Depot is pitiful for making bows. It's way too brittle. Juniper (or aromatic cedar) is a lot better, but it's brittle as well. It's great under clear fiberglass, though, because some pieces of it are just beautiful. It smells good, too. Poplar It's tempting to try poplar since it's so cheap and available, but it's too soft to make a decent bow out of. Black Locust Okay, I haven't actually tried this one, but I've heard good things about it. Yew I haven't tried this one either. Yew is supposed to be ideal for making longbows, because it's so strong in compression that it's one of the few woods that can withstand the D-shaped cross section of an English longbow. The problem with it is that it's too expensive. Black walnut I've only used walnut in a fiberglass bow. I think it's beautiful. I didn't like the way it smelled when I first started using it, but the smell grew on me. I've heard it works well in non-fiberglass bows, but I haven't tried it.
Yew
How many species of domestic dog are found today?
Hurstwic: Viking Age Arms and Armor - Viking Bow and Arrow. Bows were used primarily for hunting, but they were also used in battle in situations where men desired to target their opponents from a long distance away. In mass battles, archers opened the action before the opposing sides closed to fight at close range. Perhaps the most notable use of a bow in the sagas is Gunnar's single-handed defense at his home, Hlíðarendi (right), against an attack led by Gizurr hvíti, told in chapter 77 of Brennu-Njáls saga. From a loft in the upper level of the house, Gunnar used his bow to kill or wound ten of his opponents before his bow string was cut by one of the attackers. In the battle on the ice at Vigrafjörður described in chapter 45 of Eyrbyggja saga, the sons of Þorbrandr took a defensive position on a rock above the ice, where they had good footing. Steinþór and his men had a hard time on the ice against such a strong defense. Two Norwegians with Steinþór ran a short distance across the ice to where they could fire arrows at those on the rocks, making things very dangerous for Þorbrand's sons. The fjord is shown as it looks today in the photograph to the left. The saga says that Snorri goði's shepherd watched the battle from the rock cliff at the extreme left edge of the photo, then ran back to Helgafell to get help. Bows were used in nautical battles. Once engaged, men on opposing ships fired arrows and threw missiles from one ship to the other, attempting to clear the decks of men so that the ship could be taken. Chapters 106-111 of Ólafs saga Tryggvasonar describe the sea battle at Svölðr in the year 1000, in which King Ólafr was killed. King Ólafr was on board his ship, Ormurinn langi (The Long Serpent), shown to the right in a 19th century painting by Sinding. The saga says the king shot most often with his bow, but sometimes threw spears. With him was Einar, described as the best shot anywhere. Einar fired two close shots at Earl Eiríkr, before Eiríkr's bowman, Finn, fired an arrow that hit Einar's bow. With the next shot, Einar's bow broke. King Olaf asked, "What cracked with such a loud noise?" Einar replied, "Norway out of your hands, sire." As with other weapons, bows were used to threaten. Before the battle at Svölðr, Úlfr, one of King Ólaf's men on board Ormurinn langi, questioned the king's command, implying cowardice on the part of the king. Ólafr fitted an arrow to the bow in his hand and aimed it at Úlfr, who said, "Shoot in another direction, king, where the need is greater." A reproduction bow is shown to the left. Bows were made from the wood of a yew, ash, or elm. Typically, they were 1.6 to 2m (60 to 80 in) long. A complete bow found at Hedeby was made of yew and was 192cm long. Arrow heads were made in a variety of shapes and sizes. Some historic arrowheads from the 10th century found in Norway are shown in the sketch to the left. The shortest is 12cm (5in) long. Arrowheads are not commonly found in the graves of warriors, suggesting that bows were not thought of as tools for warriors. Most arrowheads are found at house sites, which might suggest that bows were thought of as domestic tools, used for hunting. The forked arrowhead shown to the right was found at a house site in Reykjavík in Iceland. Arrowheads in good condition were found at several Icelandic house sites dating from the Viking age, and they range in length from 10-15cm (4-6in). Arrowheads had a tang which was driven into a hole in the shaft and secured with cordage and pitch.
The tang is clearly visible in the photo to the right. Although evidence is very slight, shafts were probably 70 to 80cm (28-32in) long, and perhaps 10mm (3/8in) in diameter. Shafts were probably made out of hardwood, in order to hold the tang. The estimated draw weight of one 10th century bow is 90lbs (40kgf), and the effective range of this weapon was about 200m (650ft). However, medieval Icelandic law gives a different estimate. The distance of the flight of an arrow, ördrag (bowshot), was a unit of measure commonly used in Icelandic law. For example, Grágás, the medieval Icelandic law book, requires that the court empowered to confiscate an outlaw's property be held within a bowshot of the outlaw's home (K 62). A later addition to Grágás defines the bowshot to be two hundred faðmar (about 480m). It seems likely that archers used bows with draw weights to fit their capabilities, so there must have been some variation in the draw weights of bows. After Einar's bow was broken at the battle at Svölðr, the king threw him his own bow, telling Einar to continue to shoot. Einar fitted an arrow, and unaccustomed to the king's light bow, drew the head behind the bow. "Too weak, too weak is the king's bow." He threw the bow aside and took up his sword and shield. Available evidence suggests that only longbows were used in Viking lands. However, some intriguing but speculative evidence suggests that composite recurve bows similar to those used in eastern Europe and Asia may have been used in Viking lands. A sketch of an eastern recurve bow is shown to the left, and a photo of a historical eastern recurve bow to the right. Typically, this type of bow was made from multiple materials, such as wood, sinew, and horn or bone. A recurve bow is shaped such that the tips bend away from the archer when unstrung, as is the case in the photo to the right. Both the bows to the right and the left are shown in the same orientation; when strung, the tips of the bow on the right would bend back to the left, as shown in the sketch. Bows made in this manner store more energy for a given bow length. Thus a short recurve bow has a range nearly as great as that of a longbow, offering advantages to archers in situations where the longer bow would be troublesome, such as in dense forests or on horseback. Some historical recurve bows are asymmetric, with the upper limb longer than the lower, as shown in the sketch, making them better suited for use on horseback. The Icelanders referred to these bows as húnbogi (Hunnish bows), although the only reference to them in the Icelandic saga literature appears to be as personal names (such as Húnbogi inn sterki in Laxdæla saga, and Húnbogi Þorgilsson, the father of a 12th century lawspeaker). The term does not appear in any of the Sagas of Icelanders referring to a bow of any kind. Konungs skuggsjá (The King's Mirror), a 13th century Norwegian training text, refers to a hornbogi (horn bow) as being a useful weapon for a mounted warrior, since it is easy to draw while on horseback (chapter 38). The hornbogi may refer to a recurve húnbogi, made partially of horn. The trail of evidence that suggests the use of recurve bows begins in Brennu-Njáls saga, chapter 63. Gunnar and his brothers Kolskeggr and Hjörtr were ambushed by Starkaðr and a much larger band of men. The brothers were able to kill fourteen of the ambushers, while on their side, only Hjörtr was killed.
The battle took place on the shore of the Eystri-Rangá river, near the large stone known as Gunnarssteinn, shown in the foreground of the photo to the right. In the 19th century, erosion brought to light several graves above the river, a short distance from Gunnarssteinn. In the photograph, the grave sites are on the right in the photo, on the other side of the road across from the stone. One of the graves contained a decorated ring made of bone. The ring is fairly large: about 3.8cm (1.5in) in diameter. It seems too big to be a finger ring, but too small to be a bracelet. A photo is shown to the left. Interestingly, the ring was decorated with images of harts (stags). The name Hjörtr means hart. Did the ring belong to Hjörtr? Is it Hjörtr who was buried in this grave immediately adjacent to the battle site? The evidence is not convincing, but it is an intriguing coincidence. Archaeologists have speculated further on this find, suggesting that the ring is a thumb ring of the type used by eastern European bowmen to protect their thumbs while using a bow. The draw weight of the eastern recurve bows can be substantially over 100 pounds (45kgf). These bows are drawn with the string hooked in the thumb, the strongest digit. The string rests on the ring, which protects the thumb. The use of these kinds of rings is well documented in eastern Europe and Asian lands. Combining all these speculative elements together, it has been suggested that the thumb ring indicates that Hjörtr used an eastern-style recurve bow, making it likely that his brother Gunnar, Iceland's most celebrated bowman, may have used one as well. If so, then recurve bows may have been known and used in Norse lands in the Viking era. It's certainly plausible that Icelanders and other Norse people came in contact with this kind of bow on trading voyages to eastern Europe and Asia, or in service with the Varangians in Constantinople during the Viking age. Portions of a composite bow have been found at the Viking trading town of Birka in Sweden. Some evidence contradicts this conclusion. The saga says that Hjörtr was carried home by Gunnar on his shield and buried there, rather than at the battle site. If home means Gunnar's home at Hlíðarendi (left), it is a considerable distance from the battle site where the ring and skeletal remains were found. Additionally, surviving eastern thumb rings have a different shape than the bone ring found at Gunnarssteinn, with characteristic features missing from this ring, although at least one modern archer has expressed the opinion that the artifact would serve as an archer's thumb ring. I find the evidence too scanty to support the conclusion that eastern recurve bows were used by Vikings, but perhaps more supporting evidence will come to light in the future.
i don't know
What kind of creature is a barnacle?
Barnacle (Cirripedia). Characteristics unique to the animal: latches on to hard surfaces; shell made up from plates. The barnacle is a hardy animal that is found in or very close to sea water. Although it is frequently confused with a mollusc because of its hard outer shell, it is actually a crustacean, closely related to crabs and lobsters. Barnacles are most often seen as roughly circular sessile invertebrates (which means that they cannot move on their own), and are permanently attached to the substrate they live on. In their juvenile form they are free-floating, but eventually they attach themselves to any nearby rock, shell, or other object and stay there for the rest of their lives. Their shells are composed of calcite. Barnacles are often seen on crabs, whales, boats, rocks and on the shells of sea turtles. Although some species of barnacle are parasitic, most barnacle species are harmless, because they are filter feeders and do not interfere with an animal's normal diet and do not harm the animal that they live on in any way. Many species of barnacle are so harmless that, in fact, an animal that is covered in them may not even notice! There are more than 1,000 known species of barnacle that inhabit shallow and tidal waters around the world. Although many species of barnacle are very small, some can grow to as large as 7cm, and even bigger barnacles can often be seen. Barnacles typically live for between 5 and 10 years, but some of the larger species are known to be much older. Barnacles attach themselves to animals when they are very young and in the larval stage of their lives. Once the baby barnacle has effectively glued itself to something hard, a thin layer of flesh wraps around the barnacle and an outer shell is produced. Once the barnacle has an outer shell, it is protected from the elements and all kinds of predators. As soon as the baby barnacle has fixed itself onto something, it is generally there for the rest of its life. Barnacles are filter feeders (also known as suspension feeders) that feed on food particles that they strain out of the water. The shell of the barnacle is made up of a number of plates (usually 6), with feathery leg-like appendages that draw water into its shell so that it can feed. Barnacles have numerous predators, particularly when they are babies and floating around in the water looking for something to attach themselves to. As the barnacle larvae are so tiny, they float around with the plankton in the water. Once the barnacle is older and has its tough outer shell, few predators can actually eat it. Humans are known to eat goose barnacles (the only edible species of barnacle) in parts of Europe like Spain and Portugal. Most species of barnacles are hermaphroditic, which means that they have both male and female reproductive organs. Although it is possible for barnacles to self-fertilise their eggs, it seems to be very rare, so the eggs produced by one barnacle are usually fertilised by another barnacle. It takes more than 6 months for the barnacle larvae to start developing into the hardier adult barnacles. Barnacles are thought to be one of the oldest surviving creatures on the planet as they are believed to date back millions of years. Although there will have been some adaptations, the barnacle is thought to have changed very little over that time.
Despite the rising levels of pollution and changes in the water, barnacles are thought to be one of the few animals that are not greatly affected. The barnacle slides two of its six plates across to let water in when it is feeding and then closes them again, which prevents the barnacle from being too exposed to dirty water.
Crustacean
Machiavelli used which plant's name as the title of one of his books?
Barnacles. (Photo: acorn barnacles, courtesy of the Lloyd Center for Environmental Studies.) Imagine spending most of your life standing on your head and eating with your feet! Sound like a difficult way to get through the day? Well, that's exactly how barnacles spend most of their lives. If you walk along the sea shore, you can find barnacles on almost any solid surface that gets covered by water. On rocks, dock pilings, boats, even mussels, you can find clusters of these hard, white, cone-like houses. That's where barnacles live, peeking out only when water covers them so they can filter food into their homes. This "barnacle zone" is the highest of the intertidal zones. Although they may look like mollusks with their shell-like covering, barnacles are actually crustaceans, related to lobsters, crabs and shrimp. They look like tiny shrimp in their larval stage, where they swim as members of zooplankton in the ocean. When they are ready to settle down, they search for a suitable site, pulling themselves along by the adhesive tips of their antennae. Biologists have observed barnacles in the laboratory taking as long as an hour to pick a location. In nature, barnacles may take days to find a suitable spot, investigating one area, then allowing the currents to carry them to another. After selecting a spot, the barnacle secures itself head-first to the surface with a brown glue. (This glue is so strong, the barnacle's cone base is left behind long after the creature has died. Dentists are now studying this glue for its adhesive properties.) Now the larva is ready to grow into an adult and build its tough housing. (Image from "The Intertidal Zone," courtesy of Bullfrog Films and the National Film Board of Canada.) The barnacle secretes the calcium-hard plates which totally encase it. These white cones have six nearly fitted plates that form a circle around the crustacean. Four more plates form a "door" which the barnacle can open or close, depending on the tide. When the tide goes out, the barnacle closes shop to conserve moisture. As the tide comes in, a muscle opens up these four plates, and the feathery legs of the barnacle sift the water for food. All six pairs of these feather-like feeding appendages, called cirri, are jointed and set with sensory hairs which brush through the water collecting plankton for the barnacle to eat. The legs also have gills for gas exchange.
i don't know
What is the name of the evolutionary theory suggesting that evolution has an uneven pace?
Modern Theory of Evolution There is a modern theory of evolution. There are some things we have learned over the last 150 years since Charles Darwin first proposed his theory of "descent with modification." Darwin's theory was amazingly accurate considering the state of scientific knowledge in 1859. Darwin knew nothing of DNA or genes, backbones of the modern theory of evolution. He even leaned toward Lamarckism, the belief that traits developed during our lifetime would pass on to our children. Nonetheless, the basics of Darwin's theory of evolution were exactly right and have passed every test with flying colors for 150 years: Nature's imperfect reproductive methods regularly produce mutations, so that there are always unique individuals. Individuals which, as a result of those mutations, are better adapted to their environment will have more offspring, either because they survive more often or are better able to attract mates. Those more suitable adaptations will be prone to spreading through an entire population. Over time, as those adaptations accumulate, populations are modified into new species. Given the immense amount of geologic time on this earth, this process, known as "natural selection," has produced all life on the earth from one or a few parents. This basic idea of descent with modification has been backed up on every front. Here, however, are some of the new things that we've learned over the last 150 years. DNA: The Book of Life The modern theory of evolution is able to speak much more clearly about how evolution happens due to the discovery of DNA, the genetic code that controls all natural life. Charles Darwin was able to say: Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed. (On the Origin of Species, ch. 14) Darwin had to base that on the following similarities between all living things: Common chemical composition Common germinal vesicle (this is a nucleus that is formed when a cell begins to split in two) Common cellular structure Common laws of growth and reproduction As an example of these commonalities, he cited a common reaction to poisons, so that a gall-fly's secretions create the same growth on a wild rose as they do on an oak tree (ibid.). From the modern theory of evolution we know that the similarities are far more than he could have imagined. Every living cell has a common code, the simple 4-letter code of DNA, that controls its growth and reproduction. Every human, every insect, every plant, and every bacterium consists of cells made of proteins that are coded for by DNA. All DNA is transferable. Some viruses, which are not even cells but mere snippets of DNA, have even been assimilated into human DNA during our evolutionary history (see Wikipedia, "Human Endogenous Retrovirus"). Even more interestingly, today insulin for diabetics is produced by taking human DNA and putting it in bacteria or yeast cells so that they produce the exact insulin that our bodies produce.
Here's how the International Diabetes Foundation describes it: Rather than being extracted from human pancreases, commercially available human insulin is manufactured through recombinant DNA technology, in which the gene for making human insulin is transferred into simple cells such as bacteria or baker's yeast. The insulin made by those cells is identical to insulin made by the human pancreas. (From www.idf.org) Whether you object to evolution or not, you have to admit that this method of insulin production is amazing. The modern theory of evolution takes into account the genetic code, which Darwin could have known nothing about. Punctuated Equilibrium We'll cover punctuated equilibrium as the next aspect of the modern theory of evolution because it's so well known among anti-evolutionists. Punctuated equilibrium is a theory developed in the 1970s by Stephen Jay Gould and Niles Eldredge. They publicized their theory as a drastic reformation of Darwinistic evolution, and young earth creationists loved it. They loved the description of high-speed evolution, and they loved the attack on traditional "gradualistic" Darwinism. Unfortunately, it was not as drastic a reformation as Gould and Eldredge made it out to be. Simply put, in the modern theory of evolution punctuated equilibrium (or "punk eek," as its opponents like to call it) says that species will tend to be stable for long periods, having adapted to their environment and their competition. Only when some change in the environment arises will the species begin to evolve again, and they will do so rapidly in response to new selection pressure (created by the change in environment). Thus, Gould and Eldredge named it "punctuated" (periodic points of evolution) "equilibrium" (a general stability most other times). The problem is, there's nothing really new about the idea except how far they took it. Even back in 1859, Darwin talked about evolution happening in a punctuated manner ("at long intervals"): I do believe that natural selection will always act very slowly, often only at long intervals of time, and generally on only a very few of the inhabitants of the same region at the same time. I further believe, that this very slow, intermittent action of natural selection accords perfectly well with what geology tells us of the rate and manner at which the inhabitants of this world have changed. (ibid., ch. 4, emphasis mine) Darwin's "very slow" evolution is actually at least somewhat similar to Gould and Eldredge's very fast evolution. Both are talking about change occurring over thousands of years interrupted by even longer periods of stasis. Darwin calls the intermittent spurts slow, and Gould calls them fast. This has more to do with their reference points than a difference in idea. Ten thousand years to effect a change at the species level would have been a "long interval of time" to Darwin, but it was rapid to Gould, a modern scientist whose conception of geologic time scales would have been much larger (and much more accurate) than Charles Darwin's. Gould and Eldredge did propose that evolution occurs faster than had been previously suggested, and they did suggest longer periods of stasis (stability without change). Nonetheless, punctuated equilibrium, while an addition to the modern theory of evolution, was no threat to Darwin's original theory. Gene Expression One very recent aspect of the modern theory of evolution actually gives a little credit to long-discredited Lamarckism.
It is not just our genes that control our development, but also how those genes are expressed. A recent issue of National Geographic explains: Just as you don't need different words to write different books, so you don't need new genes to make new species: You just change the order and pattern of their use. (Matt Ridley, "Modern Darwins": February, 2009; p. 70) The author provides several examples, one of which is: The pattern of gene expression that builds the bones in [a paddlefish's] fins is much the same as the one that assembles the limb in the embryo of a bird, a mammal, or any other land-living animal. The difference is only that it is switched on for a shorter time in fish. (ibid., p. 71) Matt Ridley's article is talking about DNA sequences that control the expression of specific genes; therefore, he is still talking about actual genetic changes. The modern theory of evolution is considering something still more unusual, however: Recently, however, Lamarck's name has been creeping back into the scientific literature. The reason: an explosion in the field of epigenetics, the study of changes in genetic expression that are not linked to alterations in DNA sequences. Some of these epigenetic changes can be passed on to offspring in ways that appear to violate Mendelian genetics. (Robert Koenig, "Uphill Battle to Honor Monk Who Demystified Heredity"; Science, April 7, 2000, Vol. 288, Number 5463, p. 38) This isn't quite Lamarckism, but the article is suggesting that changes to a parent's hormones can affect gene expression in offspring. In other words, changes in your health and fitness may affect the way your child's body uses the DNA you give it. Summation of the Modern Theory of Evolution Darwin's original theory is still intact. The modern theory of evolution just happens to give a lot more information on how evolution occurs. We know now that every living cell of every living creature is programmed by the genetic code contained in the DNA of the cell. All life as we know it uses DNA, and DNA of one species can be moved to the cell of any other species. We are even using this truth to get bacteria and yeast to produce insulin for humans. We know that evolution is not as constant as Darwin might have thought it was. Evolution tends to progress in surges as the environment changes, and those surges can be quite rapid from an evolutionary standpoint. We are also learning that while DNA contains genes that program the growth and development of all living things, there are other factors that influence gene expression. Even today we do not understand this fully, and the new science of epigenetics has taken on the task of learning about these factors. Overall, though, the modern theory of evolution has only established and confirmed Darwin's basic idea of descent with modification and left us marveling at his incredible insight into the progress and development of life on earth.
Punctuated equilibrium
Which step in photosynthesis is responsible for splitting water molecules?
ActionBioscience - promoting bioscience literacy. (Figure: Evolution of speciation ideas, from gradualism to punctuated equilibrium. Source: Wikimedia Commons; author: wooptoo.) Evolution of ideas on speciation Darwin's idea of evolution: it's a slow and gradual process. Darwin The beginning of Darwin's title for his epochal book is On the Origin of Species….1 The Origin, of course, was the work that convinced the thinking world that life has evolved; and its title tended forever after to equate the term evolution with the "origin of species." To Darwin: Species evolve through the development and further modifications of adaptations under the guidance of natural selection. For the most part, evolutionary change was a slow, steady and gradual affair. Species are temporary stages in the continuous evolution of life. 1930s and 1940s New thinking on species developed in the 1930s and 1940s. Geneticist Theodosius Dobzhansky2 and systematist Ernst Mayr3 developed the idea that: Species must adapt to environmental change to survive. Species are reproductive communities, with their members capable of interbreeding among themselves, and not, as the general rule, with members of other species. Evolution of new species centers on how changes occur in adaptations so that an ancestral species is split into two (occasionally more) descendant species, with interbreeding no longer possible between the members of what have evolved into descendant, or "daughter," species. When members of a species become separated by geography, they will eventually become separate species. In general, both biologists argued that physical, geographic isolation must be a precursor to speciation. In this, the notion of "allopatric speciation," environmental change might be imagined to separate previously continuous species distributions. A seaway, for example, might develop between two formerly connected areas of land; conversely, land might emerge separating once connected oceans — as it happened 2.5 million years ago when the Isthmus of Panama was completed, and the connection between the Caribbean Sea and the Pacific Ocean was finally broken. Though biologists disagree on the extent of evolutionary change — and true speciation — among marine species on either side of the Isthmus, as we shall see below, the evolutionary effects of this environmental change were actually global in extent. Thus we have two major connections drawn between environmental change and evolution by the time the centennial of Darwin's book rolled around in 1959: Darwin's image of natural selection tracking environmental change, thus modifying adaptations Dobzhansky-Mayr's picture of speciation in geographically isolated regions which may reflect a result of environmental change as well There is an ecological pattern to how species arise and die out. 1960s and 1970s Although Darwin's perspective was being redefined by new discoveries in genetics in the 1960s and 1970s, geologically-trained paleontologists were discovering repeated patterns in the history of life, supporting the validity of Dobzhansky's and Mayr's insights of the previous decade. For example, Eldredge4 and Eldredge with Stephen J. Gould5 rediscovered the pattern of remarkable species stability ("stasis") that was first discussed by paleontologists in Darwin's time. Species can survive and remain unchanged for millions of years.
Towards a modern view Paleontologists now generally agree that stasis — where species may persist in recognizably the same form, with little or no accumulated change, for millions of years (5-10 million in marine species; somewhat shorter durations in the more volatile terrestrial environments) — is a common phenomenon. Nineteenth century evolutionists essentially ignored stasis, so contrary to the Darwinian perspective it did seem. But Eldredge and Gould, in their notion of “punctuated equilibria,” saw that stasis fits in well with the Dobzhansky-Mayr notion of speciation: species arise by a process of splitting; this may happen relatively quickly (5,000-50,000 years, say) compared with the vastly longer periods of time in a species’ history; and it all occurs between a species’ origin via speciation and its eventual extinction. Examining stasis But why such stability? What, in other words, causes stasis? Ecologists and evolutionary biologists have recently joined in the search for explanations of stasis. Currently, two general categories of explanation of the evolutionary phenomenon seem to be favored: View #1 Instead of prompting adaptive change through natural selection, environmental change instead causes organisms to seek familiar habitats to which they are already adapted. In other words, “habitat tracking,” rather than “adaptation tracking,” is the most expected biological reaction to environmental change — which is now understood to be inevitable. For example: In environmental upheaval, some species migrate to habitats to which they are adapted. During the past 1.65 million years, there have been four major, and many minor, episodes of global cooling resulting in the southward surge of huge fields of glacial ice in both North America and Eurasia. Yet, despite this rhythmically cyclical pattern of profound climate change, extinction and evolution throughout the Pleistocene were surprisingly negligible. Instead, ecosystems (e.g., tundra, boreal forest, mixed hardwood forest, etc.) migrated south in front of the advancing glaciers. Though there was much disruption, most plant species (through their seed propagules) and animal species were able to migrate, find “recognizable” habitat, and survive pretty much unchanged throughout the Pleistocene Epoch. Botanist Margaret Davis6 and colleagues, and entomologist G. R. Coope7 have provided especially well-documented and graphic examples of habitat tracking as a source of survival of species throughout the Pleistocene. View #2 Species also remain stable because of the very nature of their internal structural organization; all species are broken up into local populations that are integrated into local ecosystems. This means that: Natural selection acts differently on related species living in different habitats. A population of the American robin, Turdus migratorius, faces a very different existence in, say, the wet woodlands of the Adirondack Mountains in the Northeastern United States, compared to what the local populations of the same species experience in Santa Fe, New Mexico. Such disjunct populations encounter very different food, water availability, ambient temperatures, potential predators, and possibly even disease vectors. This of course implies that natural selection (as initially seen by Sewall Wright8,9) will act very differently on such disjunct populations.
Many species have extensive geographic ranges similar to the American robin; it is difficult to imagine how natural selection under such circumstances can “push” an entire species into a single evolutionary direction over a long expanse of geological time. Rather, the semi-separate evolutionary histories of local populations imply that no net change will accrue species-wide through geological time. Speciation is often the result of environmental adaptation. The phenomenon of stasis — by now empirically documented as typical of most species of Metazoa and Plantae for at least the past half billion years — means that most adaptive evolutionary change actually occurs in conjunction with speciation. This is a rather surprising result on the face of it, and certainly not one anticipated by Dobzhansky, Mayr or other biologists who initially established the importance of species and speciation in the evolutionary process. For why should it be that the origin of species — new reproductive communities — should also entail, as a general rule, most adaptive evolutionary change in general? Yet that is what the fossil record of life’s evolution seems to tell us. Punctuated equilibrium theory: long periods of stability followed by abrupt extinction of species. Current thinking on speciation Light on these crucial evolutionary issues has been shed over the past twenty years. Key to the solution is the documentation, by paleontologists working up and down the geological record of the entire history of life, that evolution occurs in coordinated fashion in many different species lineages living in a regional ecological setting. For example: The original example of “punctuated equilibria” involved patterns of stasis and evolutionary change in trilobites of the Phacops rana species group.4 These trilobites are just one of perhaps as many as 300 such species groups preserved in a 6 to 8 million-year long span of time beginning some 380 million years ago. They are found in Middle Devonian rocks that record the history of marine environments, species and ecosystems in all of Eastern and Central North America. Traditionally, evolutionary biologists have focused on single evolutionary lineages. Though many other species (of brachiopods, mollusks, bryozoans, etc.) also seemed to be showing patterns of stasis, origination and extinction very similar to the trilobites I was studying, I deferred studies of all these very different species to the appropriate experts. This is the main reason why the important pattern of “coordinated stasis” escaped attention for so long: paleontologists by and large must stick to the groups with which they have developed professional expertise. New, unrelated species often appear at about the same time after an extinction event. The term coordinated stasis refers to a pattern10 where most of the species appear at roughly the same time; species persist for millions of years, all more-or-less in stasis; then, abruptly and again in lockstep fashion, a high percentage disappear in a category of ecological/evolutionary event that Elisabeth Vrba refers to as a “turnover pulse.”11 This pattern can be seen in Cambrian trilobites 500 million years ago, marine invertebrate faunas from the mid-Paleozoic through the Mesozoic and Cenozoic, dinosaur faunas of the Mesozoic and in mammalian faunas of the Cenozoic.
In other words, the phenomena associated with “punctuated equilibria” are regionally ecosystem-wide, and involve many different, unrelated species — species whose patterns of evolution, persistence, and extinction occur in near simultaneous fashion. This, perhaps the dominant signal in the evolutionary history of life, is thus profoundly “cross-genealogical” — meaning that such turnover events have causal roots that are deeply ecological — and arise, at base, from large-scale changes in the physical environment. Here, in other words, we finally understand how the physical environment, via ecological systems, impinges on the processes of speciation and extinction. Here, briefly, are two examples that reveal the nature, and inner dynamic workings, of these ecological/evolutionary patterns: Example #1 Brett and Baird have documented some eight successive faunas of marine invertebrates in the Appalachian Basin of the Middle Paleozoic.10 Marine invertebrate pattern: about 20% survive after each major extinction. Each fauna survives an average of 5-7 million years. Ranging from only a few dozen known species to the 300 or more known from the Middle Devonian sequence mentioned above, most of the component species are present at the very beginning of the sequence. Most persist unchanged throughout the sequence, but then, abruptly, most disappear. Only, on average, 20% of the species manage to survive to the next successive faunal interval. The new species that comprise the next succeeding marine regional system are either newly evolved or migrate in from adjacent regions. Causes of the ecosystem collapse/extinction/new speciation events are incompletely understood, but apparently involve abrupt changes in sea level — most likely reflecting global cooling or warming events, which lower or raise sea level, respectively, by altering the size of the earth’s ice caps. Example #2 Vrba’s original example of a “turnover pulse” is based on events culminating at about 2.5 million years ago in Eastern and Central Africa.11 New species either appeared or migrated to grasslands after an extinction event in Africa. A global cooling event, beginning circa 2.8 million years ago, apparently caused a relatively abrupt reorganization of African ecosystems after about 300 thousand years. Cooler and drier conditions brought about a radical change in African vegetation patterns, where large expanses of grasslands replaced the formerly dominant wet woodlands. Ecologically generalized species, such as impalas, managed to survive unscathed, but many wet-woodland-adapted species (e.g., antelope) disappeared — either through habitat tracking or outright extinction. Concomitantly, animal species adapted to open savannahs soon appeared — either by habitat tracking of existing species into the region or via actual speciation. These included two new hominid species, such as the first members of the genus Homo, along with the oldest known stone tools, which also appear at 2.5 million years ago. Global cooling triggered new ecosystems and new species 2.5 million years ago. It is Vrba’s special insight that ecosystem decay and fragmentation may lead, not only to habitat tracking in and out of a region, and to true extinction, but to true speciation as well. Recall that fragmentation of a species’ original geographic range, as first developed fully by Dobzhansky and Mayr, is a prerequisite to allopatric speciation. 
Also, note the date of this African disturbance: 2.5 million years ago — just when the Isthmus of Panama rose — and, according to some geologists, created the Gulf Stream, thought by some to have triggered the global cooling pulse that had such a profound effect on the African biota. Elsewhere, I have also suggested that the patterns of speciation in South America that occasioned Haffer’s “refugium” hypothesis in all likelihood reflect the very same sets of ecological and evolutionary processes — through the very same causes12 — as documented and discussed by Vrba.11 Conclusion Speciation, then, is integral to the evolutionary process: Natural selection shapes most evolutionary adaptive change nearly simultaneously in genetically independent lineages as speciation is triggered by extinction in “turnover” events. When physical environmental events that go “too far too fast” start triggering regional, species-level extinction, then evolutionary change — predominantly via speciation — occurs. In times of environmental normalcy, speciation and species-wide evolutionary change are comparatively rare. © 2000, American Institute of Biological Sciences. Paleontologist Dr. Niles Eldredge is the Curator-in-Chief of the permanent exhibition “Hall of Biodiversity” at the American Museum of Natural History and adjunct professor at the City University of New York. He has devoted his career to examining evolutionary theory through the fossil record, publishing his views in more than 160 scientific articles, reviews, and books.
i don't know
Which kind of organisms are likely to show a 'taxis'?
HOW AND WHY WE CLASSIFY LIVING ORGANISMS Content Updated: 20th July 2008 Generally-speaking, we humans have a desire to label and categorize things – hands up those who keep their t-shirts in a separate drawer to their underwear and/or arrange them in order of most recently worn or colour. Coupled with our desire for order (some teenagers excluded!), is an equally strong desire to name things we’ve sorted. The great Chinese thinker and philosopher K'ung Fu Tzu (better known by his Latin name: Confucius) is widely credited as being the source of the old Chinese proverb: “The beginning of wisdom is to call things by their right name.” But, what’s the point of naming things? Why go to the hassle of trying to give every novel object its own name? We name objects because it makes our life easier. Let’s say you’re sitting on the sofa and you want your friend to pass the remote so you can see what else is on the TV – this process is rather difficult without names. A request like, “Please pass the thing on the thingy. I want to see what’s on the whats-am-a-jig”, is likely to meet with confusion. The request is easier for the other person to follow if things have names: “Please pass the remote on the coffee table. I want to see what’s on the TV”. Now, it’s true that you might be able to gesticulate at your friend until he or she either gets the idea, or misinterprets and takes offence, but what if you can’t see the person you need help from – charades doesn’t help then. Imagine that you’re sitting on the train going to work when you remember you forgot to get the pie out of the freezer to defrost in time for dinner; fortunately your partner has the day off and is at home. So, you phone up and ask “Can you get the thing out of the thingy so it’s thingy-ed in time for what’s-its-name?” Again, confusion reigns. Gesticulating won’t help because the other person can’t see you (although it might make a dull train ride more interesting for your fellow passengers!). The instructions can be followed when we insert the names: “Can you get the pie out of the freezer so that it’s defrosted in time for dinner?” So, the act of naming is a matter of convenience – whether the objects are pieces or furniture, bits of machinery, or animals we assign them names because it makes life a heck of a lot easier for us. We, for example, call a ‘fish’ with a cartilaginous skeleton and between five and seven pairs of gills a “shark”. This allows us to tell another person what animal we’re looking at or talking about. The use of a name certainly helps, but not without problems. Telling someone that you went diving with sharks while on holiday is kinda like saying you went out for dinner with some primates; it’s not quite as specific as we might want because there are lots of different ‘types’ of primates (and sharks). Consequently, to make our meaning as clear as possible, objects (be they animals, plants, bacteria, furniture, tools, etc.) are split into as narrow groups as possible and each group is given a name. So, for example, the group of ‘fish’ we call sharks gets further split up into different types of sharks based largely on how they look (their “morphology”), both internally (i.e. their skeleton, internal organs etc.) and externally (i.e. fins, gills, skin, colour etc.). Large groups are then split into smaller (i.e. 
more specific) ones and so on down the line until you have a group containing all the animals considered to be exactly the same in terms of the features we’re looking at (these can be morphological, genetic, ecological, biochemical, even behavioural): this is the species level (we’ll look at this in more detail later). Humans, chimpanzees, great white sharks, blackbirds, palmate newts and red squirrels are all examples of species. Some taxonomists opt to take the splitting below the species level and group animals into subspecies, infraspecies and forms (among others). Perhaps the extreme of this splitting is found in the human species, where every individual of the species is given his/her own name at birth. The problem is that this gets very complicated very quickly as the list of viable names soon runs out and leads to the confusing situation of several individuals with the same name – think how confusing it can be if there are two or three people in the office with the same name. Consequently, the branch of Science known as “Taxonomy” (from the Greek word taxis, meaning “order” or “rank” and –nomia, meaning “law”) is largely concerned with the grouping of organisms down to the species level. This process of giving each species a name is all well and good (it certainly makes it easier to be precise in our communications), but there’s a snag. In order for the system to work, everyone must call that “something” by the same (universally agreed) name – if the process isn’t regulated we can run into problems. Such problems are rife with “common names”. Here in the UK, we have an awesome bird of prey called a Peregrine falcon (the fastest bird in the world, clocked at speeds of 87mph / 140kmph during a dive - left). In North America, however, the same bird is more commonly known as the Duck hawk, after its impressive ability to nab ducks in mid-air. Anyone who wasn’t aware of this ‘double identity’ could reasonably assume that we were talking about two different species. The problem gets exponentially more complicated when local names, different languages and different dialects are taken into account. So, how do we get around this? Well, we do it by giving most species known to Science two names: a vernacular (common) and a scientific (often referred to as Latin, but more accurately a Latinized-Greek) one. While it’s true that not all species have a vernacular name (e.g. many bacteria, mosses, lichens etc.), this isn’t a major issue because it is the Latin name that’s the important one; it’s designed to remove confusion caused by dual identities. I will return to our falcon example shortly, but first let’s take a brief look at how we arrived at the classification system we embrace today and how we use it to assign animals a unique Latin name. Birth of the binomen Carl von Linné (also variously referred to as Carl Linnaeus, Carolus Linnaeus and, more colloquially, the ‘Father of Taxonomy’), is largely responsible for the way we classify creatures today. Linné was born in Sweden during May of 1707, and transferred from a study of medicine to a study of plants in 1728. In 1735, he returned to his study of medicine, completing it before going on to publish the first edition of his classification of living things (titled Systema Naturae), in which he listed all types of animals that he knew of. Systema Naturae began life as a small pamphlet but, by 1758 -- when the tenth edition was published -- it had become a multi-volume opus. 
Not only did Linnaeus list all the animals he knew about, he also grouped them according to his own hierarchical scheme of perceived relatedness (i.e. how similar they looked to one another). Despite some controversial aspects, Linnaeus’ scheme has proven to be robust and much of it remains to this day. The system comprises a series of levels, or categories, called taxa (singular being taxon) and assigns each species a binominal name. All scientific names ascribed to species are initially binomial (i.e. they are composed of two parts), consisting of a generic (i.e. genus-related) and a specific (i.e. species-related) epithet – where further splitting occurs, the organism may be assigned a trinomial (three-part) name, to show that it’s a subspecies. In the standard taxonomic hierarchy, there are seven taxa, with the species name sitting at the lowest (referred to as the “basic”) level. In other words, (subspecies notwithstanding) the species name represents the narrowest grouping. While seven taxa are pretty standard in a classification scheme, the total number can be higher – the largest I’ve seen has 76! The number of taxa (and the names ascribed to them) can also vary according to whether the species you’re classifying is an animal, plant, bacteria etc. Nonetheless, regardless of the number of taxa sitting above it, the species level is the only one that can truly be considered “natural”, because everything above it is largely subjective – different taxonomists may place a given species in different taxa, but the species epithet will remain the same. One for all and all for one Each taxonomist generally has his or her own ideas about how animals and plants are related to each other, and few ever agree on a single (universal) taxonomic scheme for anything. Fortunately, this is not an insurmountable issue; these fanciful Latin/Greek names are constructs of our own convenience (serving to satiate our desire for grouping things) and have no relevance in the wider world of Nature. After all, whether I file my ELO CD under ‘Rock’ or ‘Pop’ doesn’t change the CD itself, any more than choosing to classify a duck within the Tubulinida (the class containing the amoebas) makes it less of a bird and more of a protozoa. The scheme we use just represents how closely related we think the critter in question is to other critters assigned to different species. As a result, no single taxonomic scheme is inherently ‘better’ than another. All that really matters is that the resulting scheme best fits the evidence you have. Having read this far, you might be wondering what the point of classifying animals is if it has little relevance in Nature. Well, classification is essential when it comes to drawing up protection for species. In a fascinating article to Scientific American (June 2008), science writer Carl Zimmer provides a nice example of this involving wolves. In the southern USA, there is a considerable conservation effort to save the Red wolf, which is considered a separate species from the wolves in Canada and the eastern USA. Some scientists, however, argue that the red wolf is just an isolated population of the Canadian species, which -- if true -- means that the US government hasn’t actually been saving a species from extinction, because thousands still reside just across the border. In his article Mr. Zimmer also notes that proper classification of microbes could allow public health workers to anticipate outbreaks of disease and prepare a response. 
So, the point here is that taxonomy is about more than just scientists arguing over which scheme they think best suits a given species; it has deep roots in our understanding of species relationships and in the protection of the natural world. Anyway, enough with the preamble – how do we actually put plants, animals and microbes into these groups? Taxonomic levels I have already mentioned that there are at least 76 taxonomic levels that one could use to build a detailed classification. Seven taxa are, however, usually sufficient, and they are: Kingdom; Phylum (from the Greek phulon, meaning “race”); Class; Order; Family; Genus; and Species. Precisely how these are defined and allocated varies according to the type of organism (animal, plant, bacteria etc.) you’re trying to classify. In Linnaeus’ original scheme, objects were grouped into one of three Kingdoms: Animalia (animals); Vegetabilia (plants); or Mineralia (minerals) – hence the familiar “animal, vegetable or mineral?” expression. As our knowledge of the natural world grew (and keeps growing) taxonomists found that these three kingdoms weren’t sufficient to do justice to the enormous diversity of life on Earth. We now recognize six kingdoms: Plantae (plants), Animalia (animals), Fungi (fungi and moulds), Eubacteria (the bacteria – sometimes called Monera); Archaea (microbes similar to bacteria); and the Protista (something of a dumping ground for the eukaryotic organisms, many of them single-celled, that don’t fit into any of the aforementioned groups – sometimes called Protoctista). Despite some quite apparent differences between the two, a few textbooks merge the Eubacteria and Archaea into a single kingdom: the Prokaryota. Depending on the scheme you choose to follow (and they’re changing all the time!), the kingdoms break down roughly as follows: * Plantae is divided into about 12 phyla and comprises about 270,000 species. * Animalia is split into about 33 phyla and contains about 800,000 species (although this is probably a drastic underestimate of the true figure). * Fungi have five phyla and about 100,000 species. * Eubacteria have three phyla and a number of species that is difficult even to estimate – some authors suggest 1,000,000,000 (a billion) but even this could be a considerable underestimate! * Archaea are poorly known and there are currently three main (and five tentative) phyla that have been created based largely on laboratory cultures (estimates of total phyla range from 18 to 23). The most recent list I can find (1999) contains 209 species. * Protista comprise some 20 to 50 phyla and about 23,000+ species. If we dig a little deeper and look at an example of a ‘standard’ classification, we can see how these taxa are arranged. In the structure below I have set out the currently accepted taxonomic scheme for that most infamous of all sharks: the Great White. Kingdom: Animalia (mobile critters; have many cells; can’t make their own food) Phylum: Chordata (flexible skeletal rod with accompanying nerves) Class: Chondrichthyes (‘fish’ with a cartilaginous skeleton) Order: Lamniformes (‘Mackerel’ sharks) Family: Lamnidae (the white shark and its close relatives, such as the makos) Genus: Carcharodon (from the Greek carcharos meaning “ragged” or “pointed” and odon meaning “tooth”) Species: carcharias (Greek for “shark”) Working down from the above, the scheme moves from a very broad taxon (i.e.
Animalia – hundreds of thousands of species), to a slightly narrower one (the class containing just the cartilaginous fishes – almost 1,500 species), to a narrower one still (containing only ‘mackerel’ sharks – about 15 species) and so on down to the narrowest one (i.e. species – just one). Origin of a scientific name The scientific name given to an organism is usually based either on a description of it, the region in which the animal/plant is found or the person describing it for the first time (or a combination of these). The name Myotis macrotarsus, for example, was given to a cave-roosting bat from the Philippines and translates roughly to “mouse-eared bat with big feet” – an apt description of the critter! Similarly, in our scheme above, the genus and species epithet combine to form a rather pertinent description of the Great White shark (pointed-toothed shark!). The Giant otter, on the other hand, was first described from Brazil and is given the scientific name Pteronura brasiliensis, while the South African Lantern shark, Etmopterus compagnoi, was named after taxonomist Professor Leonard Compagno (at the South African Museum). In a few instances, an organism may be given a scientific name that illustrates a particular behaviour – a good example of this can be found in fish called Scats. Scats are allocated the genus Scatophagus, from the Greek skatos meaning “dung” or “faeces” and phagein meaning “to eat”, after their penchant for eating monkey excrement that falls into the water. We have looked at several examples of how the scientific names have Latin/ancient Greek origins. It should be mentioned, however, that all taxa names -- not just species names -- have their roots in Greek or Latin and can also be roughly translated into English. Returning to our White shark scheme, for example, the order Lamniformes can be broken down into Lamni- (from the Greek lamna meaning “voracious fish”) and –formes (from the Latin forma, meaning “shape”) so that the sharks in this order are all of “voracious fish shape”. At this point, you might be wondering why we bother with Latin/Greek names. Why not just use English or any other of the 6,912 globally recognized languages? Latin was once widely used among Renaissance scholars throughout what is now Europe, allowing people in one country to effectively communicate with someone else in another country (much like English does today). Latin and ancient Greek, however, are both considered ‘dead’ languages, which means that they’re no longer learnt as a native tongue and are thus no longer evolving. To put it another way, the Latin word forma means “shape” today and meant “shape” a century ago – as you can imagine, this is not the case for many of the languages in use today, especially English. Above the species level, most taxa have standardized (formal and informal) suffixes, which helps to clarify their position. For example, almost all families are formed by adding the ending -idae (animals) or -aceae (plants) to the stem of the genus name – e.g. a major genus within the dog family (Canidae) is Canis. The informal suffix for families is usually –id; this makes the informal name for the Canidae simply Canid. Similar rules work for subfamilies (-inae, informally -ine), but it is rather more complicated for orders. Speaking and writing Latin names can often appear rather daunting, especially when it comes to trying to pronounce them.
For example, take Acrocephalus schoenobaenus (the Latin name of a bird called the Sedge warbler), Plectrophenax nivalis (Snow bunting, another small bird) or Mertensiella luschani (a salamander from the Aegean). The names get longer if we consider other groups, such as the dinosaurs and other prehistoric beasts: Archaeotherium, Carcharodontosaurus, Parasaurolophus, etc. How would you go about pronouncing those? The best advice is to break down and ‘sound out’ the words. So, for example, our sedge warbler would be broken down something like: Acro-ceph-alus scho-en-o-bae-nus. In Latin most of the vowels are short, rather than long, and ch is pronounced as a “k”, ae as “ee”, and ph as “f”. So, if you ‘sound out’ the aforementioned name it would be: Acro-cef-a-lus skoo-en-o-bee-nus. A little practice and you’ll soon pick it up! One final point to remember is that not everyone agrees exactly how Latin or ancient Greek words should be pronounced (George Hempl wrote about this at some length in 1898), so don’t be surprised if you hear others ‘correcting’ you, or pronouncing them differently (don’t take it personally!). When it comes to writing Latin names there are a couple of rules that should be followed. The first is that the genus is always capitalized (i.e. begins with a capital letter), while the specific name is not. So, in the case of our sedge warbler, the Latin name should always be Acrocephalus schoenobaenus and not Acrocephalus Schoenobaenus or acrocephalus schoenobaenus. Also, note that the scientific name should be italicized wherever possible and underlined where italics are not available (such as in handwritten documents). Finally, only the genus and species epithets should be italicized/underlined – the kingdom, phylum, order, family etc., should not be in italics (despite having the same Greek/Latin origin). Regulation of scientific names The ultimate goal of binomial nomenclature -- nomenclature being a set, or system, of names or terms -- is to remove the confusion that vernacular (common) names sometimes cause. Remember back to our example of the Peregrine falcon (known in the USA as the Duck hawk). Despite having two (indeed, several) common names, it only has one Latin name: Falco peregrinus (falco is Latin for “hawk”, while peregrinus is Latin for “wandering”). If you were to write “I saw a peregrine (Falco peregrinus) today” it should leave people (both in the UK and in the USA) in little doubt which bird you’re talking about! So, with this in mind, it becomes apparent that Latin names only work if each species has one -- and only one -- binomen. This is indeed the case and no two species can have the same scientific name – or, more specifically, the same combination of generic and specific epithets (the specific epithet alone may be re-used in other genera). The task of governing the system for ensuring that every animal has a unique and universally accepted scientific (binomial) name falls to the International Commission on Zoological Nomenclature (ICZN). Founded in 1895, the ICZN now has 28 members spread across 20 countries and sees some 2000 new generic and 15,000 new specific names added to (or restored in) the zoological literature each year. The ICZN has the final say on whether or not a proposed scientific name should be uniformly accepted by the zoological community. Opinions of the ICZN are published in their own quarterly journal, the Bulletin of Zoological Nomenclature.
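The capitalisation and italics rules just described are mechanical enough to express in a few lines of code. Below is a minimal sketch in Python (purely illustrative; the function name and the markup options are invented here and are not part of any nomenclatural code) that tidies a genus and species epithet according to those conventions. Run on the sedge warbler and peregrine examples above, it returns *Acrocephalus schoenobaenus* and _Falco peregrinus_ however the input happened to be capitalised.

# A minimal sketch of the writing conventions described above: genus
# capitalised, species epithet lower-case, and the whole binomen italicised
# (or underlined where italics are unavailable). The function name and the
# "markup" options are invented for illustration; they are not part of any
# nomenclatural code.

def format_binomen(genus: str, species: str, markup: str = "italic") -> str:
    """Return a tidied binomial name following the conventions above."""
    name = f"{genus.strip().capitalize()} {species.strip().lower()}"
    if markup == "italic":      # e.g. Markdown/HTML-style emphasis
        return f"*{name}*"
    if markup == "underline":   # fallback where italics are not available
        return f"_{name}_"
    return name

if __name__ == "__main__":
    # The sedge warbler and peregrine examples from the text
    print(format_binomen("acrocephalus", "Schoenobaenus"))            # *Acrocephalus schoenobaenus*
    print(format_binomen("FALCO", "PEREGRINUS", markup="underline"))  # _Falco peregrinus_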
The scientific names of all plants and fungi are regulated by two primary codes -- The International Code of Botanical Nomenclature and The International Code of Nomenclature of Cultivated Plants (published by the International Botanical Congress – IBC) -- while the naming of bacteria is governed by the International Code of Nomenclature of Bacteria (published by the International Committee on Systematics of Prokaryotes – ICSP). The classification of viruses is currently slightly different to other groups, but is overseen by the International Committee on Taxonomy of Viruses (ICTV). Overall, in order for a species to be accepted as distinct from any other, a formal description of it must be published in the scientific literature and a “type” (representative) specimen must be preserved (in a museum or university) so it can be used as a standard by which to compare other specimens. When considering which names to attribute to a species, it is the oldest valid (published) name that has priority – it is the overseeing authority’s (i.e. ICZN, IBC, ICTV or ICSP) job to settle any nomenclatorial disputes. Cladistics: Ancestry, shared features and the task of classification The object of any good biological taxonomic system is that it represents what we currently know of the evolutionary relationships of its subjects. Most taxonomic schemes arrange organisms in terms of the shared characteristics that they possess: probably the most popular way of doing this is with cladistics (from the Greek klados, meaning “branch”). The basic objective of cladistics is to provide a scheme showing the most likely evolutionary pathway for a given group or species based on the characters that it shares with its relatives. The premise behind cladistics is delightfully simple: if the feature that you’re looking at is present in two different organisms then it is likely to have been inherited from their most recent common ancestor. That said, as the late elasmobranch biologist Aidan Martin noted in his article on the subject: not all features are equally useful when looking at ancestry. Features that abound among different organisms are retained because they suit a purpose, even though their owners may since have diverged from the common ancestor (Mr Martin referred to these as “evolutionary hangers on”). In the article, Aidan wrote: “… a two-opening gut (with a mouth at one end and a cloaca or anus at the other) is an ancestral character. Both you and a cockroach have a two-opening gut, but you would probably take offence if I were to suggest that you and a cockroach were closely related …” In effect, with cladistics we are looking for modifications of long-running characteristics; variations on a theme, if you like. Consequently, in order to undertake a cladistic analysis we must translate whatever it is we observe into discrete characters. The ability to translate traits into discrete units (i.e. present or absent; no in-betweens!) means that cladistics lends itself well to computer analysis. The language of taxonomy can be a little confusing and I will gloss over most of the terminology as it doesn’t concern us here. There are, nonetheless, a few ‘central terms’ that crop up a lot. When it comes to looking at traits, there are two main types: homoplasic and homologous. Homoplasic (not to be confused with homoplastic!) traits are those that bear no relationship to the relatedness of two individuals – they have remained because they suit the environment in which their owner lives.
So, for example, sharks and dolphins share a similar body form -- i.e. fusiform (torpedo-shaped) body, with similar-looking fin arrangements -- because this is best suited to an aquatic lifestyle; they’re not closely related (this is called “Convergent Evolution”). When taxonomists use the term “homology”, they’re talking about a similarity of traits in two or more species (or groups) that’s a result of them sharing a common ancestor at some time in the past. When thinking about homologies, there are two basic character ‘states’: “plesiomorphic” and “apomorphic” (or “derived”). When you’re comparing two organisms, they will invariably exhibit characters that are shared widely with other groups or species (these are the plesiomorphic, or “ancestral” traits) and others that are unique to them or their group (these are the apomorphic/derived traits). It is sometimes said that plesiomorphic/ancestral characters are “primitive”, while apomorphic/derived traits are “advanced” – most taxonomists shy away from these terms because they are easily misinterpreted. So, a trait that is present in lots of different species or groups (such as the twin-opening gut) is plesiomorphic and doesn’t give us any clues as to our species’ ancestry. Conversely, those features that are present only in an ancestor and its descendants are apomorphic and can be used to assess taxonomic relationships. Characters that are unique to a species (i.e. have arisen within the species and aren’t present in any ancestors) are referred to as “autapomorphic”. It is important to recognise that all these terms are relative; a character can be an apomorphy at one branch of your tree, but plesiomorphic at another. Feathers, for example, characterize (i.e. are apomorphic for) the group we call Aves (birds), but are plesiomorphic for peregrine falcons - in other words, feathers can be used to define the Aves, but not to define peregrine falcons (because all other birds have feathers, so the trait isn't taxonomically unique to this species). In the above cladogram, I've used coloured dots to represent characters or traits present in a group of species. From the above we can see that dark blue dots indicate a synapomorphy because it arose in Species B and is shared by all of its descendants. Conversely, the pale blue dots represent a plesiomorphic trait because it is present in Species A but only some of its descendants (it's missing in F, G and I). Traits that have arisen in a species and are unique to that species are called autapomorphies. Species D and E share more traits in common (i.e. more coloured dots) than any other pair, making them sister species. If we take Species B, D, E, F and H -- an ancestor (B) and all of its descendants -- we have a clade - or, to put it another way, Group 1 is monophyletic. If we extend the red box to the left so that it includes Species A, but still leaves out C, G and I, then the group would be paraphyletic - in other words, the group contains an ancestor and some of its descendants. When a character is present in two (or more) species and originated in their most recent common ancestor, the feature is called a “synapomorphy”. Finally, a character shared by a number of groups or species having originated in a distant ancestor (i.e. older than the most recent common one) is referred to as “symplesiomorphic”.
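To make the "coloured dots" idea a little more concrete, here is a toy Python sketch; the species labels and trait names are hypothetical, echoing the cladogram description above rather than any real data set. It picks out the pair of species sharing the most characters as candidate sister taxa and lists each species' autapomorphies. The caveat from the text still applies: a raw count like this cannot tell derived characters (synapomorphies) from ancestral ones, so it is only a first pass, not a cladistic analysis.

# A toy illustration of the "coloured dots" idea: hypothetical species and
# trait names only, not real data. Shared characters suggest candidate
# sister taxa; characters found in just one species are autapomorphies.
# Caveat from the text: a raw count cannot distinguish derived characters
# (synapomorphies) from ancestral ones, so treat the output as a sketch.
from itertools import combinations

characters = {
    "D": {"blue", "red", "green"},
    "E": {"blue", "red", "yellow"},
    "F": {"blue", "purple"},
    "H": {"blue"},
}

# The pair sharing the most characters is the candidate sister-taxon pair.
best_pair = max(combinations(characters, 2),
                key=lambda pair: len(characters[pair[0]] & characters[pair[1]]))
print("Candidate sister taxa:", best_pair)

# Characters present in exactly one species are that species' autapomorphies.
for sp in characters:
    others = set().union(*(traits for name, traits in characters.items() if name != sp))
    unique = characters[sp] - others
    if unique:
        print(f"Autapomorphies of {sp}: {sorted(unique)}")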
When you have a group that, based on synapomorphies, contains the common ancestor and all species descended from it you have what taxonomists refer to as a “monophyletic” (meaning “one race”) group – these are also sometimes referred to as “natural groups” or “clades”. The opposite of this -- where you have a group that contains an ancestor and some of its descendants -- is a “paraphyletic” (“near race”) group. A third option is the “polyphyletic” group, which is based on homoplasy and doesn’t contain a common ancestor. So, in order to build our scheme, we need to identify the organisms in which novel characteristics first crop up (taxonomists call these “branching points”). You start out with a group of species and some data (genetic, anatomical, even behavioural) that characterizes them; you choose your characters/features and then you ‘weigh’ them in terms of how important you consider them to be (this is perhaps the most contentious step in the process and different taxonomists frequently disagree on which characters should be used and how important they are). Finally, you organize your subjects into groups on the basis of how many synapomorphies they possess. The end result is a graph (called a “cladogram”) that represents the distribution of the characters; from this we can start to establish possible evolutionary relationships. Ultimately, the more synapomorphies there are among two species or groups, the more recently they shared a common ancestor and thus the more closely related they are likely to be. If you find all of these groupings and terms mind boggling (and you're not alone!) just remember that there is a difference between describing something and defining it. Although the terms may appear superficially similar, they are actually crucially different and it's the difference that underpins our cladistic grouping. Returning to our peregrine falcon example, you might describe it (see photo above) as a medium-sized predatory bird with a mottled brown-to-grey back, white belly, flecked with brown and a bright yellow base to its beak. While someone else might be able to identify a peregrine based on this, does this really define what a peregrine is? The answer is no; there are several raptors with similar body colouration, and bright yellow bill-bases. So, in order to define what makes a peregrine a peregrine, we have to think about those features unique to it - those that aren't shared by any other creature. Only then can we say that the bird is a peregrine and not, say, a hobby (Falco subbuteo). Displaying taxonomic relationships graphically Diagrams represent a convenient method of expressing relatedness – in the case of taxonomic relationships they generally take the form of either a cladogram or a “phylogenetic tree”. Often, the terms cladogram and tree are used interchangeably -- not least because they share the same basic appearance -- but some taxonomists argue that they aren’t the same things at all. Effectively, whether you consider cladograms types of trees or not, the main difference between the two is that a cladogram doesn’t make a statement about evolutionary pathways (a tree does); instead, all it shows is the distribution of your chosen characters Cladograms A cladogram is a branched diagram that shows patterns of relatedness; they look similar to a family tree turned on its side (sometimes you’ll see it displayed vertically) and are read left-to-right (or bottom to top). In the example below, A represents the common ancestor of B, C & D. 
If you group A, B, C and D together they form a monophyletic clade (i.e. the group contains all descendants of a common ancestor). B and C share more synapomorphies than either species does with D, making them “sister taxa” (i.e. they are more closely related to each other than anything else). In terms of descriptive terminology for cladograms, the first line (connecting A to the main graph) is referred to as the “trunk” (of the tree) and each point where the line splits in two is called a node; the lines themselves are referred to as “lineages”. You could be forgiven for thinking that, looking at the above, cladograms infer evolutionary relationships: surely the example above implies that B, C and D evolved from A? Well, actually no! In most cases, there are many different ‘pathways’ that can lead to an observed pattern of relatedness (e.g. convergence); the fact that A and B share a character doesn’t mean that B necessarily inherited it from A. All we’re seeing above is the probability of relationship – in other words, how likely it is that B and C are more closely related to each other than to a third party (D). By this point, if you’re still with me, you may have noticed that if cladograms are created on the basis of the chosen characters and their ‘weighting’ (i.e. importance), then changing the weighting would result in a different graph being produced. You’d be correct. Consequently, taxonomists divide (here we go again!) cladograms into two groups: those that require only the minimum number of ‘steps’ -- i.e. gains, losses or modifications of a character -- necessary to explain the distribution of a character (these are the “parsimonious” or “optimal” cladograms) and those that require more steps (the “suboptimal” cladograms). In essence, the most parsimonious cladogram is the simplest, having the fewest ‘steps’ in it. The potential for different characters and weighting to alter the end result, however, means that the most parsimonious graph is not necessarily always the best choice. In the end, only when several analyses using different sets of data point in the same direction can you be relatively sure that any resulting tree paints an accurate picture of the evolution of your chosen group or species. Phylogenetic Trees Phylogenetic trees are branching diagrams -- possibly a type of cladogram, depending on your view! -- that represent possible evolutionary pathways. The trees have branches, the length of which is proportional to the predicted (or hypothesised) time between the divergence of the organisms, groups or sequences (depending what you’re looking at). The diagram on the left shows a basic cladogram, while that on the right presents one of 12 possible phylogenetic trees that can be built from the cladogram data. The graduated bar next to the tree can have various units, including time and base pairs (for genetic divergence). X and Z represent additional (possibly yet-to-be-discovered) species. The example above shows a cladogram (left) and one of the 12 possible phylogenetic trees that can be generated based on it. The cladogram shows that the lizard and salmon share more inherited traits (synapomorphies) than either does with the shark or lamprey – as a group, the lizard and salmon have more in common with the shark than they do with the lamprey. The tree suggests that a hypothetical ancestor (Z) gave rise to the lamprey and to the shark; the scheme then goes on to imply that a hypothetical descendent of the shark (X) gave rise to the salmon and the lizard. 
The bar down the left-hand side of the tree signifies when this is hypothesised to have happened (usually based on molecular data). The origin of species Following our trees to the end (their so-called “terminal taxa”) leaves us with that which we call a “species”; but what is a species, exactly? This is perhaps one of the most contentious questions in taxonomy. You’ve probably heard the term “species” used with an air of certainty, but we still don’t have an infallible definition of what makes something a species. The problem lies largely in our attempt to, as Charles Darwin put it, “define the indefinable”. The processes of evolution and speciation (the formation of new species) are continuous ones, which make it difficult to group the results – this explains why there are currently some 26 proposed definitions (concepts) of what a species is. Perhaps the most well known definition is the Biological Species Concept (BSC). The biological species concept proposes that two individuals (or groups) should be considered distinct species if they are no longer able to mate with each other and produce fertile offspring. To put it another way, under the BSC a species is a group of individuals that freely interbreed with each other under ‘natural conditions’ (another sticking point!) to produce offspring that can reproduce for themselves. Some argue that this definition is weakened by animals such as ligers and tigrons (lion and tiger hybrids). If a male lion mates with a female tiger, the resulting liger can be fertile; however, male ligers are sterile and so further liger-liger matings couldn’t result in fertilization (although a female liger was successfully mated with a male lion). Arguably, such cases could be overlooked because the two species are allopatric (i.e. they don’t live in the same regions), so matings in the wild are very unlikely to occur – none the less, there are reports of female tigers mating with lions. Similarly, a mule (horse-donkey hybrid) can sometimes be fertile, as can some other hybrids. The bigger problem with the BSC is what to do with animals like sponges, planarians and echinoderms that don’t reproduce sexually (the asexual species). Despite these issues, it is fair to say that the BSC works well for most animals. In a bid to address some of the gaps in the BSC, many other species concepts have been proposed: there are currently about 26 different published concepts! Each concept tries to provide an all-inclusive definition of what it means to be a species, but none are without their problems. In terms of practicality, some biologists lean towards the General Lineage Concept (GLC). The GLC states that as different lineages evolve and diverge their genotype (genetic make-up) and phenotype (physical appearance) change to the point where, eventually, you can assign an animal to one species or the other. So, in essence the GLC and BSC aren’t all that different. The GLC is saying that species are lineages that retain their integrity -- with respect to other lineages -- over time and space (i.e. they don’t merge -- interbreed -- with each other), while the BSC states that species form when populations become reproductively isolated from each other. The advent of molecular and genetic techniques has greatly enhanced our ability to assess what constitutes a species and untangle how that species fits in next to all the others. Molecular and genetic typing has seen to it that we are no longer restricted to basing our interpretations simply on how an organism looks.
Consequently, perhaps the biggest ‘rival’ to the BSC is now the Phylogenetic Species Concept (PSC), which does away with sex altogether. The PSC centres on monophyly; it states that related organisms share characters because they share a common ancestor. You start with large groups and (based on synapomorphies – sensu Niles Eldredge and Joel Cracraft) split them up into ever smaller ones until you arrive at a group that can be split no further: according to the PSC, this is a species. Some critics argue that the PSC leads to an ‘over splitting’ of species, although as Carl Zimmer points out in his article, many think that we should just go where the data lead us rather than worrying about the number of species we end up with. In the end, it seems that the best option is to consider as many lines of evidence as possible (ideally incorporating genetic data) when considering whether the critter you’re looking at is a species in its own right. When we consider behavioural, genetic and ecological evidence, some argue that we are in a good position to classify even the most difficult of organisms: the microbes. The jury is still very much out on the best way to proceed when it comes to defining a species, but the molecular and genetic tools at our disposal will no doubt play an increasingly large role in subsequent hypotheses. Moving the goalposts Those who do their best to follow the rather tumultuous world of taxonomy can often become confused and frustrated when species are re-classified; especially if this happens several times in a short period. A good example of this is the taxonomic history of the Sandtiger shark (Carcharias taurus), which Aidan Martin reviewed in an article on his site. The point to remember is that organisms aren’t re-classified capriciously or whimsically – any reassignments come about as a result of new evidence. Hopefully, as Science forges ahead it will allow taxonomists to get a better handle on the interrelationships of plants, fungi, animals and microorganisms, and changes, while almost inevitable, will occur less frequently. In the meantime, as Aidan put it: “Nature is messy; Science is tentative; as long as these truths remain relevant to biological research, scientific names will continue to be revised."
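Earlier, the most parsimonious cladogram was described as the one needing the fewest 'steps' (gains, losses or modifications of a character). A standard way to count those steps for a single character on a candidate tree is Fitch's small-parsimony algorithm; the Python sketch below applies it to a made-up tree shaped like the lamprey/shark/salmon/lizard example, with an invented "bony skeleton" character. The tree topology and character states are assumptions for illustration only, not data taken from the article. Summing such counts over all characters gives each candidate tree its parsimony score; the tree with the lowest total is the most parsimonious.

# A rough sketch of the "fewest steps" idea behind parsimonious cladograms:
# Fitch's small-parsimony count for one character on a rooted tree. The tree
# shape and the "bony skeleton" character states below are assumptions made
# up for illustration; they are not taken from the article.

def fitch_steps(tree, states):
    """Return (possible states, minimum changes) for a character on `tree`.

    `tree` is either a tip name (str) or a (left, right) pair of subtrees;
    `states` maps each tip name to its observed character state.
    """
    if isinstance(tree, str):                   # a tip needs no changes below it
        return {states[tree]}, 0
    left, l_steps = fitch_steps(tree[0], states)
    right, r_steps = fitch_steps(tree[1], states)
    if left & right:                            # overlap: no extra change here
        return left & right, l_steps + r_steps
    return left | right, l_steps + r_steps + 1  # disjoint sets: one change somewhere here

# (lamprey, (shark, (salmon, lizard))) with a made-up "bony skeleton" character
tree = ("lamprey", ("shark", ("salmon", "lizard")))
states = {"lamprey": "absent", "shark": "absent", "salmon": "present", "lizard": "present"}
print(fitch_steps(tree, states))   # ({'absent'}, 1): a single gain explains all four tips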
Protozoa
Which part of the brain regulates physiological stability in the body?
HOW AND WHY WE CLASSIFY LIVING ORGANISMS Content Updated: 20th July 2008 Generally-speaking, we humans have a desire to label and categorize things – hands up those who keep their t-shirts in a separate drawer to their underwear and/or arrange them in order of most recently worn or colour. Coupled with our desire for order (some teenagers excluded!), is an equally strong desire to name things we’ve sorted. The great Chinese thinker and philosopher K'ung Fu Tzu (better known by his Latin name: Confucius) is widely credited as being the source of the old Chinese proverb: “The beginning of wisdom is to call things by their right name.” But, what’s the point of naming things? Why go to the hassle of trying to give every novel object its own name? We name objects because it makes our life easier. Let’s say you’re sitting on the sofa and you want your friend to pass the remote so you can see what else is on the TV – this process is rather difficult without names. A request like, “Please pass the thing on the thingy. I want to see what’s on the whats-am-a-jig”, is likely to meet with confusion. The request is easier for the other person to follow if things have names: “Please pass the remote on the coffee table. I want to see what’s on the TV”. Now, it’s true that you might be able to gesticulate at your friend until he or she either gets the idea, or misinterprets and takes offence, but what if you can’t see the person you need help from – charades doesn’t help then. Imagine that you’re sitting on the train going to work when you remember you forgot to get the pie out of the freezer to defrost in time for dinner; fortunately your partner has the day off and is at home. So, you phone up and ask “Can you get the thing out of the thingy so it’s thingy-ed in time for what’s-its-name?” Again, confusion reigns. Gesticulating won’t help because the other person can’t see you (although it might make a dull train ride more interesting for your fellow passengers!). The instructions can be followed when we insert the names: “Can you get the pie out of the freezer so that it’s defrosted in time for dinner?” So, the act of naming is a matter of convenience – whether the objects are pieces or furniture, bits of machinery, or animals we assign them names because it makes life a heck of a lot easier for us. We, for example, call a ‘fish’ with a cartilaginous skeleton and between five and seven pairs of gills a “shark”. This allows us to tell another person what animal we’re looking at or talking about. The use of a name certainly helps, but not without problems. Telling someone that you went diving with sharks while on holiday is kinda like saying you went out for dinner with some primates; it’s not quite as specific as we might want because there are lots of different ‘types’ of primates (and sharks). Consequently, to make our meaning as clear as possible, objects (be they animals, plants, bacteria, furniture, tools, etc.) are split into as narrow groups as possible and each group is given a name. So, for example, the group of ‘fish’ we call sharks gets further split up into different types of sharks based largely on how they look (their “morphology”), both internally (i.e. their skeleton, internal organs etc.) and externally (i.e. fins, gills, skin, colour etc.). Large groups are then split into smaller (i.e. 
more specific) ones and so on down the line until you have a group containing all the animals considered to be exactly the same in terms of the features we’re looking at (these can be morphological, genetic, ecological, biochemical, even behavioural): this is the species level (we’ll look at this in more detail later). Humans, chimpanzees, great white sharks, blackbirds, palmate newts and red squirrels are all examples of species. Some taxonomists opt to take the splitting below the species level and group animals into subspecies, infraspecies and forms (among others). Perhaps the extreme of this splitting is found in the human species, where every individual of the species is given his/her own name at birth. The problem is that this gets very complicated very quickly as the list of viable names soon runs out and leads to the confusing situation of several individuals with the same name – think how confusing it can be if there are two or three people in the office with the same name. Consequently, the branch of Science known as “Taxonomy” (from the Greek word taxis, meaning “order” or “rank” and –nomia, meaning “law”) is largely concerned with the grouping of organisms down to the species level. This process of giving each species a name is all well and good (it certainly makes it easier to be precise in our communications), but there’s a snag. In order for the system to work, everyone must call that “something” by the same (universally agreed) name – if the process isn’t regulated we can run into problems. Such problems are rife with “common names”. Here in the UK, we have an awesome bird of prey called a Peregrine falcon (the fastest bird in the world, clocked at speeds of 87mph / 140kmph during a dive - left). In North America, however, the same bird is more commonly known as the Duck hawk, after its impressive ability to nab ducks in mid-air. Anyone who wasn’t aware of this ‘double identity’ could reasonably assume that we were talking about two different species. The problem gets exponentially more complicated when local names, different languages and different dialects are taken into account. So, how do we get around this? Well, we do it by giving most species known to Science two names: a vernacular (common) and a scientific (often referred to as Latin, but more accurately a Latinized-Greek) one. While it’s true that not all species have a vernacular name (e.g. many bacteria, mosses, lichens etc.), this isn’t a major issue because it is the Latin name that’s the important one; it’s designed to remove confusion caused by dual identities. I will return to our falcon example shortly, but first let’s take a brief look at how we arrived at the classification system we embrace today and how we use it to assign animals a unique Latin name. Birth of the binomen Carl von Linné (also variously referred to as Carl Linnaeus, Carolus Linnaeus and, more colloquially, the ‘Father of Taxonomy’), is largely responsible for the way we classify creatures today. Linné was born in Sweden during May of 1707, and transferred from a study of medicine to a study of plants in 1728. In 1735, he returned to his study of medicine, completing it before going on to publish the first edition of his classification of living things (titled Systema Naturae), in which he listed all types of animals that he knew of. Systema Naturae began life as a small pamphlet but, by 1758 -- when the tenth edition was published -- it had become a multi-volume opus. 
Not only did Linnaeus list all the animals he knew about, he also grouped them according to his own hierarchical scheme of perceived relatedness (i.e. how similar they looked to one another). Despite some controversial aspects, Linnaeus’ scheme has proven to be robust and much of it remains to this day. The system comprises a series of levels, or categories, called taxa (singular being taxon) and assigns each species a binominal name. All scientific names ascribed to species are initially binomial (i.e. they are composed of two parts), consisting of a generic (i.e. genus-related) and a specific (i.e. species-related) epithet – where further splitting occurs, the organism may be assigned a trinomial (three-part) name, to show that it’s a subspecies. In the standard taxonomic hierarchy, there are seven taxa, with the species name sitting at the lowest (referred to as the “basic”) level. In other words, (subspecies notwithstanding) the species name represents the narrowest grouping. While seven taxa are pretty standard in a classification scheme, the total number can be higher – the largest I’ve seen has 76! The number of taxa (and the names ascribed to them) can also vary according to whether the species you’re classifying is an animal, plant, bacteria etc. Nonetheless, regardless of the number of taxa sitting above it, the species level is the only one that can truly be considered “natural”, because everything above it is largely subjective – different taxonomists may place a given species in different taxa, but the species epithet will remain the same. One for all and all for one Each taxonomist generally has his or her own ideas about how animals and plants are related to each other, and few ever agree on a single (universal) taxonomic scheme for anything. Fortunately, this is not an insurmountable issue; these fanciful Latin/Greek names are constructs of our own convenience (serving to satiate our desire for grouping things) and have no relevance in the wider world of Nature. After all, whether I file my ELO CD under ‘Rock’ or ‘Pop’ doesn’t change the CD itself, any more than choosing to classify a duck within the Tubulinida (the class containing the amoebas) makes it less of a bird and more of a protozoa. The scheme we use just represents how closely related we think the critter in question is to other critters assigned to different species. As a result, no single taxonomic scheme is inherently ‘better’ than another. All that really matters is that the resulting scheme best fits the evidence you have. Having read this far, you might be wondering what the point of classifying animals is if it has little relevance in Nature. Well, classification is essential when it comes to drawing up protection for species. In a fascinating article to Scientific American (June 2008), science writer Carl Zimmer provides a nice example of this involving wolves. In the southern USA, there is a considerable conservation effort to save the Red wolf, which is considered a separate species from the wolves in Canada and the eastern USA. Some scientists, however, argue that the red wolf is just an isolated population of the Canadian species, which -- if true -- means that the US government hasn’t actually been saving a species from extinction, because thousands still reside just across the border. In his article Mr. Zimmer also notes that proper classification of microbes could allow public health workers to anticipate outbreaks of disease and prepare a response. 
So, the point here is that taxonomy is about more than just scientists arguing over which scheme they think best suits a given species; it has deep roots in our understanding of species relationships and in the protection of the natural world. Anyway, enough with the preamble – how do we actually put plants, animals and microbes into these groups? Taxonomic levels I have already mentioned that there are as many as 76 taxonomic levels that one could use to build a detailed classification. Seven taxa are, however, usually sufficient, and they are: Kingdom; Phylum (from the Greek phulon, meaning “race”); Class; Order; Family; Genus; and Species. Precisely how these are defined and allocated varies according to the type of organism (animal, plant, bacteria etc.) you’re trying to classify. In Linnaeus’ original scheme, objects were grouped into one of three Kingdoms: Animalia (animals); Vegetabilia (plants); or Mineralia (minerals) – hence the familiar “animal, vegetable or mineral?” expression. As our knowledge of the natural world grew (and keeps growing), taxonomists found that these three kingdoms weren’t sufficient to do justice to the enormous diversity of life on Earth. We now recognize six kingdoms: Plantae (plants); Animalia (animals); Fungi (fungi and moulds); Eubacteria (the bacteria – sometimes called Monera); Archaea (microbes similar to bacteria); and the Protista (something of a dumping ground for the mostly single-celled eukaryotic organisms that don’t fit into any of the aforementioned groups – sometimes called Protoctista). Despite some quite apparent differences between the two, a few textbooks merge the Eubacteria and Archaea into a single kingdom: the Prokaryota. Depending on the scheme you choose to follow (and they’re changing all the time!), the kingdoms break down roughly as follows:
* Plantae is divided into about 12 phyla and comprises about 270,000 species.
* Animalia is split into about 33 phyla and contains about 800,000 species (although this is probably a drastic underestimate of the true figure).
* Fungi have five phyla and about 100,000 species.
* Eubacteria have three phyla and a number of species that is difficult even to estimate – some authors suggest 1,000,000,000 (a billion) but even this could be a considerable underestimate!
* Archaea are poorly known and there are currently three main (and five tentative) phyla that have been created based largely on laboratory cultures (estimates of total phyla range from 18 to 23). The most recent list I can find (1999) contains 209 species.
* Protista comprise some 20 to 50 phyla and about 23,000+ species.
If we dig a little deeper and look at an example of a ‘standard’ classification, we can see how these taxa are arranged. In the structure below I have set out the currently accepted taxonomic scheme for that most infamous of all sharks: the Great White.
Kingdom: Animalia (mobile critters; have many cells; can’t make their own food)
Phylum: Chordata (flexible skeletal rod with accompanying nerves)
Class: Chondrichthyes (‘fish’ with a cartilaginous skeleton)
Order: Lamniformes (‘Mackerel’ sharks)
Family: Lamnidae (the white, mako and porbeagle sharks)
Genus: Carcharodon (from the Greek carcharos meaning “ragged” or “pointed” and odon meaning “tooth”)
Species: carcharias (Greek for “shark”)
Working down from the above, the scheme moves from a very broad taxon (i.e.
Animalia – thousands of species), to a slightly narrower one (the class containing just the cartilaginous fishes – almost 1,500 species), to a narrower one still (containing only ‘mackerel’ sharks – about 15 species) and so on down to the narrowest one (i.e. species – just one). Origin of a scientific name The scientific name given to an organism is usually based either on a description of it, the region in which the animal/plant is found, or the person describing it for the first time (or a combination of these). The name Myotis macrotarsus, for example, was given to a cave-roosting bat from the Philippines and translates roughly to “mouse-eared bat with big feet” – an apt description of the critter! Similarly, in our scheme above, the genus and species epithets combine to form a rather pertinent description of the Great White shark (pointed-toothed shark!). The Giant otter, on the other hand, was first described from Brazil and is given the scientific name Pteronura brasiliensis, while the South African Lantern shark, Etmopterus compagnoi, was named after taxonomist Professor Leonard Compagno (at the South African Museum). In a few instances, an organism may be given a scientific name that illustrates a particular behaviour – a good example of this can be found in fish called Scats. Scats are allocated the genus Scatophagus, from the Greek skatos meaning “dung” or “faeces” and phagein meaning “to eat”, after their penchant for eating monkey excrement that falls into the water. We have looked at several examples of how the scientific names have Latin/ancient Greek origins. It should be mentioned, however, that all taxa names -- not just species names -- have their roots in Greek or Latin and can also be roughly translated into English. Returning to our White shark scheme, for example, the order Lamniformes can be broken down into Lamni- (from the Greek lamna meaning “voracious fish”) and –formes (from the Latin forma, meaning “shape”) so that the sharks in this order are all of “voracious fish shape”. At this point, you might be wondering why we bother with Latin/Greek names at all. Why not just use English or any other of the 6,912 globally recognized languages? Latin was once widely used among Renaissance scholars throughout what is now Europe, allowing people in one country to effectively communicate with someone else in another country (much like English does today). Latin and ancient Greek, however, are both considered ‘dead’ languages, which means that they’re no longer learnt as a native tongue and are thus no longer evolving. To put it another way, the Latin word forma means “shape” today and meant “shape” a century ago – as you can imagine, this is not the case for many of the languages in use today, especially English. Above the species level, most taxa have standardized (formal and informal) suffixes, which help to clarify their position. For example, almost all families are formed by adding the ending -idae (animals) or -aceae (plants) to the stem of the genus name – e.g. a major genus within the dog family (Canidae) is Canis. The informal suffix for families is usually –id; this makes the informal name for the Canidae simply ‘canid’. Similar rules work for subfamilies (the formal -inae and the informal -ine), but it is rather more complicated for orders. Speaking and writing Latin names can often appear rather daunting, especially when it comes to trying to pronounce them.
For example, take Acrocephalus schoenobaenus (the Latin name of a bird called the Sedge warbler), Plectrophenax nivalis (Snow bunting, another small bird) or Mertensiella luschani (a salamander from the Aegean). The names get longer if we consider other groups, such as prehistoric animals: Archaeotherium, Carcharodontosaurus, Parasaurolophus, etc. How would you go about pronouncing those? The best advice is to break down and ‘sound out’ the words. So, for example, our sedge warbler would be broken down something like: Acro-ceph-alus scho-en-o-bae-nus. In Latin most of the vowels are short, rather than long, and ch is pronounced as a “k”, ae as “ee”, and ph as “f”. So, if you ‘sound out’ the aforementioned name it would be: Acro-cef-a-lus skoo-en-o-bee-nus. A little practice and you’ll soon pick it up! One final point to remember is that not everyone agrees exactly how Latin or ancient Greek words should be pronounced (George Hempl wrote about this at some length in 1898), so don’t be surprised if you hear others ‘correcting’ you, or pronouncing them differently (don’t take it personally!). When it comes to writing Latin names there are a couple of rules that should be followed. The first is that the genus is always capitalized (i.e. begins with a capital letter), while the specific name is not. So, in the case of our sedge warbler, the Latin name should always be Acrocephalus schoenobaenus and not Acrocephalus Schoenobaenus or acrocephalus schoenobaenus. Also, note that the scientific name should be italicized wherever possible and underlined where italics are not available (such as in handwritten documents). Finally, only the genus and species epithets should be italicized/underlined – the kingdom, phylum, order, family etc., should not be in italics (despite having the same Greek/Latin origin). Regulation of scientific names The ultimate goal of binomial nomenclature -- nomenclature being a set, or system, of names or terms -- is to remove the confusion that vernacular (common) names sometimes cause. Remember back to our example of the Peregrine falcon (known in the USA as the Duck hawk). Despite having two (indeed, several) common names, it only has one Latin name: Falco peregrinus (falco is Latin for “hawk”, while peregrinus is Latin for “wandering”). If you were to write “I saw a peregrine (Falco peregrinus) today” it should leave people (both in the UK and in the USA) in little doubt which bird you’re talking about! So, with this in mind, it becomes apparent that Latin names only work if each species has one -- and only one -- binomen. This is indeed the case: no two species can have the same scientific name – that is, the same combination of genus and species epithets (the species epithet on its own can be, and often is, shared by species in different genera). The task of governing the system for ensuring that every animal has a unique and universally accepted scientific (binomial) name falls to the International Commission on Zoological Nomenclature (ICZN). Founded in 1895, the ICZN now has 28 members spread across 20 countries and sees some 2000 new generic and 15,000 new specific names added to (or restored in) the zoological literature each year. The ICZN has the final say on whether or not a proposed scientific name should be uniformly accepted by the zoological community. Opinions of the ICZN are published in their own quarterly journal, the Bulletin of Zoological Nomenclature.
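Before moving on to the other nomenclature codes, it is worth noting that the writing conventions just described (genus capitalised, species epithet in lower case, the whole binomen italicised where possible) are mechanical enough to capture in a few lines of code. The short Python sketch below is purely illustrative and is not part of the article or of any nomenclature code; the function name and the use of HTML-style <i> tags as a stand-in for italics are my own assumptions.

# Illustrative only: applying the binomen-writing conventions described above.
# HTML <i> tags stand in for italics; in handwritten text you would underline instead.

def format_binomen(genus: str, species: str, italics: bool = True) -> str:
    """Return a two-part (binomial) name written the conventional way."""
    name = f"{genus.strip().capitalize()} {species.strip().lower()}"
    return f"<i>{name}</i>" if italics else name

if __name__ == "__main__":
    # A carelessly typed sedge warbler name comes out correctly formatted:
    print(format_binomen("acrocephalus", "Schoenobaenus"))      # <i>Acrocephalus schoenobaenus</i>
    print(format_binomen("FALCO", "PEREGRINUS", italics=False))  # Falco peregrinus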
The scientific names of all plants and fungi are regulated by two primary codes -- the International Code of Botanical Nomenclature and the International Code of Nomenclature of Cultivated Plants (published by the International Botanical Congress – IBC) -- while the naming of bacteria is governed by the International Code of Nomenclature of Bacteria (published by the International Committee on Systematics of Prokaryotes – ICSP). The classification of viruses is currently slightly different to other groups, but is overseen by the International Committee on Taxonomy of Viruses (ICTV). Overall, in order for a species to be accepted as distinct from any other, a formal description of it must be published in the scientific literature and a “type” (representative) specimen must be preserved (in a museum or university) so it can be used as a standard by which to compare other specimens. When considering which names to attribute to a species, it is the oldest valid (published) name that has priority – it is the overseeing authority’s (i.e. ICZN, IBC, ICTV or ICSP) job to settle any nomenclatural disputes. Cladistics: Ancestry, shared features and the task of classification The object of any good biological taxonomic system is that it represents what we currently know of the evolutionary relationships of its subjects. Most taxonomic schemes arrange organisms in terms of the shared characteristics that they possess: probably the most popular way of doing this is with cladistics (from the Greek klados, meaning “branch”). The basic objective of cladistics is to provide a scheme showing the most likely evolutionary pathway for a given group or species based on the characters that it shares with its relatives. The premise behind cladistics is delightfully simple: if the feature that you’re looking at is present in two different organisms then it is likely to have been inherited from their most recent common ancestor. That said, as the late elasmobranch biologist Aidan Martin noted in his article on the subject: not all features are equally useful when looking at ancestry. Features that abound among different organisms are retained because they suit a purpose, even though their owners may since have diverged from the common ancestor (Mr Martin referred to these as “evolutionary hangers on”). In the article, Aidan wrote: “… a two-opening gut (with a mouth at one end and a cloaca or anus at the other) is an ancestral character. Both you and a cockroach have a two-opening gut, but you would probably take offence if I were to suggest that you and a cockroach were closely related …” In effect, with cladistics we are looking for modifications of long-running characteristics; variations on a theme, if you like. Consequently, in order to undertake a cladistic analysis we must translate whatever it is we observe into discrete characters. The ability to translate traits into discrete units (i.e. present or absent; no in-betweens!) means that cladistics lends itself well to computer analysis. The language of taxonomy can be a little confusing and I will gloss over most of the terminology as it doesn’t concern us here. There are, nonetheless, a few ‘central terms’ that crop up a lot. When it comes to looking at traits, there are two main types: homoplasic and homologous. Homoplasic (not to be confused with homoplastic!) traits are those that bear no relationship to the relatedness of two individuals – they have remained because they suit the environment in which their owner lives.
So, for example, sharks and dolphins share a similar body form -- i.e. fusiform (torpedo-shaped) body, with similar-looking fin arrangements -- because this is best suited to an aquatic lifestyle; they’re not closely related (this is called “Convergent Evolution”). When taxonomists use the term “homology”, they’re talking about a similarity of traits in two or more species (or groups) that’s a result of them sharing a common ancestor at some time in the past. When thinking about homologies, there are two basic character ‘states’: “plesiomorphic” and “apomorphic” (or “derived”). When you’re comparing two organisms, they will invariably exhibit characters that are shared widely with other groups or species (these are the plesiomorphic, or “ancestral” traits) and others that are unique to them or their group (these are the apomorphic/derived traits). It is sometimes said that plesiomorphic/ancestral characters are “primitive”, while apomorphic/derived traits are “advanced” – most taxonomists shy away from these terms because they are easily misinterpreted. So, a trait that is present in lots of different species or groups (such as the twin-opening gut) is plesiomorphic and doesn’t give us any clues as to our species’ ancestry. Conversely, those features that are present only in an ancestor and its descendants are apomorphic and can be used to assess taxonomic relationships. Characters that are unique to a species (i.e. have arisen within the species and aren’t present in any ancestors) are referred to as “autapomorphic”. It is important to recognise that all these terms are relative; a character can be an apomorphy at one branch of your tree, but plesiomorphic at another. Feathers, for example, characterize (i.e. are apomorphic for) the group we call Aves (birds), but are plesiomorphic for peregrine falcons - in other words, feathers can be used to define the Aves, but not to define peregrine falcons (because all other birds have feathers too, so the trait isn't taxonomically unique to this species). In the above cladogram, I've used coloured dots to represent characters or traits present in a group of species. From the above we can see that dark blue dots indicate a synapomorphy because it arose in Species B and is shared by all of its descendants. Conversely, the pale blue dots represent a plesiomorphic trait because it is present in Species A but only in some of its descendants (it's missing in F, G and I). Traits that have arisen in a species and are unique to that species are called autapomorphies. Species D and E share more traits in common (i.e. more coloured dots) than any other pair, making them sister species. If we take Species B, D, E, F and H, we have an ancestor (B) and all of its descendants – that is, a clade – or, to put it another way, Group 1 is monophyletic. If we extend the red box to the left so that it includes Species A, but still leaves out C, G and I, then the group would be paraphyletic - in other words, the group contains an ancestor and some of its descendants. When a character is present in two (or more) species and originated in their most recent common ancestor, the feature is called a “synapomorphy”. Finally, a character that is shared by a number of groups or species and that originated in a distant ancestor (i.e. one older than the most recent common ancestor) is referred to as “symplesiomorphic”.
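To make this terminology a little more concrete, here is a small and purely illustrative Python sketch; the character matrix is invented (it is not taken from the article), and it simply counts how many derived character states -- potential synapomorphies -- each pair of taxa shares, which is the raw ingredient of the grouping step described in the next paragraph. Real cladistic analyses do far more than this, weighting characters and searching among many possible trees, so treat it strictly as a toy example.

# Toy example with an invented character matrix: 1 = derived state present, 0 = absent.
from itertools import combinations

characters = {
    "lamprey": (0, 0, 0, 0),
    "shark":   (1, 0, 0, 0),   # hypothetical character A (e.g. jaws)
    "salmon":  (1, 1, 0, 0),   # hypothetical character B (e.g. a bony skeleton)
    "lizard":  (1, 1, 1, 1),   # hypothetical characters C and D
}

def shared_derived(taxon_a: str, taxon_b: str) -> int:
    """Count the derived characters (state 1) shared by two taxa."""
    return sum(a == b == 1 for a, b in zip(characters[taxon_a], characters[taxon_b]))

scores = {pair: shared_derived(*pair) for pair in combinations(characters, 2)}
for pair, score in sorted(scores.items(), key=lambda item: -item[1]):
    print(pair, score)
# The best-scoring pair (salmon and lizard in this toy matrix) would be grouped first
# as sister taxa -- the same kind of grouping the cladogram discussion below builds on.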
When you have a group that, based on synapomorphies, contains the common ancestor and all species descended from it you have what taxonomists refer to as a “monophyletic” (meaning “one race”) group – these are also sometimes referred to as “natural groups” or “clades”. The opposite of this -- where you have a group that contains an ancestor and some of its descendants -- is a “paraphyletic” (“near race”) group. A third option is the “polyphyletic” group, which is based on homoplasy and doesn’t contain a common ancestor. So, in order to build our scheme, we need to identify the organisms in which novel characteristics first crop up (taxonomists call these “branching points”). You start out with a group of species and some data (genetic, anatomical, even behavioural) that characterizes them; you choose your characters/features and then you ‘weigh’ them in terms of how important you consider them to be (this is perhaps the most contentious step in the process and different taxonomists frequently disagree on which characters should be used and how important they are). Finally, you organize your subjects into groups on the basis of how many synapomorphies they possess. The end result is a graph (called a “cladogram”) that represents the distribution of the characters; from this we can start to establish possible evolutionary relationships. Ultimately, the more synapomorphies there are among two species or groups, the more recently they shared a common ancestor and thus the more closely related they are likely to be. If you find all of these groupings and terms mind boggling (and you're not alone!) just remember that there is a difference between describing something and defining it. Although the terms may appear superficially similar, they are actually crucially different and it's the difference that underpins our cladistic grouping. Returning to our peregrine falcon example, you might describe it (see photo above) as a medium-sized predatory bird with a mottled brown-to-grey back, white belly, flecked with brown and a bright yellow base to its beak. While someone else might be able to identify a peregrine based on this, does this really define what a peregrine is? The answer is no; there are several raptors with similar body colouration, and bright yellow bill-bases. So, in order to define what makes a peregrine a peregrine, we have to think about those features unique to it - those that aren't shared by any other creature. Only then can we say that the bird is a peregrine and not, say, a hobby (Falco subbuteo). Displaying taxonomic relationships graphically Diagrams represent a convenient method of expressing relatedness – in the case of taxonomic relationships they generally take the form of either a cladogram or a “phylogenetic tree”. Often, the terms cladogram and tree are used interchangeably -- not least because they share the same basic appearance -- but some taxonomists argue that they aren’t the same things at all. Effectively, whether you consider cladograms types of trees or not, the main difference between the two is that a cladogram doesn’t make a statement about evolutionary pathways (a tree does); instead, all it shows is the distribution of your chosen characters Cladograms A cladogram is a branched diagram that shows patterns of relatedness; they look similar to a family tree turned on its side (sometimes you’ll see it displayed vertically) and are read left-to-right (or bottom to top). In the example below, A represents the common ancestor of B, C & D. 
If you group A, B, C and D together they form a monophyletic clade (i.e. the group contains all descendants of a common ancestor). B and C share more synapomorphies than either species does with D, making them “sister taxa” (i.e. they are more closely related to each other than anything else). In terms of descriptive terminology for cladograms, the first line (connecting A to the main graph) is referred to as the “trunk” (of the tree) and each point where the line splits in two is called a node; the lines themselves are referred to as “lineages”. You could be forgiven for thinking that, looking at the above, cladograms infer evolutionary relationships: surely the example above implies that B, C and D evolved from A? Well, actually no! In most cases, there are many different ‘pathways’ that can lead to an observed pattern of relatedness (e.g. convergence); the fact that A and B share a character doesn’t mean that B necessarily inherited it from A. All we’re seeing above is the probability of relationship – in other words, how likely it is that B and C are more closely related to each other than to a third party (D). By this point, if you’re still with me, you may have noticed that if cladograms are created on the basis of the chosen characters and their ‘weighting’ (i.e. importance), then changing the weighting would result in a different graph being produced. You’d be correct. Consequently, taxonomists divide (here we go again!) cladograms into two groups: those that require only the minimum number of ‘steps’ -- i.e. gains, losses or modifications of a character -- necessary to explain the distribution of a character (these are the “parsimonious” or “optimal” cladograms) and those that require more steps (the “suboptimal” cladograms). In essence, the most parsimonious cladogram is the simplest, having the fewest ‘steps’ in it. The potential for different characters and weighting to alter the end result, however, means that the most parsimonious graph is not necessarily always the best choice. In the end, only when several analyses using different sets of data point in the same direction can you be relatively sure that any resulting tree paints an accurate picture of the evolution of your chosen group or species. Phylogenetic Trees Phylogenetic trees are branching diagrams -- possibly a type of cladogram, depending on your view! -- that represent possible evolutionary pathways. The trees have branches, the length of which is proportional to the predicted (or hypothesised) time between the divergence of the organisms, groups or sequences (depending what you’re looking at). The diagram on the left shows a basic cladogram, while that on the right presents one of 12 possible phylogenetic trees that can be built from the cladogram data. The graduated bar next to the tree can have various units, including time and base pairs (for genetic divergence). X and Z represent additional (possibly yet-to-be-discovered) species. The example above shows a cladogram (left) and one of the 12 possible phylogenetic trees that can be generated based on it. The cladogram shows that the lizard and salmon share more inherited traits (synapomorphies) than either does with the shark or lamprey – as a group, the lizard and salmon have more in common with the shark than they do with the lamprey. The tree suggests that a hypothetical ancestor (Z) gave rise to the lamprey and to the shark; the scheme then goes on to imply that a hypothetical descendent of the shark (X) gave rise to the salmon and the lizard. 
The bar down the left-hand side of the tree signifies when this is hypothesised to have happened (usually based on molecular data). The origin of species Following our trees to the end (their so-called “terminal taxa”) leaves us with that which we call a “species”; but what is a species, exactly? This is perhaps one of the most contentious questions in taxonomy. You’ve probably heard the term “species” used with an air of certainty, but we still don’t have an infallible definition of what makes something a species. The problem lies largely in our attempt to, as Charles Darwin put it, “define the indefinable”. The processes of evolution and speciation (the formation of new species) are continuous ones, which make it difficult to group the results – this explains why there are currently some 26 proposed definitions (concepts) of what a species is. Perhaps the most well-known definition is the Biological Species Concept (BSC). The biological species concept proposes that two individuals (or groups) should be considered distinct species if they are no longer able to mate with each other and produce fertile offspring. To put it another way, under the BSC a species is a group of individuals that freely interbreed with each other under ‘natural conditions’ (another sticking point!) to produce offspring that can reproduce for themselves. Some argue that this definition is weakened by animals such as ligers and tigrons (lion and tiger hybrids). If a male lion mates with a female tiger, the resulting liger can be fertile; however, male ligers are sterile and so further liger-liger matings couldn’t result in fertilization (although a female liger was successfully mated with a male lion). Arguably, such cases could be overlooked because the two species are allopatric (i.e. they don’t live in the same regions), so matings in the wild are very unlikely to occur – nonetheless, there are reports of female tigers mating with lions. Similarly, a mule (a horse-donkey hybrid) can sometimes be fertile, as can some other hybrids. The bigger problem with the BSC is what to do with animals like sponges, planarians and some echinoderms that can reproduce asexually (and with entirely asexual species). Despite these issues, it is fair to say that the BSC works well for most animals. In a bid to address some of the gaps in the BSC, many other species concepts have been proposed: there are currently about 26 different published concepts! Each concept tries to provide an all-inclusive definition of what it means to be a species, but none are without their problems. In terms of practicality, some biologists lean towards the General Lineage Concept (GLC). The GLC states that as different lineages evolve and diverge their genotype (genetic make-up) and phenotype (physical appearance) change to the point where, eventually, you can assign an animal to one species or the other. So, in essence the GLC and BSC aren’t all that different. The GLC is saying that species are lineages that retain their integrity -- with respect to other lineages -- over time and space (i.e. they don’t merge -- interbreed -- with each other), while the BSC states that species form when populations become reproductively isolated from each other. The advent of molecular and genetic techniques has greatly enhanced our ability to assess what constitutes a species and untangle how that species fits in next to all the others. Molecular and genetic typing has seen to it that we are no longer restricted to basing our interpretations simply on how an organism looks.
Consequently, perhaps the biggest ‘rival’ to the BSC is now the Phylogenetic Species Concept (PSC), which does away with sex altogether. The PSC centres on monophyly; it states that related organisms share characters because they share a common ancestor. You start with large groups and (based on synapomorphies – sensu Niles Eldredge and Joel Cracraft) split them up into ever smaller ones until you arrive at a group that can be split no further: according to the PSC, this is a species. Some critics argue that the PSC leads to an ‘over-splitting’ of species, although as Carl Zimmer points out in his article, many think that we should just go where the data lead us rather than worrying about the number of species we end up with. In the end, it seems that the best option is to consider as many lines of evidence as possible (ideally incorporating genetic data) when considering whether the critter you’re looking at is a species in its own right. When we consider behavioural, genetic and ecological evidence, some argue that we are in a good position to classify even the most difficult of organisms: the microbes. The jury is still very much out on the best way to proceed when it comes to defining a species, but the molecular and genetic tools at our disposal will no doubt play an increasingly large role in subsequent hypotheses. Moving the goalposts Those who do their best to follow the rather tumultuous world of taxonomy can often become confused and frustrated when species are re-classified, especially if this happens several times in a short period. A good example of this is the taxonomic history of the Sandtiger shark (Carcharias taurus), which Aidan Martin reviewed in an article on his site. The point to remember is that organisms aren’t re-classified capriciously or whimsically – any reassignments come about as a result of new evidence. Hopefully, as Science forges ahead it will allow taxonomists to get a better handle on the interrelationships of plants, fungi, animals and microorganisms, and changes, while almost inevitable, will occur less frequently. In the meantime, as Aidan put it: “Nature is messy; Science is tentative; as long as these truths remain relevant to biological research, scientific names will continue to be revised.”
i don't know
Which organ is responsible for regulating the blood sugar level?
Normal Regulation of Blood Glucose - The Important Roles of Insulin and Glucagon: Diabetes and Hypoglycemia. Written by James Norman MD, FACS, FACE. The human body wants blood glucose (blood sugar) maintained in a very narrow range. Insulin and glucagon are the hormones which make this happen. Both insulin and glucagon are secreted from the pancreas, and thus are referred to as pancreatic endocrine hormones. The picture on the left shows the intimate relationship both insulin and glucagon have to each other. Note that the pancreas serves as the central player in this scheme. It is the production of insulin and glucagon by the pancreas which ultimately determines if a patient has diabetes, hypoglycemia, or some other sugar problem. Insulin Basics: How Insulin Helps Control Blood Glucose Levels Insulin and glucagon are hormones secreted by islet cells within the pancreas. They are both secreted in response to blood sugar levels, but in opposite fashion! Insulin is normally secreted by the beta cells (a type of islet cell) of the pancreas. The stimulus for insulin secretion is a HIGH blood glucose...it's as simple as that! Although there is always a low level of insulin secreted by the pancreas, the amount secreted into the blood increases as the blood glucose rises. Similarly, as blood glucose falls, the amount of insulin secreted by the pancreatic islets goes down. As can be seen in the picture, insulin has an effect on a number of cells, including muscle, red blood cells, and fat cells. In response to insulin, these cells absorb glucose out of the blood, having the net effect of lowering the high blood glucose levels into the normal range. Glucagon is secreted by the alpha cells of the pancreatic islets in much the same manner as insulin...except in the opposite direction. If blood glucose is high, then no glucagon is secreted. When blood glucose goes LOW, however (such as between meals and during exercise), more and more glucagon is secreted. Like insulin, glucagon has an effect on many cells of the body, but most notably the liver. The Role of Glucagon in Blood Glucose Control The effect of glucagon is to make the liver release the glucose it has stored in its cells into the bloodstream, with the net effect of increasing blood glucose. Glucagon also induces the liver (and some other cells such as muscle) to make glucose out of building blocks obtained from other nutrients found in the body (e.g., protein). Our bodies desire blood glucose to be maintained between 70 mg/dl and 110 mg/dl (mg/dl means milligrams of glucose in 100 milliliters of blood). Below 70 is termed "hypoglycemia." Above 110 can be normal if you have eaten within 2 to 3 hours. That is why your doctor wants to measure your blood glucose while you are fasting...it should be between 70 and 110. Even after you have eaten, however, your glucose should be below 180. Above 180 is termed "hyperglycemia" (which translates to mean "too much glucose in the blood"). If two of your blood sugar measurements are above 200 after drinking a sugar-water drink (a glucose tolerance test), then you are diagnosed with diabetes. Updated on: 03/02/16
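As a purely illustrative aside (not part of the article above, and certainly not diagnostic software), the ranges quoted there -- roughly 70 to 110 mg/dl when fasting, below 70 counting as hypoglycemia, and 180 mg/dl or more after a meal counting as hyperglycemia -- can be expressed as a tiny Python helper. The function name and structure are my own assumptions.

# Illustrative only: labelling a single reading using the article's quoted ranges.

def classify_glucose(mg_per_dl: float, fasting: bool = True) -> str:
    if mg_per_dl < 70:
        return "hypoglycemia (below 70 mg/dl)"
    if fasting:
        return "normal fasting range (70-110 mg/dl)" if mg_per_dl <= 110 else "above the normal fasting range"
    # After eating, the article says glucose should still be below 180 mg/dl
    return "normal post-meal range" if mg_per_dl < 180 else "hyperglycemia (180 mg/dl or above)"

for reading, fasting in [(65, True), (95, True), (150, False), (210, False)]:
    state = "fasting" if fasting else "post-meal"
    print(f"{reading} mg/dl ({state}) -> {classify_glucose(reading, fasting)}")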
Pancreas
What is the scientific name for the human 'tail'?
What Organ Regulates the Amount of Glucose in the Bloodstream? The pancreas influences how your body uses glucose. Glucose in the bloodstream provides the primary fuel for all body tissues. Blood glucose levels are highest during the digestive period after a meal. Your blood sugar is lowest when the stomach and intestines are empty. Under normal circumstances, the body tightly controls the amount of insulin in your blood. An organ called the pancreas, which is tucked behind the stomach, releases the hormones insulin and glucagon to regulate blood sugar levels. Pancreas 101 Blood sugar regulation is crucial because high and low blood glucose can cause health problems. The pancreas is an elongated organ, wide on one end and slender on the other, and measures about 25 centimeters in length. It has dual functions: it releases digestive enzymes, which play a role in digestion, and it secretes hormones. Prevents High Blood Glucose Insulin plays an integral role in preventing high blood sugar. After you eat a meal and your blood-glucose rises, your pancreas senses your blood-sugar level. When the glucose in your bloodstream becomes high, the pancreas releases insulin into your bloodstream. Small clumps of pancreatic cells, called the 'islets of Langerhans', manufacture insulin. Once the insulin is in your bloodstream, it allows your cells to absorb and use glucose as a fuel source. Mediates Low Blood Sugar When you consume more carbohydrate than your body needs at the time, your body stores the extra glucose as glycogen in the liver. The pancreas continuously monitors your blood sugar levels. When glucose is low, the pancreas releases the hormone glucagon. The glucagon triggers the liver to break down glycogen and convert it back to glucose. The stored glucose enters the bloodstream and raises blood-glucose levels. This allows the body to keep blood sugar levels stable in between meals. Blood Glucose Problems In some cases, the pancreas is unable to produce enough insulin, which causes blood sugar to remain high. This is what happens in Type 1 diabetes. In Type 2 diabetes, the pancreas may produce insulin, but the tissues may become less sensitive in responding. This causes the pancreas to release more insulin, which can lead to a vicious cycle. The pancreas can over-react and release a large amount of insulin in relation to the amount of glucose in the blood.
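The negative-feedback loop described above -- insulin released when glucose is high, glucagon and glycogen breakdown when it is low -- can be caricatured in a few lines of code. The sketch below is my own toy construction rather than anything from the article: the rate constants are arbitrary, and it ignores real physiology such as digestion, hormone kinetics and insulin resistance.

# Toy negative-feedback sketch of blood-glucose regulation (arbitrary constants).

def simulate_glucose(start: float, steps: int = 10, low: float = 70.0, high: float = 110.0) -> list:
    glucose, history = start, [start]
    for _ in range(steps):
        if glucose > high:                       # pancreas secretes insulin...
            glucose -= 0.3 * (glucose - high)    # ...cells absorb glucose, so the level falls
        elif glucose < low:                      # pancreas secretes glucagon...
            glucose += 0.3 * (low - glucose)     # ...liver releases stored glucose, so the level rises
        history.append(round(glucose, 1))
    return history

print(simulate_glucose(180.0))  # a high reading drifts back down towards the normal range
print(simulate_glucose(50.0))   # a low reading drifts back up towards the normal range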
i don't know
When might a person show rapid eye movement (REM)?
Understanding Dreams and REM Sleep (Updated August 02, 2016) What are Dreams? Dreams happen during the rapid eye movement (REM) stage of sleep. In a typical night, you dream for a total of 2 hours, broken up by the sleep cycle. Researchers do not know much about how we dream or why. They do know that newborns dream and that depriving rats of REM sleep greatly shortens their lives. Other mammals and birds also have REM sleep stages, but cold-blooded animals such as turtles, lizards and fish do not. REM Sleep and Dreaming REM sleep usually begins after a period of deep sleep known as stage 4 sleep. An area of the brain called the pons--where REM sleep signals originate--shuts off signals to the spinal cord. That causes the body to be immobile during REM sleep. When the pons doesn't shut down the spinal cord's signals, people will act out their dreams. This can be dangerous because acting out dreams without input from the senses can lead a person to run into walls, fall down stairs or worse. This condition is rare, is different from the more common sleepwalking, and is known as “REM sleep behavior disorder.” The pons also sends signals to the cerebral cortex by way of the thalamus (which is a filter and relay for sensory information and motor control functions deep in the brain). The cerebral cortex is the part of the brain involved with processing information (learning, thinking and organizing). The areas of the brain “turned on” during REM sleep seem to help learning and memory. Infants spend almost 50 percent of their sleep time in REM sleep (compared to 20 percent for adults), which may be explained by the tremendous amount of learning in infancy. If people are taught various skills and then deprived of REM sleep, they often cannot remember what they were taught. The Meaning of Dreams Dreams may be one way that the brain consolidates memories. The dream time could be a period when the brain can reorganize and review the day’s events and connect new experiences to older ones. Because the body is shut down, the brain can do this without additional input coming in or risking the body “acting out” the day’s memories. Some researchers believe that dreams are more like a background “noise” that is interpreted and organized. This theory states that dreams are merely the brain’s attempt to make sense of random signals occurring during sleep. Some people have more control over their dreams than others. For these people, the last thoughts before going to bed may influence the content of a dream. Of course, psychologists and most people look for greater meaning and insight in dreams. Here are some common dreams with interpretations:
Falling: Dreams of falling are said to indicate insecurity. Freud thought dreams of falling meant the contemplation of giving in to a sexual urge.
Flying: Dreams of flying are said to indicate feeling in control or 'on top of' a situation.
The Naked Dream: Dreams of being naked are said to indicate that you are ashamed of something or have something to hide.
Personally, these interpretations feel a bit too pop psych to me. I think by engaging with your dreams and thinking about them you can determine what meaning might be conveyed for your life. (I keep having a dream about forgetting to wear socks – please leave comments if you have any insight). You can develop your ability to remember your dreams by keeping a journal near your bed and writing down everything you can about your dreams when you first wake up.
After a few weeks, your ability to remember your dreams will improve. Some people claim that they have lucid dreams, which are dreams in which they can participate and change the dream as it develops. Lucid dreaming can be triggered through a number of techniques, though little research and lots of speculation has been done on it.
during sleep
Which organ removes excess water from the blood?
Brain Basics: Understanding Sleep (National Institute of Neurological Disorders and Stroke) Do you ever feel sleepy or "zone out" during the day? Do you find it hard to wake up on Monday mornings? If so, you are familiar with the powerful need for sleep. However, you may not realize that sleep is as essential for your well-being as food and water. Sleep: A Dynamic Activity Until the 1950s, most people thought of sleep as a passive, dormant part of our daily lives. We now know that our brains are very active during sleep. Moreover, sleep affects our daily functioning and our physical and mental health in many ways that we are just beginning to understand. Nerve-signaling chemicals called neurotransmitters control whether we are asleep or awake by acting on different groups of nerve cells, or neurons, in the brain. Neurons in the brainstem, which connects the brain with the spinal cord, produce neurotransmitters such as serotonin and norepinephrine that keep some parts of the brain active while we are awake. Other neurons at the base of the brain begin signaling when we fall asleep. These neurons appear to "switch off" the signals that keep us awake. Research also suggests that a chemical called adenosine builds up in our blood while we are awake and causes drowsiness. This chemical gradually breaks down while we sleep. During sleep, we usually pass through five phases of sleep: stages 1, 2, 3, 4, and REM (rapid eye movement) sleep. These stages progress in a cycle from stage 1 to REM sleep, then the cycle starts over again with stage 1 (see figure 1). We spend almost 50 percent of our total sleep time in stage 2 sleep, about 20 percent in REM sleep, and the remaining 30 percent in the other stages. Infants, by contrast, spend about half of their sleep time in REM sleep. During stage 1, which is light sleep, we drift in and out of sleep and can be awakened easily. Our eyes move very slowly and muscle activity slows. People awakened from stage 1 sleep often remember fragmented visual images. Many also experience sudden muscle contractions called hypnic myoclonia, often preceded by a sensation of starting to fall. These sudden movements are similar to the "jump" we make when startled. When we enter stage 2 sleep, our eye movements stop and our brain waves (fluctuations of electrical activity that can be measured by electrodes) become slower, with occasional bursts of rapid waves called sleep spindles. In stage 3, extremely slow brain waves called delta waves begin to appear, interspersed with smaller, faster waves. By stage 4, the brain produces delta waves almost exclusively. It is very difficult to wake someone during stages 3 and 4, which together are called deep sleep. There is no eye movement or muscle activity. People awakened during deep sleep do not adjust immediately and often feel groggy and disoriented for several minutes after they wake up. Some children experience bedwetting, night terrors, or sleepwalking during deep sleep. When we switch into REM sleep, our breathing becomes more rapid, irregular, and shallow, our eyes jerk rapidly in various directions, and our limb muscles become temporarily paralyzed. Our heart rate increases, our blood pressure rises, and males develop penile erections. When people awaken during REM sleep, they often describe bizarre and illogical tales – dreams.
The first REM sleep period usually occurs about 70 to 90 minutes after we fall asleep. A complete sleep cycle takes 90 to 110 minutes on average. The first sleep cycles each night contain relatively short REM periods and long periods of deep sleep. As the night progresses, REM sleep periods increase in length while deep sleep decreases. By morning, people spend nearly all their sleep time in stages 1, 2, and REM. People awakened after sleeping more than a few minutes are usually unable to recall the last few minutes before they fell asleep. This sleep-related form of amnesia is the reason people often forget telephone calls or conversations they've had in the middle of the night. It also explains why we often do not remember our alarms ringing in the morning if we go right back to sleep after turning them off. Since sleep and wakefulness are influenced by different neurotransmitter signals in the brain, foods and medicines that change the balance of these signals affect whether we feel alert or drowsy and how well we sleep. Caffeinated drinks such as coffee and drugs such as diet pills and decongestants stimulate some parts of the brain and can cause insomnia,or an inability to sleep. Many antidepressants suppress REM sleep. Heavy smokers often sleep very lightly and have reduced amounts of REM sleep. They also tend to wake up after 3 or 4 hours of sleep due to nicotine withdrawal. Many people who suffer from insomnia try to solve the problem with alcohol – the so-called night cap. While alcohol does help people fall into light sleep, it also robs them of REM and the deeper, more restorative stages of sleep. Instead, it keeps them in the lighter stages of sleep, from which they can be awakened easily. People lose some of the ability to regulate their body temperature during REM, so abnormally hot or cold temperatures in the environment can disrupt this stage of sleep. If our REM sleep is disrupted one night, our bodies don't follow the normal sleep cycle progression the next time we doze off. Instead, we often slip directly into REM sleep and go through extended periods of REM until we "catch up" on this stage of sleep. People who are under anesthesia or in a coma are often said to be asleep. However, people in these conditions cannot be awakened and do not produce the complex, active brain wave patterns seen in normal sleep. Instead, their brain waves are very slow and weak, sometimes all but undetectable. Top How Much Sleep Do We Need? The amount of sleep each person needs depends on many factors, including age. Infants generally require about 16 hours a day, while teenagers need about 9 hours on average. For most adults, 7 to 8 hours a night appears to be the best amount of sleep. Women in the first 3 months of pregnancy often need several more hours of sleep than usual. The amount of sleep a person needs also increases if he or she has been deprived of sleep in previous days. Getting too little sleep creates a "sleep debt," which is much like being overdrawn at a bank. Eventually, your body will demand that the debt be repaid. We don't seem to adapt to getting less sleep than we need; while we may get used to a sleep-depriving schedule, our judgment, reaction time, and other functions are still impaired. People tend to sleep more lightly and for shorter time spans as they get older, although they generally need about the same amount of sleep as they needed in early adulthood. 
About half of all people over 65 have frequent sleeping problems, such as insomnia, and deep sleep stages in many elderly people often become very short or stop completely. This change may be a normal part of aging, or it may result from medical problems that are common in elderly people and from the medications and other treatments for those problems. Experts say that if you feel drowsy during the day, even during boring activities, you haven't had enough sleep. If you routinely fall asleep within 5 minutes of lying down, you probably have severe sleep deprivation, possibly even a sleep disorder. Microsleeps, or very brief episodes of sleep in an otherwise awake person, are another mark of sleep deprivation. In many cases, people are not aware that they are experiencing microsleeps. The widespread practice of "burning the candle at both ends" in western industrialized societies has created so much sleep deprivation that what is really abnormal sleepiness is now almost the norm. Many studies make it clear that sleep deprivation is dangerous. Sleep-deprived people who are tested by using a driving simulator or by performing a hand-eye coordination task perform as badly as or worse than those who are intoxicated. Sleep deprivation also magnifies alcohol's effects on the body, so a fatigued person who drinks will become much more impaired than someone who is well-rested. Driver fatigue is responsible for an estimated 100,000 motor vehicle accidents and 1500 deaths each year, according to the National Highway Traffic Safety Administration. Since drowsiness is the brain's last step before falling asleep, driving while drowsy can – and often does – lead to disaster. Caffeine and other stimulants cannot overcome the effects of severe sleep deprivation. The National Sleep Foundation says that if you have trouble keeping your eyes focused, if you can't stop yawning, or if you can't remember driving the last few miles, you are probably too drowsy to drive safely. Top What Does Sleep Do For Us? Although scientists are still trying to learn exactly why people need sleep, animal studies show that sleep is necessary for survival. For example, while rats normally live for two to three years, those deprived of REM sleep survive only about 5 weeks on average, and rats deprived of all sleep stages live only about 3 weeks. Sleep-deprived rats also develop abnormally low body temperatures and sores on their tail and paws. The sores may develop because the rats' immune systems become impaired. Some studies suggest that sleep deprivation affects the immune system in detrimental ways. Sleep appears necessary for our nervous systems to work properly. Too little sleep leaves us drowsy and unable to concentrate the next day. It also leads to impaired memory and physical performance and reduced ability to carry out math calculations. If sleep deprivation continues, hallucinations and mood swings may develop. Some experts believe sleep gives neurons used while we are awake a chance to shut down and repair themselves. Without sleep, neurons may become so depleted in energy or so polluted with byproducts of normal cellular activities that they begin to malfunction. Sleep also may give the brain a chance to exercise important neuronal connections that might otherwise deteriorate from lack of activity. Deep sleep coincides with the release of growth hormone in children and young adults. Many of the body's cells also show increased production and reduced breakdown of proteins during deep sleep. 
Since proteins are the building blocks needed for cell growth and for repair of damage from factors like stress and ultraviolet rays, deep sleep may truly be "beauty sleep." Activity in parts of the brain that control emotions, decision-making processes, and social interactions is drastically reduced during deep sleep, suggesting that this type of sleep may help people maintain optimal emotional and social functioning while they are awake. A study in rats also showed that certain nerve-signaling patterns which the rats generated during the day were repeated during deep sleep. This pattern repetition may help encode memories and improve learning. Dreaming and REM Sleep We typically spend more than 2 hours each night dreaming. Scientists do not know much about how or why we dream. Sigmund Freud, who greatly influenced the field of psychology, believed dreaming was a "safety valve" for unconscious desires. Only after 1953, when researchers first described REM in sleeping infants, did scientists begin to carefully study sleep and dreaming. They soon realized that the strange, illogical experiences we call dreams almost always occur during REM sleep. While most mammals and birds show signs of REM sleep, reptiles and other cold-blooded animals do not. REM sleep begins with signals from an area at the base of the brain called the pons (see figure 2). These signals travel to a brain region called the thalamus, which relays them to the cerebral cortex – the outer layer of the brain that is responsible for learning, thinking, and organizing information. The pons also sends signals that shut off neurons in the spinal cord, causing temporary paralysis of the limb muscles. If something interferes with this paralysis, people will begin to physically "act out" their dreams – a rare, dangerous problem called REM sleep behavior disorder. A person dreaming about a ball game, for example, may run headlong into furniture or blindly strike someone sleeping nearby while trying to catch a ball in the dream. REM sleep stimulates the brain regions used in learning. This may be important for normal brain development during infancy, which would explain why infants spend much more time in REM sleep than adults (see Sleep: A Dynamic Activity). Like deep sleep, REM sleep is associated with increased production of proteins. One study found that REM sleep affects learning of certain mental skills. People taught a skill and then deprived of non-REM sleep could recall what they had learned after sleeping, while people deprived of REM sleep could not. Some scientists believe dreams are the cortex's attempt to find meaning in the random signals that it receives during REM sleep. The cortex is the part of the brain that interprets and organizes information from the environment during consciousness. It may be that, given random signals from the pons during REM sleep, the cortex tries to interpret these signals as well, creating a "story" out of fragmented brain activity. Sleep and Circadian Rhythms Circadian rhythms are regular changes in mental and physical characteristics that occur in the course of a day (circadian is Latin for "around a day"). Most circadian rhythms are controlled by the body's biological "clock." This clock, called the suprachiasmatic nucleus or SCN (see figure 2), is actually a pair of pinhead-sized brain structures that together contain about 20,000 neurons. The SCN rests in a part of the brain called the hypothalamus, just above the point where the optic nerves cross.
Light that reaches photoreceptors in the retina (a tissue at the back of the eye) creates signals that travel along the optic nerve to the SCN. Signals from the SCN travel to several brain regions, including the pineal gland, which responds to light-induced signals by switching off production of the hormone melatonin. The body's level of melatonin normally increases after darkness falls, making people feel drowsy. The SCN also governs functions that are synchronized with the sleep/wake cycle, including body temperature, hormone secretion, urine production, and changes in blood pressure. By depriving people of light and other external time cues, scientists have learned that most people's biological clocks work on a 25-hour cycle rather than a 24-hour one. But because sunlight or other bright lights can reset the SCN, our biological cycles normally follow the 24-hour cycle of the sun, rather than our innate cycle. Circadian rhythms can be affected to some degree by almost any kind of external time cue, such as the beeping of your alarm clock, the clatter of a garbage truck, or the timing of your meals. Scientists call external time cues zeitgebers (German for "time givers"). When travelers pass from one time zone to another, they suffer from disrupted circadian rhythms, an uncomfortable feeling known as jet lag. For instance, if you travel from California to New York, you "lose" 3 hours according to your body's clock. You will feel tired when the alarm rings at 8 a.m. the next morning because, according to your body's clock, it is still 5 a.m. It usually takes several days for your body's cycles to adjust to the new time. To reduce the effects of jet lag, some doctors try to manipulate the biological clock with a technique called light therapy. They expose people to special lights, many times brighter than ordinary household light, for several hours near the time the subjects want to wake up. This helps them reset their biological clocks and adjust to a new time zone. Symptoms much like jet lag are common in people who work nights or who perform shift work. Because these people's work schedules are at odds with powerful sleep-regulating cues like sunlight, they often become uncontrollably drowsy during work, and they may suffer insomnia or other problems when they try to sleep. Shift workers have an increased risk of heart problems, digestive disturbances, and emotional and mental problems, all of which may be related to their sleeping problems. The number and severity of workplace accidents also tend to increase during the night shift. Major industrial accidents attributed partly to errors made by fatigued night-shift workers include the Exxon Valdez oil spill and the Three Mile Island and Chernobyl nuclear power plant accidents. One study also found that medical interns working on the night shift are twice as likely as others to misinterpret hospital test records, which could endanger their patients. It may be possible to reduce shift-related fatigue by using bright lights in the workplace, minimizing shift changes, and taking scheduled naps. Many people with total blindness experience life-long sleeping problems because their retinas are unable to detect light. These people have a kind of permanent jet lag and periodic insomnia because their circadian rhythms follow their innate cycle rather than a 24-hour one. Daily supplements of melatonin may improve night-time sleep for such patients. 
However, since the high doses of melatonin found in most supplements can build up in the body, long-term use of this substance may create new problems. Because the potential side effects of melatonin supplements are still largely unknown, most experts discourage melatonin use by the general public. Sleep and Disease Sleep and sleep-related problems play a role in a large number of human disorders and affect almost every field of medicine. For example, problems like stroke and asthma attacks tend to occur more frequently during the night and early morning, perhaps due to changes in hormones, heart rate, and other characteristics associated with sleep. Sleep also affects some kinds of epilepsy in complex ways. REM sleep seems to help prevent seizures that begin in one part of the brain from spreading to other brain regions, while deep sleep may promote the spread of these seizures. Sleep deprivation also triggers seizures in people with some types of epilepsy. Neurons that control sleep interact closely with the immune system. As anyone who has had the flu knows, infectious diseases tend to make us feel sleepy. This probably happens because cytokines, chemicals our immune systems produce while fighting an infection, are powerful sleep-inducing chemicals. Sleep may help the body conserve energy and other resources that the immune system needs to mount an attack. Sleeping problems occur in almost all people with mental disorders, including those with depression and schizophrenia. People with depression, for example, often awaken in the early hours of the morning and find themselves unable to get back to sleep. The amount of sleep a person gets also strongly influences the symptoms of mental disorders. Sleep deprivation is an effective therapy for people with certain types of depression, while it can actually cause depression in other people. Extreme sleep deprivation can lead to a seemingly psychotic state of paranoia and hallucinations in otherwise healthy people, and disrupted sleep can trigger episodes of mania (agitation and hyperactivity) in people with manic depression. Sleeping problems are common in many other disorders as well, including Alzheimer's disease, stroke, cancer, and head injury. These sleeping problems may arise from changes in the brain regions and neurotransmitters that control sleep, or from the drugs used to control symptoms of other disorders. In patients who are hospitalized or who receive round-the-clock care, treatment schedules or hospital routines also may disrupt sleep. The old joke about a patient being awakened by a nurse so he could take a sleeping pill contains a grain of truth. Once sleeping problems develop, they can add to a person's impairment and cause confusion, frustration, or depression. Patients who are unable to sleep also notice pain more and may increase their requests for pain medication. Better management of sleeping problems in people who have other disorders could improve these patients' health and quality of life. Sleep Disorders At least 40 million Americans suffer from chronic, long-term sleep disorders each year, and an additional 20 million experience occasional sleeping problems. These disorders and the resulting sleep deprivation interfere with work, driving, and social activities. They also account for an estimated $16 billion in medical costs each year, while the indirect costs due to lost productivity and other factors are probably much greater.
Doctors have described more than 70 sleep disorders, most of which can be managed effectively once they are correctly diagnosed. The most common sleep disorders include insomnia, sleep apnea, restless legs syndrome, and narcolepsy. Insomnia Almost everyone occasionally suffers from short-term insomnia. This problem can result from stress, jet lag, diet, or many other factors. Insomnia almost always affects job performance and well-being the next day. About 60 million Americans a year have insomnia frequently or for extended periods of time, which leads to even more serious sleep deficits. Insomnia tends to increase with age and affects about 40 percent of women and 30 percent of men. It is often the major disabling symptom of an underlying medical disorder. For short-term insomnia, doctors may prescribe sleeping pills. Most sleeping pills stop working after several weeks of nightly use, however, and long-term use can actually interfere with good sleep. Mild insomnia often can be prevented or cured by practicing good sleep habits (see "Tips for a Good Night's Sleep"). For more serious cases of insomnia, researchers are experimenting with light therapy and other ways to alter circadian cycles. Sleep Apnea Sleep apnea is a disorder of interrupted breathing during sleep. It usually occurs in association with fat buildup or loss of muscle tone with aging. These changes allow the windpipe to collapse during breathing when muscles relax during sleep (see figure 3). This problem, called obstructive sleep apnea, is usually associated with loud snoring (though not everyone who snores has this disorder). Sleep apnea also can occur if the neurons that control breathing malfunction during sleep. During an episode of obstructive apnea, the person's effort to inhale air creates suction that collapses the windpipe. This blocks the air flow for 10 seconds to a minute while the sleeping person struggles to breathe. When the person's blood oxygen level falls, the brain responds by awakening the person enough to tighten the upper airway muscles and open the windpipe. The person may snort or gasp, then resume snoring. This cycle may be repeated hundreds of times a night. The frequent awakenings that sleep apnea patients experience leave them continually sleepy and may lead to personality changes such as irritability or depression. Sleep apnea also deprives the person of oxygen, which can lead to morning headaches, a loss of interest in sex, or a decline in mental functioning. It also is linked to high blood pressure, irregular heartbeats, and an increased risk of heart attacks and stroke. Patients with severe, untreated sleep apnea are two to three times more likely to have automobile accidents than the general population. In some high-risk individuals, sleep apnea may even lead to sudden death from respiratory arrest during sleep. An estimated 18 million Americans have sleep apnea. However, few of them have had the problem diagnosed. Patients with the typical features of sleep apnea, such as loud snoring, obesity, and excessive daytime sleepiness, should be referred to a specialized sleep center that can perform a test called polysomnography. This test records the patient's brain waves, heartbeat, and breathing during an entire night. If sleep apnea is diagnosed, several treatments are available. Mild sleep apnea frequently can be overcome through weight loss or by preventing the person from sleeping on his or her back. Other people may need special devices or surgery to correct the obstruction.
People with sleep apnea should never take sedatives or sleeping pills, which can prevent them from awakening enough to breathe. Top Restless Legs Syndrome Restless legs syndrome (RLS), a familial disorder causing unpleasant crawling, prickling, or tingling sensations in the legs and feet and an urge to move them for relief, is emerging as one of the most common sleep disorders, especially among older people. This disorder, which affects as many as 12 million Americans, leads to constant leg movement during the day and insomnia at night. Severe RLS is most common in elderly people, though symptoms may develop at any age. In some cases, it may be linked to other conditions such as anemia, pregnancy, or diabetes. Many RLS patients also have a disorder known as periodic limb movement disorder or PLMD, which causes repetitive jerking movements of the limbs, especially the legs. These movements occur every 20 to 40 seconds and cause repeated awakening and severely fragmented sleep. In one study, RLS and PLMD accounted for a third of the insomnia seen in patients older than age 60. RLS and PLMD often can be relieved by drugs that affect the neurotransmitter dopamine, suggesting that dopamine abnormalities underlie these disorders' symptoms. Learning how these disorders occur may lead to better therapies in the future. Top Narcolepsy Narcolepsy affects an estimated 250,000 Americans. People with narcolepsy have frequent "sleep attacks" at various times of the day, even if they have had a normal amount of night-time sleep. These attacks last from several seconds to more than 30 minutes. People with narcolepsy also may experience cataplexy (loss of muscle control during emotional situations), hallucinations, temporary paralysis when they awaken, and disrupted night-time sleep. These symptoms seem to be features of REM sleep that appear during waking, which suggests that narcolepsy is a disorder of sleep regulation. The symptoms of narcolepsy typically appear during adolescence, though it often takes years to obtain a correct diagnosis. The disorder (or at least a predisposition to it) is usually hereditary, but it occasionally is linked to brain damage from a head injury or neurological disease. Once narcolepsy is diagnosed, stimulants, antidepressants, or other drugs can help control the symptoms and prevent the embarrassing and dangerous effects of falling asleep at improper times. Naps at certain times of the day also may reduce the excessive daytime sleepiness. In 1999, a research team working with canine models identified a gene that causes narcolepsy–a breakthrough that brings a cure for this disabling condition within reach. The gene, hypocretin receptor 2, codes for a protein that allows brain cells to receive instructions from other cells. The defective versions of the gene encode proteins that cannot recognize these messages, perhaps cutting the cells off from messages that promote wakefulness. The researchers know that the same gene exists in humans, and they are currently searching for defective versions in people with narcolepsy. Top The Future Sleep research is expanding and attracting more and more attention from scientists. Researchers now know that sleep is an active and dynamic state that greatly influences our waking hours, and they realize that we must understand sleep to fully understand the brain. 
Innovative techniques, such as brain imaging, can now help researchers understand how different brain regions function during sleep and how different activities and disorders affect sleep. Understanding the factors that affect sleep in health and disease also may lead to revolutionary new therapies for sleep disorders and to ways of overcoming jet lag and the problems associated with shift work. We can expect these and many other benefits from research that will allow us to truly understand sleep's impact on our lives. Tips for a Good Night's Sleep: Adapted from "When You Can't Sleep: The ABCs of ZZZs," by the National Sleep Foundation. Set a schedule: Go to bed at a set time each night and get up at the same time each morning. Disrupting this schedule may lead to insomnia. "Sleeping in" on weekends also makes it harder to wake up early on Monday morning because it resets your sleep cycles for a later awakening. Exercise: Try to exercise 20 to 30 minutes a day. Daily exercise often helps people sleep, although a workout soon before bedtime may interfere with sleep. For maximum benefit, try to get your exercise about 5 to 6 hours before going to bed. Avoid caffeine, nicotine, and alcohol: Avoid drinks that contain caffeine, which acts as a stimulant and keeps people awake. Sources of caffeine include coffee, chocolate, soft drinks, non-herbal teas, diet drugs, and some pain relievers. Smokers tend to sleep very lightly and often wake up in the early morning due to nicotine withdrawal. Alcohol robs people of deep sleep and REM sleep and keeps them in the lighter stages of sleep. Relax before bed: A warm bath, reading, or another relaxing routine can make it easier to fall asleep. You can train yourself to associate certain restful activities with sleep and make them part of your bedtime ritual. Sleep until sunlight: If possible, wake up with the sun, or use very bright lights in the morning. Sunlight helps the body's internal biological clock reset itself each day. Sleep experts recommend exposure to an hour of morning sunlight for people having problems falling asleep. Don't lie in bed awake: If you can't get to sleep, don't just lie in bed. Do something else, like reading, watching television, or listening to music, until you feel tired. The anxiety of being unable to fall asleep can actually contribute to insomnia. Control your room temperature: Maintain a comfortable temperature in the bedroom. Extreme temperatures may disrupt sleep or prevent you from falling asleep. See a doctor if your sleeping problem continues: If you have trouble falling asleep night after night, or if you always feel tired the next day, then you may have a sleep disorder and should see a physician. Your primary care physician may be able to help you; if not, you can probably find a sleep specialist at a major hospital near you. Most sleep disorders can be treated effectively, so you can finally get that good night's sleep you need.
i don't know
Which is the most acidic part of the digestive system?
Digestive System: Facts, Function & Diseases By Kim Ann Zimmermann, Live Science Contributor | March 11, 2016 05:15pm ET The human digestive system is a series of organs that converts food into essential nutrients that are absorbed into the body and eliminates unused waste material. It is essential to good health because if the digestive system shuts down, the body cannot be nourished or rid itself of waste. Description of the digestive system Also known as the gastrointestinal (GI) tract, the digestive system begins at the mouth, includes the esophagus, stomach, small intestine, large intestine (also known as the colon) and rectum, and ends at the anus. The entire system — from mouth to anus — is about 30 feet (9 meters) long, according to the American Society of Gastrointestinal Endoscopy (ASGE). Digestion begins with the mouth. Even the smell of food can generate saliva. Saliva, which is secreted by the salivary glands in the mouth, contains an enzyme, salivary amylase, which breaks down starch. Teeth, which are part of the skeletal system, play a key role in digestion. In carnivores, teeth are designed for killing and breaking down meat. Herbivores’ teeth are made for grinding plants and other food to ease them through the digestion process. Swallowing pushes chewed food into the esophagus, where it passes through the oropharynx and hypopharynx. At this point, food takes the form of a small round mass and digestion becomes involuntary. A series of muscular contractions, called peristalsis, transports food through the rest of the system. The esophagus empties into the stomach, according to the National Institutes of Health (NIH). The stomach’s gastric juice, which is primarily a mix of hydrochloric acid and pepsin, starts breaking down proteins and killing potentially harmful bacteria, according to ASGE. After an hour or two of this process, a thick semi-liquid paste, called chyme, forms. At this point the pyloric sphincter valve opens and chyme enters the duodenum, where it mixes with digestive enzymes from the pancreas and acidic bile from the gall bladder, according to the Cleveland Clinic. The next stop for the chyme is the small intestine, a 20-foot (6-meter) tube-shaped organ, where the majority of the absorption of nutrients occurs. The nutrients move into the bloodstream and are transported to the liver. The liver creates glycogen from sugars and carbohydrates to give the body energy and converts dietary proteins into new proteins needed by the blood system. The liver also breaks down unwanted chemicals, such as alcohol, which is detoxified and passed from the body as waste, the Cleveland Clinic noted. Whatever material is left goes into the large intestine. The function of the large intestine, which is about 5 feet long (1.5 meters), is primarily for storage and fermentation of indigestible matter. Also called the colon, it has four parts: the ascending colon, the transverse colon, the descending colon and the sigmoid colon. This is where water from the chyme is absorbed back into the body and feces are formed primarily from water (75 percent), dietary fiber and other waste products, according to the Cleveland Clinic. Feces are stored here until they are eliminated from the body through defecation.
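The passage above walks through the GI tract as an ordered pipeline, so the few lengths it quotes can be organized in a small sketch. This is purely illustrative: the list name, the use of None for organs whose length the article does not give, and the rounding are assumptions; only the 20-foot, 5-foot, and 30-foot figures come from the text above.

```python
# The article describes the GI tract as an ordered pipeline of organs.
# This sketch only organizes the figures quoted there; organs without a
# stated length are marked None rather than guessed.
GI_TRACT = [
    ("mouth", None),
    ("esophagus", None),
    ("stomach", None),
    ("small intestine", 20.0),  # feet, per the article
    ("large intestine", 5.0),   # feet, per the article
]

TOTAL_LENGTH_FT = 30.0  # mouth to anus, per the article

known = sum(length for _, length in GI_TRACT if length is not None)
print(f"Intestines alone account for about {known:.0f} ft")
print(f"All other segments together: roughly {TOTAL_LENGTH_FT - known:.0f} ft")
```

Keeping the organs in order also mirrors how the article presents digestion: each stage hands its output (chewed food, then chyme, then feces) to the next.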
Diseases of the digestive system Many symptoms can signal problems with the GI tract, including: abdominal pain, blood in the stool, bloating, constipation, diarrhea, heartburn, incontinence, nausea and vomiting and difficulty swallowing, according to the NIH. Among the most widely known diseases of the digestive system is colon cancer. According to the Centers for Disease Control (CDC), 51,783 Americans died from colon cancer in 2011 (the most recent year for available data). Excluding skin cancers, colon and rectal cancer, or colorectal cancer, is the third most common cancer diagnosed in both men and women in the United States, according to the American Cancer Society. Polyp growth and irregular cells, which may or may not be cancerous, are the most common development paths for colorectal cancers (also referred to as CRC), and can be detected during a routine colonoscopy, according to Dr. John Marks, a gastroenterologist affiliated with the Main Line Health health care system. “The best news is that, if caught early enough, they can also be removed during the colonoscopy — eliminating the possibility that they grow further and become cancer,” Marks said. For those patients whose cancer has already spread, there are various minimally invasive surgical options that have extremely good prognoses. It is recommended that asymptomatic patients without a family history begin getting tested regularly between the ages of 45 and 50, according to Marks. “Symptoms which may suggest that you need a colonoscopy at an earlier age include rectal bleeding and stool/bowel habit changes which last for more than a few days.” While CRC gets a great deal of attention, many diseases and conditions of the digestive system — including irritable bowel syndrome, diverticulitis, GERD (acid reflux) and Crohn’s disease — can be chronic and are difficult to diagnose and treat, according to Dr. Larry Good, a gastroenterologist affiliated with South Nassau Communities Hospital. “With many of these diseases, blood work and colonoscopies all look normal, so there is an absence of red flags.” Many of the diseases of the digestive system are tied to the foods we eat, and a number of sufferers can reduce their symptoms by restricting their diets, Good said. “Of course no one wants to hear that they can’t eat certain foods, but many times, eliminating acidic things from the diet, such as tomatoes, onions, and red wine, can have an impact,” Good said. There are a number of tests to detect digestive tract ailments. A colonoscopy is the examination of the inside of the colon using a long, flexible, fiber-optic viewing instrument called a colonoscope, according to the American Gastroenterological Association. Other testing procedures include upper GI endoscopy, capsule endoscopy, endoscopic retrograde cholangiopancreatography and endoscopic ultrasound. Study of the digestive system Gastroenterology is the branch of medicine focused on studying and treating disorders of the digestive system. Physicians practicing this specialty are called gastroenterologists. The name is a combination of three ancient Greek words gastros (stomach), enteron (intestine) and logos (reason). It is an internal medicine subspecialty certified by the American Board of Internal Medicine. To be certified as a gastroenterologist, a doctor must pass the Gastroenterology Certification Examination and undergo a minimum of 36 months of additional training. Milestones References to the digestive system can be traced back to the ancient Egyptians.
Some milestones in the study of the gastrointestinal system include: Claudius Galen (circa 130-200) lived at the end of the ancient Greek period and reviewed the teachings of Hippocrates and other Greek doctors. He theorized that the stomach acted independently from other systems in the body, almost with a separate brain. This was widely accepted until the 17th century. In 1780, Italian physician Lazzaro Spallanzani conducted experiments to prove the impact of gastric juice on the digestion process. Philipp Bozzini developed the Lichtleiter in 1805. This instrument, which was used to examine the urinary tract, rectum and pharynx, was the earliest endoscope. Adolf Kussmaul, a German physician, developed the gastroscope in 1868, using a sword swallower to help develop the diagnostic process. Rudolph Schindler, known to some as the “father of gastroscopy,” described many of the diseases involving the human digestive system in his illustrated textbook issued during World War I. He and Georg Wolf developed a semi-flexible gastroscope in 1932. In 1970, Hiromi Shinya, a Japanese-born general surgeon, delivered the first report of a colonoscopy to the New York Surgical Society and in May 1971 presented his experiences to the American Society for Gastrointestinal Endoscopy. In 2005, Australians Barry Marshall and Robin Warren were awarded the Nobel Prize in Physiology or Medicine for their discovery of Helicobacter pylori and its role in peptic ulcer disease.
Stomach
A deficiency of which vitamin can cause scurvy?
Digestive System | Everything You Need to Know, Including Pictures Digestive System Anatomy Mouth Food begins its journey through the digestive system in the mouth, also known as the oral cavity. Inside the mouth are many accessory organs that aid in the digestion of food—the tongue, teeth, and salivary glands. Teeth chop food into small pieces, which are moistened by saliva before the tongue and other muscles push the food into the pharynx. Teeth. The teeth are 32 small, hard organs found along the anterior and lateral edges of the mouth. Each tooth is made of a bone-like substance called dentin and covered in a layer of enamel—the hardest substance in the body. Teeth are living organs and contain blood vessels and nerves under the dentin in a soft region known as the pulp. The teeth are designed for cutting and grinding food into smaller pieces. Tongue. The tongue is located on the inferior portion of the mouth just posterior and medial to the teeth. It is a small organ made up of several pairs of muscles covered in a thin, bumpy, skin-like layer. The outside of the tongue contains many rough papillae for gripping food as it is moved by the tongue’s muscles. The taste buds on the surface of the tongue detect taste molecules in food and connect to nerves in the tongue to send taste information to the brain. The tongue also helps to push food toward the posterior part of the mouth for swallowing. Salivary Glands. Surrounding the mouth are 3 sets of salivary glands. The salivary glands are accessory organs that produce a watery secretion known as saliva. Saliva helps to moisten food and begins the digestion of carbohydrates. The body also uses saliva to lubricate food as it passes through the mouth, pharynx, and esophagus. Pharynx The pharynx, or throat, is a funnel-shaped tube connected to the posterior end of the mouth. The pharynx is responsible for the passing of masses of chewed food from the mouth to the esophagus. The pharynx also plays an important role in the respiratory system, as air from the nasal cavity passes through the pharynx on its way to the larynx and eventually the lungs. Because the pharynx serves two different functions, it contains a flap of tissue known as the epiglottis that acts as a switch to route food to the esophagus and air to the larynx. Esophagus The esophagus is a muscular tube connecting the pharynx to the stomach that is part of the upper gastrointestinal tract. It carries swallowed masses of chewed food along its length. At the inferior end of the esophagus is a muscular ring called the lower esophageal sphincter or cardiac sphincter. The function of this sphincter is to close off the end of the esophagus and trap food in the stomach. Stomach The stomach is a muscular sac that is located on the left side of the abdominal cavity, just inferior to the diaphragm. In an average person, the stomach is about the size of their two fists placed next to each other. This major organ acts as a storage tank for food so that the body has time to digest large meals properly. The stomach also contains hydrochloric acid and digestive enzymes that continue the digestion of food that began in the mouth. Small Intestine The small intestine is a long, thin tube about 1 inch in diameter and about 10 feet long that is part of the lower gastrointestinal tract. It is located just inferior to the stomach and takes up most of the space in the abdominal cavity. The entire small intestine is coiled like a hose and the inside surface is full of many ridges and folds.
These folds are used to maximize the digestion of food and absorption of nutrients. By the time food leaves the small intestine, around 90% of all nutrients have been extracted from the food that entered it. Liver and Gallbladder The liver is a roughly triangular accessory organ of the digestive system located to the right of the stomach, just inferior to the diaphragm and superior to the small intestine. The liver weighs about 3 pounds and is the second largest organ in the body. The liver has many different functions in the body, but the main function of the liver in digestion is the production of bile and its secretion into the small intestine. The gallbladder is a small, pear-shaped organ located just posterior to the liver. The gallbladder is used to store and recycle excess bile from the small intestine so that it can be reused for the digestion of subsequent meals. Pancreas The pancreas is a large gland located just inferior and posterior to the stomach. It is about 6 inches long and shaped like a short, lumpy snake with its “head” connected to the duodenum and its “tail” pointing to the left wall of the abdominal cavity. The pancreas secretes digestive enzymes into the small intestine to complete the chemical digestion of foods. Large Intestine The large intestine is a long, thick tube about 2 ½ inches in diameter and about 5 feet long. It is located just inferior to the stomach and wraps around the superior and lateral border of the small intestine. The large intestine absorbs water and contains many symbiotic bacteria that aid in the breaking down of wastes to extract some small amounts of nutrients. Feces in the large intestine exit the body through the anal canal. Digestive System Physiology The digestive system is responsible for taking whole foods and turning them into energy and nutrients to allow the body to function, grow, and repair itself. The six primary processes of the digestive system include: Ingestion of food Secretion of fluids and digestive enzymes Mixing and movement of food and wastes through the body Digestion of food into smaller pieces Absorption of nutrients Excretion of wastes Ingestion The first function of the digestive system is ingestion, or the intake of food. The mouth is responsible for this function, as it is the orifice through which all food enters the body. The mouth and stomach are also responsible for the storage of food as it is waiting to be digested. This storage capacity allows the body to eat only a few times each day and to ingest more food than it can process at one time. Secretion In the course of a day, the digestive system secretes around 7 liters of fluids. These fluids include saliva, mucus, hydrochloric acid, enzymes, and bile. Saliva moistens dry food and contains salivary amylase, a digestive enzyme that begins the digestion of carbohydrates. Mucus serves as a protective barrier and lubricant inside of the GI tract. Hydrochloric acid helps to digest food chemically and protects the body by killing bacteria present in our food. Enzymes are like tiny biochemical machines that disassemble large macromolecules like proteins, carbohydrates, and lipids into their smaller components. Finally, bile is used to emulsify large masses of lipids into tiny globules for easy digestion. Mixing and Movement The digestive system uses 3 main processes to move and mix food: Swallowing.
Swallowing is the process of using smooth and skeletal muscles in the mouth, tongue, and pharynx to push food out of the mouth, through the pharynx, and into the esophagus. Peristalsis. Peristalsis is a muscular wave that travels the length of the GI tract, moving partially digested food a short distance down the tract. It takes many waves of peristalsis for food to travel from the esophagus, through the stomach and intestines , and reach the end of the GI tract. Segmentation. Segmentation occurs only in the small intestine as short segments of intestine contract like hands squeezing a toothpaste tube. Segmentation helps to increase the absorption of nutrients by mixing food and increasing its contact with the walls of the intestine. Digestion Digestion is the process of turning large pieces of food into its component chemicals. Mechanical digestion is the physical breakdown of large pieces of food into smaller pieces. This mode of digestion begins with the chewing of food by the teeth and is continued through the muscular mixing of food by the stomach and intestines. Bile produced by the liver is also used to mechanically break fats into smaller globules. While food is being mechanically digested it is also being chemically digested as larger and more complex molecules are being broken down into smaller molecules that are easier to absorb. Chemical digestion begins in the mouth with salivary amylase in saliva splitting complex carbohydrates into simple carbohydrates. The enzymes and acid in the stomach continue chemical digestion, but the bulk of chemical digestion takes place in the small intestine thanks to the action of the pancreas. The pancreas secretes an incredibly strong digestive cocktail known as pancreatic juice, which is capable of digesting lipids, carbohydrates, proteins and nucleic acids. By the time food has left the duodenum , it has been reduced to its chemical building blocks—fatty acids, amino acids, monosaccharides, and nucleotides. Absorption Once food has been reduced to its building blocks, it is ready for the body to absorb. Absorption begins in the stomach with simple molecules like water and alcohol being absorbed directly into the bloodstream. Most absorption takes place in the walls of the small intestine, which are densely folded to maximize the surface area in contact with digested food. Small blood and lymphatic vessels in the intestinal wall pick up the molecules and carry them to the rest of the body. The large intestine is also involved in the absorption of water and vitamins B and K before feces leave the body. Excretion The final function of the digestive system is the excretion of waste in a process known as defecation. Defecation removes indigestible substances from the body so that they do not accumulate inside the gut. The timing of defecation is controlled voluntarily by the conscious part of the brain, but must be accomplished on a regular basis to prevent a backup of indigestible materials. Prepared by Tim Taylor, Anatomy and Physiology Instructor
i don't know