book_volume | book_title | chapter_number | chapter_title | section_number | section_title | section_text |
---|---|---|---|---|---|---|
1 | 1 | Atoms in Motion | 1 | Introduction | This two-year course in physics is presented from the point of view that you, the reader, are going to be a physicist. This is not necessarily the case of course, but that is what every professor in every subject assumes! If you are going to be a physicist, you will have a lot to study: two hundred years of the most rapidly developing field of knowledge that there is. So much knowledge, in fact, that you might think that you cannot learn all of it in four years, and truly you cannot; you will have to go to graduate school too! Surprisingly enough, in spite of the tremendous amount of work that has been done for all this time it is possible to condense the enormous mass of results to a large extent—that is, to find laws which summarize all our knowledge. Even so, the laws are so hard to grasp that it is unfair to you to start exploring this tremendous subject without some kind of map or outline of the relationship of one part of the subject of science to another. Following these preliminary remarks, the first three chapters will therefore outline the relation of physics to the rest of the sciences, the relations of the sciences to each other, and the meaning of science, to help us develop a “feel” for the subject. You might ask why we cannot teach physics by just giving the basic laws on page one and then showing how they work in all possible circumstances, as we do in Euclidean geometry, where we state the axioms and then make all sorts of deductions. (So, not satisfied to learn physics in four years, you want to learn it in four minutes?) We cannot do it in this way for two reasons. First, we do not yet know all the basic laws: there is an expanding frontier of ignorance. Second, the correct statement of the laws of physics involves some very unfamiliar ideas which require advanced mathematics for their description. Therefore, one needs a considerable amount of preparatory training even to learn what the words mean. 
No, it is not possible to do it that way. We can only do it piece by piece. Each piece, or part, of the whole of nature is always merely an approximation to the complete truth, or the complete truth so far as we know it. In fact, everything we know is only some kind of approximation, because we know that we do not know all the laws as yet. Therefore, things must be learned only to be unlearned again or, more likely, to be corrected. The principle of science, the definition, almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth.” But what is the source of knowledge? Where do the laws that are to be tested come from? Experiment, itself, helps to produce these laws, in the sense that it gives us hints. But also needed is imagination to create from these hints the great generalizations—to guess at the wonderful, simple, but very strange patterns beneath them all, and then to experiment to check again whether we have made the right guess. This imagining process is so difficult that there is a division of labor in physics: there are theoretical physicists who imagine, deduce, and guess at new laws, but do not experiment; and then there are experimental physicists who experiment, imagine, deduce, and guess. We said that the laws of nature are approximate: that we first find the “wrong” ones, and then we find the “right” ones. Now, how can an experiment be “wrong”? First, in a trivial way: if something is wrong with the apparatus that you did not notice. But these things are easily fixed, and checked back and forth. So without snatching at such minor things, how can the results of an experiment be wrong? Only by being inaccurate. For example, the mass of an object never seems to change: a spinning top has the same weight as a still one. So a “law” was invented: mass is constant, independent of speed. That “law” is now found to be incorrect. 
Mass is found to increase with velocity, but appreciable increases require velocities near that of light. A true law is: if an object moves with a speed of less than one hundred miles a second the mass is constant to within one part in a million. In some such approximate form this is a correct law. So in practice one might think that the new law makes no significant difference. Well, yes and no. For ordinary speeds we can certainly forget it and use the simple constant-mass law as a good approximation. But for high speeds we are wrong, and the higher the speed, the more wrong we are. Finally, and most interesting, philosophically we are completely wrong with the approximate law. Our entire picture of the world has to be altered even though the mass changes only by a little bit. This is a very peculiar thing about the philosophy, or the ideas, behind the laws. Even a very small effect sometimes requires profound changes in our ideas. Now, what should we teach first? Should we teach the correct but unfamiliar law with its strange and difficult conceptual ideas, for example the theory of relativity, four-dimensional space-time, and so on? Or should we first teach the simple “constant-mass” law, which is only approximate, but does not involve such difficult ideas? The first is more exciting, more wonderful, and more fun, but the second is easier to get at first, and is a first step to a real understanding of the first idea. This point arises again and again in teaching physics. At different times we shall have to resolve it in different ways, but at each stage it is worth learning what is now known, how accurate it is, how it fits into everything else, and how it may be changed when we learn more. 
Let us now proceed with our outline, or general map, of our understanding of science today (in particular, physics, but also of other sciences on the periphery), so that when we later concentrate on some particular point we will have some idea of the background, why that particular point is interesting, and how it fits into the big structure. So, what is our overall picture of the world? |
|
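The "one part in a million at speeds under a hundred miles a second" claim can be checked directly from the Lorentz factor γ = 1/√(1 − v²/c²), whose excess over 1 is the fractional mass increase. A quick sketch (the speed of light and the mile-to-meter conversion are standard constants, not taken from the text):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor: relativistic mass = gamma * rest mass."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 100 * 1609.344          # 100 miles per second, in m/s
fractional_increase = gamma(v) - 1.0
print(f"fractional mass increase at 100 mi/s: {fractional_increase:.2e}")
# comfortably under one part in a million, as the text claims
```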
1 | 2 | Basic Physics | 1 | Introduction | In this chapter, we shall examine the most fundamental ideas that we have about physics—the nature of things as we see them at the present time. We shall not discuss the history of how we know that all these ideas are true; you will learn these details in due time. The things with which we concern ourselves in science appear in myriad forms, and with a multitude of attributes. For example, if we stand on the shore and look at the sea, we see the water, the waves breaking, the foam, the sloshing motion of the water, the sound, the air, the winds and the clouds, the sun and the blue sky, and light; there is sand and there are rocks of various hardness and permanence, color and texture. There are animals and seaweed, hunger and disease, and the observer on the beach; there may be even happiness and thought. Any other spot in nature has a similar variety of things and influences. It is always as complicated as that, no matter where it is. Curiosity demands that we ask questions, that we try to put things together and try to understand this multitude of aspects as perhaps resulting from the action of a relatively small number of elemental things and forces acting in an infinite variety of combinations. For example: Is the sand other than the rocks? That is, is the sand perhaps nothing but a great number of very tiny stones? Is the moon a great rock? If we understood rocks, would we also understand the sand and the moon? Is the wind a sloshing of the air analogous to the sloshing motion of the water in the sea? What common features do different movements have? What is common to different kinds of sound? How many different colors are there? And so on. In this way we try gradually to analyze all things, to put together things which at first sight look different, with the hope that we may be able to reduce the number of different things and thereby understand them better. 
A few hundred years ago, a method was devised to find partial answers to such questions. Observation, reason, and experiment make up what we call the scientific method. We shall have to limit ourselves to a bare description of our basic view of what is sometimes called fundamental physics, or fundamental ideas which have arisen from the application of the scientific method. What do we mean by “understanding” something? We can imagine that this complicated array of moving things which constitutes “the world” is something like a great chess game being played by the gods, and we are observers of the game. We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules. The rules of the game are what we mean by fundamental physics. Even if we knew every rule, however, we might not be able to understand why a particular move is made in the game, merely because it is too complicated and our minds are limited. If you play chess you must know that it is easy to learn all the rules, and yet it is often very hard to select the best move or to understand why a player moves as he does. So it is in nature, only much more so; but we may be able at least to find all the rules. Actually, we do not have all the rules now. (Every once in a while something like castling is going on that we still do not understand.) Aside from not knowing all of the rules, what we really can explain in terms of those rules is very limited, because almost all situations are so enormously complicated that we cannot follow the plays of the game using the rules, much less tell what is going to happen next. We must, therefore, limit ourselves to the more basic question of the rules of the game. If we know the rules, we consider that we “understand” the world. How can we tell whether the rules which we “guess” at are really right if we cannot analyze the game very well? 
There are, roughly speaking, three ways. First, there may be situations where nature has arranged, or we arrange nature, to be simple and to have so few parts that we can predict exactly what will happen, and thus we can check how our rules work. (In one corner of the board there may be only a few chess pieces at work, and that we can figure out exactly.) A second good way to check rules is in terms of less specific rules derived from them. For example, the rule on the move of a bishop on a chessboard is that it moves only on the diagonal. One can deduce, no matter how many moves may be made, that a certain bishop will always be on a red square. So, without being able to follow the details, we can always check our idea about the bishop’s motion by finding out whether it is always on a red square. Of course it will be, for a long time, until all of a sudden we find that it is on a black square (what happened of course, is that in the meantime it was captured, another pawn crossed for queening, and it turned into a bishop on a black square). That is the way it is in physics. For a long time we will have a rule that works excellently in an overall way, even when we cannot follow the details, and then some time we may discover a new rule. From the point of view of basic physics, the most interesting phenomena are of course in the new places, the places where the rules do not work—not the places where they do work! That is the way in which we discover new rules. The third way to tell whether our ideas are right is relatively crude but probably the most powerful of them all. That is, by rough approximation. While we may not be able to tell why Alekhine moves this particular piece, perhaps we can roughly understand that he is gathering his pieces around the king to protect it, more or less, since that is the sensible thing to do in the circumstances. 
In the same way, we can often understand nature, more or less, without being able to see what every little piece is doing, in terms of our understanding of the game. At first the phenomena of nature were roughly divided into classes, like heat, electricity, mechanics, magnetism, properties of substances, chemical phenomena, light or optics, x-rays, nuclear physics, gravitation, meson phenomena, etc. However, the aim is to see complete nature as different aspects of one set of phenomena. That is the problem in basic theoretical physics, today—to find the laws behind experiment; to amalgamate these classes. Historically, we have always been able to amalgamate them, but as time goes on new things are found. We were amalgamating very well, when all of a sudden x-rays were found. Then we amalgamated some more, and mesons were found. Therefore, at any stage of the game, it always looks rather messy. A great deal is amalgamated, but there are always many wires or threads hanging out in all directions. That is the situation today, which we shall try to describe. Some historic examples of amalgamation are the following. First, take heat and mechanics. When atoms are in motion, the more motion, the more heat the system contains, and so heat and all temperature effects can be represented by the laws of mechanics. Another tremendous amalgamation was the discovery of the relation between electricity, magnetism, and light, which were found to be different aspects of the same thing, which we call today the electromagnetic field. Another amalgamation is the unification of chemical phenomena, the various properties of various substances, and the behavior of atomic particles, which is in the quantum mechanics of chemistry. The question is, of course, is it going to be possible to amalgamate everything, and merely discover that this world represents different aspects of one thing? Nobody knows. 
All we know is that as we go along, we find that we can amalgamate pieces, and then we find some pieces that do not fit, and we keep trying to put the jigsaw puzzle together. Whether there are a finite number of pieces, and whether there is even a border to the puzzle, is of course unknown. It will never be known until we finish the picture, if ever. What we wish to do here is to see to what extent this amalgamation process has gone on, and what the situation is at present, in understanding basic phenomena in terms of the smallest set of principles. To express it in a simple manner, what are things made of and how few elements are there? |
|
|
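The bishop's red-square rule is an invariant: a diagonal step changes file and rank by equal amounts, so the parity of file + rank, which fixes a square's color, never changes. A small sketch that checks the invariant over a long random game (0-based board coordinates on the usual 8×8 board):

```python
import random

def color(sq):
    f, r = sq
    return (f + r) % 2  # 0 and 1 are the two square colors

def bishop_moves(sq):
    """All legal bishop destinations from sq on an 8x8 board."""
    f, r = sq
    return [(f + d * df, r + d * dr)
            for df in (-1, 1) for dr in (-1, 1)
            for d in range(1, 8)
            if 0 <= f + d * df < 8 and 0 <= r + d * dr < 8]

random.seed(0)
sq = (2, 0)              # a bishop's starting square
start_color = color(sq)
for _ in range(1000):    # follow a long sequence of random moves
    sq = random.choice(bishop_moves(sq))
    assert color(sq) == start_color  # the invariant never breaks
print("after 1000 moves the bishop is still on its original color")
```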
1 | 2 | Basic Physics | 2 | Physics before 1920 | It is a little difficult to begin at once with the present view, so we shall first see how things looked in about 1920 and then take a few things out of that picture. Before 1920, our world picture was something like this: The “stage” on which the universe goes is the three-dimensional space of geometry, as described by Euclid, and things change in a medium called time. The elements on the stage are particles, for example the atoms, which have some properties. First, the property of inertia: if a particle is moving it keeps on going in the same direction unless forces act upon it. The second element, then, is forces, which were then thought to be of two varieties: First, an enormously complicated, detailed kind of interaction force which held the various atoms in different combinations in a complicated way, which determined whether salt would dissolve faster or slower when we raise the temperature. The other force that was known was a long-range interaction—a smooth and quiet attraction—which varied inversely as the square of the distance, and was called gravitation. This law was known and was very simple. Why things remain in motion when they are moving, or why there is a law of gravitation was, of course, not known. A description of nature is what we are concerned with here. From this point of view, then, a gas, and indeed all matter, is a myriad of moving particles. Thus many of the things we saw while standing at the seashore can immediately be connected. First the pressure: this comes from the collisions of the atoms with the walls or whatever; the drift of the atoms, if they are all moving in one direction on the average, is wind; the random internal motions are the heat. There are waves of excess density, where too many particles have collected, and so as they rush off they push up piles of particles farther out, and so on. This wave of excess density is sound. 
It is a tremendous achievement to be able to understand so much. Some of these things were described in the previous chapter. What kinds of particles are there? There were considered to be $92$ at that time: $92$ different kinds of atoms were ultimately discovered. They had different names associated with their chemical properties. The next part of the problem was, what are the short-range forces? Why does carbon attract one oxygen or perhaps two oxygens, but not three oxygens? What is the machinery of interaction between atoms? Is it gravitation? The answer is no. Gravity is entirely too weak. But imagine a force analogous to gravity, varying inversely with the square of the distance, but enormously more powerful and having one difference. In gravity everything attracts everything else, but now imagine that there are two kinds of “things,” and that this new force (which is the electrical force, of course) has the property that likes repel but unlikes attract. The “thing” that carries this strong interaction is called charge. Then what do we have? Suppose that we have two unlikes that attract each other, a plus and a minus, and that they stick very close together. Suppose we have another charge some distance away. Would it feel any attraction? It would feel practically none, because if the first two are equal in size, the attraction for the one and the repulsion for the other balance out. Therefore there is very little force at any appreciable distance. On the other hand, if we get very close with the extra charge, attraction arises, because the repulsion of likes and attraction of unlikes will tend to bring unlikes closer together and push likes farther apart. Then the repulsion will be less than the attraction. This is the reason why the atoms, which are constituted out of plus and minus electric charges, feel very little force when they are separated by appreciable distance (aside from gravity). 
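The screening argument above is quantitative: on the axis of a +/− pair the two inverse-square forces nearly cancel, leaving a net pull that falls off roughly as 1/r³ instead of 1/r². A sketch with hypothetical numbers (1 nC charges, 1 mm apart; none of these values are from the text):

```python
K = 8.9875e9  # Coulomb constant, N m^2 / C^2

def pair_force(q, a, r):
    """Net force on a unit positive test charge at distance r on the
    axis of a +q/-q pair separated by a (r measured from the midpoint)."""
    return K * q * (1.0 / (r - a / 2) ** 2 - 1.0 / (r + a / 2) ** 2)

q, a = 1e-9, 1e-3   # hypothetical: 1 nC charges, 1 mm apart
for r in (0.01, 0.1, 1.0):
    single = K * q / r ** 2   # force from one unpaired charge at the same r
    print(f"r = {r:5} m   pair/single = {pair_force(q, a, r) / single:.4f}")
# the ratio shrinks roughly like 2a/r: at a distance the pair screens itself
```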
When they come close together, they can “see inside” each other and rearrange their charges, with the result that they have a very strong interaction. The ultimate basis of an interaction between the atoms is electrical. Since this force is so enormous, all the plusses and all minuses will normally come together in as intimate a combination as they can. All things, even ourselves, are made of fine-grained, enormously strongly interacting plus and minus parts, all neatly balanced out. Once in a while, by accident, we may rub off a few minuses or a few plusses (usually it is easier to rub off minuses), and in those circumstances we find the force of electricity unbalanced, and we can then see the effects of these electrical attractions. To give an idea of how much stronger electricity is than gravitation, consider two grains of sand, a millimeter across, thirty meters apart. If the force between them were not balanced, if everything attracted everything else instead of likes repelling, so that there were no cancellation, how much force would there be? There would be a force of three million tons between the two! You see, there is very, very little excess or deficit of the number of negative or positive charges necessary to produce appreciable electrical effects. This is, of course, the reason why you cannot see the difference between an electrically charged or uncharged thing—so few particles are involved that they hardly make a difference in the weight or size of an object. With this picture the atoms were easier to understand. They were thought to have a “nucleus” at the center, which is positively electrically charged and very massive, and the nucleus is surrounded by a certain number of “electrons” which are very light and negatively charged. Now we go a little ahead in our story to remark that in the nucleus itself there were found two kinds of particles, protons and neutrons, almost of the same weight and very heavy. 
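The millions-of-tons figure can be re-derived as an order-of-magnitude estimate. The sketch below assumes quartz sand (density 2.6 g/cm³, molar mass 60 g/mol, 30 electrons per SiO₂ unit) — assumptions not stated in the text, and different choices shift the answer by a factor of a few, but it stays in the range of millions to tens of millions of tons:

```python
# Order-of-magnitude check of the sand-grain thought experiment.
# Assumed values (not from the text): quartz sand, density 2.6 g/cm^3,
# SiO2 molar mass 60 g/mol, 30 electrons per SiO2 unit.
K  = 8.9875e9      # Coulomb constant, N m^2 / C^2
E  = 1.602e-19     # elementary charge, C
NA = 6.022e23      # Avogadro's number

mass_g    = 2.6 * 0.1 ** 3               # 1 mm cube of quartz, in grams
electrons = mass_g / 60.0 * NA * 30      # total electrons in one grain
charge    = electrons * E                # total negative charge, coulombs

force_n    = K * charge ** 2 / 30.0 ** 2  # Coulomb force at 30 m, newtons
force_tons = force_n / 9.81 / 1000.0      # metric tons of force
print(f"roughly {force_tons:.1e} tons of force between the two grains")
```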
The protons are electrically charged and the neutrons are neutral. If we have an atom with six protons inside its nucleus, and this is surrounded by six electrons (the negative particles in the ordinary world of matter are all electrons, and these are very light compared with the protons and neutrons which make nuclei), this would be atom number six in the chemical table, and it is called carbon. Atom number eight is called oxygen, etc., because the chemical properties depend upon the electrons on the outside, and in fact only upon how many electrons there are. So the chemical properties of a substance depend only on a number, the number of electrons. (The whole list of elements of the chemists really could have been called $1$, $2$, $3$, $4$, $5$, etc. Instead of saying “carbon,” we could say “element six,” meaning six electrons, but of course, when the elements were first discovered, it was not known that they could be numbered that way, and secondly, it would make everything look rather complicated. It is better to have names and symbols for these things, rather than to call everything by number.) More was discovered about the electrical force. The natural interpretation of electrical interaction is that two objects simply attract each other: plus against minus. However, this was discovered to be an inadequate idea to represent it. A more adequate representation of the situation is to say that the existence of the positive charge, in some sense, distorts, or creates a “condition” in space, so that when we put the negative charge in, it feels a force. This potentiality for producing a force is called an electric field. When we put an electron in an electric field, we say it is “pulled.” We then have two rules: (a) charges make a field, and (b) charges in fields have forces on them and move. 
The reason for this will become clear when we discuss the following phenomena: If we were to charge a body, say a comb, electrically, and then place a charged piece of paper at a distance and move the comb back and forth, the paper will respond by always pointing to the comb. If we shake it faster, it will be discovered that the paper is a little behind, there is a delay in the action. (At the first stage, when we move the comb rather slowly, we find a complication which is magnetism. Magnetic influences have to do with charges in relative motion, so magnetic forces and electric forces can really be attributed to one field, as two different aspects of exactly the same thing. A changing electric field cannot exist without magnetism.) If we move the charged paper farther out, the delay is greater. Then an interesting thing is observed. Although the forces between two charged objects should go inversely as the square of the distance, it is found, when we shake a charge, that the influence extends very much farther out than we would guess at first sight. That is, the effect falls off more slowly than the inverse square. Here is an analogy: If we are in a pool of water and there is a floating cork very close by, we can move it “directly” by pushing the water with another cork. If you looked only at the two corks, all you would see would be that one moved immediately in response to the motion of the other—there is some kind of “interaction” between them. Of course, what we really do is to disturb the water; the water then disturbs the other cork. We could make up a “law” that if you pushed the water a little bit, an object close by in the water would move. If it were farther away, of course, the second cork would scarcely move, for we move the water locally. 
On the other hand, if we jiggle the cork a new phenomenon is involved, in which the motion of the water moves the water there, etc., and waves travel away, so that by jiggling, there is an influence very much farther out, an oscillatory influence, that cannot be understood from the direct interaction. Therefore the idea of direct interaction must be replaced with the existence of the water, or in the electrical case, with what we call the electromagnetic field. The electromagnetic field can carry waves; some of these waves are light, others are used in radio broadcasts, but the general name is electromagnetic waves. These oscillatory waves can have various frequencies. The only thing that is really different from one wave to another is the frequency of oscillation. If we shake a charge back and forth more and more rapidly, and look at the effects, we get a whole series of different kinds of effects, which are all unified by specifying but one number, the number of oscillations per second. The usual “pickup” that we get from electric currents in the circuits in the walls of a building have a frequency of about one hundred cycles per second. If we increase the frequency to $500$ or $1000$ kilocycles ($1$ kilocycle${}=1000$ cycles) per second, we are “on the air,” for this is the frequency range which is used for radio broadcasts. (Of course it has nothing to do with the air! We can have radio broadcasts without any air.) If we again increase the frequency, we come into the range that is used for FM and TV. Going still further, we use certain short waves, for example for radar. Still higher, and we do not need an instrument to “see” the stuff, we can see it with the human eye. In the range of frequency from $5\times10^{14}$ to $10^{15}$ cycles per second our eyes would see the oscillation of the charged comb, if we could shake it that fast, as red, blue, or violet light, depending on the frequency. 
Frequencies below this range are called infrared, and above it, ultraviolet. The fact that we can see in a particular frequency range makes that part of the electromagnetic spectrum no more impressive than the other parts from a physicist’s standpoint, but from a human standpoint, of course, it is more interesting. If we go up even higher in frequency, we get x-rays. X-rays are nothing but very high-frequency light. If we go still higher, we get gamma rays. These two terms, x-rays and gamma rays, are used almost synonymously. Usually electromagnetic rays coming from nuclei are called gamma rays, while those of high energy from atoms are called x-rays, but at the same frequency they are indistinguishable physically, no matter what their source. If we go to still higher frequencies, say to $10^{24}$ cycles per second, we find that we can make those waves artificially, for example with the synchrotron here at Caltech. We can find electromagnetic waves with stupendously high frequencies—with even a thousand times more rapid oscillation—in the waves found in cosmic rays. These waves cannot be controlled by us. |
|
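Since only the frequency distinguishes one electromagnetic wave from another, each band in the survey above also corresponds to a wavelength λ = c/f. A short table computed from representative frequencies quoted in the text (the band labels are just shorthand for the examples given):

```python
C = 2.998e8  # speed of light, m/s

# representative frequencies from the text, in cycles per second (Hz)
bands = {
    "power-line pickup": 100,
    "AM radio":          1e6,    # ~1000 kilocycles per second
    "red light":         5e14,
    "violet light":      1e15,
    "hard gamma rays":   1e24,
}
for name, f in bands.items():
    print(f"{name:18s} f = {f:8.1e} Hz   wavelength = {C / f:8.1e} m")
```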
1 | 3 | The Relation of Physics to Other Sciences | 1 | Introduction | Physics is the most fundamental and all-inclusive of the sciences, and has had a profound effect on all scientific development. In fact, physics is the present-day equivalent of what used to be called natural philosophy, from which most of our modern sciences arose. Students of many fields find themselves studying physics because of the basic role it plays in all phenomena. In this chapter we shall try to explain what the fundamental problems in the other sciences are, but of course it is impossible in so small a space really to deal with the complex, subtle, beautiful matters in these other fields. Lack of space also prevents our discussing the relation of physics to engineering, industry, society, and war, or even the most remarkable relationship between mathematics and physics. (Mathematics is not a science from our point of view, in the sense that it is not a natural science. The test of its validity is not experiment.) We must, incidentally, make it clear from the beginning that if a thing is not a science, it is not necessarily bad. For example, love is not a science. So, if something is said not to be a science, it does not mean that there is something wrong with it; it just means that it is not a science. |
|
1 | 3 | The Relation of Physics to Other Sciences | 2 | Chemistry | The science which is perhaps the most deeply affected by physics is chemistry. Historically, the early days of chemistry dealt almost entirely with what we now call inorganic chemistry, the chemistry of substances which are not associated with living things. Considerable analysis was required to discover the existence of the many elements and their relationships—how they make the various relatively simple compounds found in rocks, earth, etc. This early chemistry was very important for physics. The interaction between the two sciences was very great because the theory of atoms was substantiated to a large extent by experiments in chemistry. The theory of chemistry, i.e., of the reactions themselves, was summarized to a large extent in the periodic chart of Mendeleev, which brings out many strange relationships among the various elements, and it was the collection of rules as to which substance is combined with which, and how, that constituted inorganic chemistry. All these rules were ultimately explained in principle by quantum mechanics, so that theoretical chemistry is in fact physics. On the other hand, it must be emphasized that this explanation is in principle. We have already discussed the difference between knowing the rules of the game of chess, and being able to play. So it is that we may know the rules, but we cannot play very well. It turns out to be very difficult to predict precisely what will happen in a given chemical reaction; nevertheless, the deepest part of theoretical chemistry must end up in quantum mechanics. There is also a branch of physics and chemistry which was developed by both sciences together, and which is extremely important. This is the method of statistics applied in a situation in which there are mechanical laws, which is aptly called statistical mechanics. 
In any chemical situation a large number of atoms are involved, and we have seen that the atoms are all jiggling around in a very random and complicated way. If we could analyze each collision, and were able to follow in detail the motion of each molecule, we might hope to figure out what would happen, but the many numbers needed to keep track of all these molecules exceed so enormously the capacity of any computer, and certainly the capacity of the mind, that it was important to develop a method for dealing with such complicated situations. Statistical mechanics, then, is the science of the phenomena of heat, or thermodynamics. Inorganic chemistry is, as a science, now reduced essentially to what are called physical chemistry and quantum chemistry; physical chemistry to study the rates at which reactions occur and what is happening in detail (How do the molecules hit? Which pieces fly off first? etc.), and quantum chemistry to help us understand what happens in terms of the physical laws. The other branch of chemistry is organic chemistry, the chemistry of the substances which are associated with living things. For a time it was believed that the substances which are associated with living things were so marvelous that they could not be made by hand, from inorganic materials. This is not at all true—they are just the same as the substances made in inorganic chemistry, but more complicated arrangements of atoms are involved. Organic chemistry obviously has a very close relationship to the biology which supplies its substances, and to industry, and furthermore, much physical chemistry and quantum mechanics can be applied to organic as well as to inorganic compounds. However, the main problems of organic chemistry are not in these aspects, but rather in the analysis and synthesis of the substances which are formed in biological systems, in living things. This leads imperceptibly, in steps, toward biochemistry, and then into biology itself, or molecular biology. |
|
1 | 3 | The Relation of Physics to Other Sciences | 3 | Biology | Thus we come to the science of biology, which is the study of living things. In the early days of biology, the biologists had to deal with the purely descriptive problem of finding out what living things there were, and so they just had to count such things as the hairs of the limbs of fleas. After these matters were worked out with a great deal of interest, the biologists went into the machinery inside the living bodies, first from a gross standpoint, naturally, because it takes some effort to get into the finer details. There was an interesting early relationship between physics and biology in which biology helped physics in the discovery of the conservation of energy, which was first demonstrated by Mayer in connection with the amount of heat taken in and given out by a living creature. If we look at the processes of biology of living animals more closely, we see many physical phenomena: the circulation of blood, pumps, pressure, etc. There are nerves: we know what is happening when we step on a sharp stone, and that somehow or other the information goes from the leg up. It is interesting how that happens. In their study of nerves, the biologists have come to the conclusion that nerves are very fine tubes with a complex wall which is very thin; through this wall the cell pumps ions, so that there are positive ions on the outside and negative ions on the inside, like a capacitor. Now this membrane has an interesting property; if it “discharges” in one place, i.e., if some of the ions were able to move through one place, so that the electric voltage is reduced there, that electrical influence makes itself felt on the ions in the neighborhood, and it affects the membrane in such a way that it lets the ions through at neighboring points also. 
This in turn affects it farther along, etc., and so there is a wave of “penetrability” of the membrane which runs down the fiber when it is “excited” at one end by stepping on the sharp stone. This wave is somewhat analogous to a long sequence of vertical dominoes; if the end one is pushed over, that one pushes the next, etc. Of course this will transmit only one message unless the dominoes are set up again; and similarly in the nerve cell, there are processes which pump the ions slowly out again, to get the nerve ready for the next impulse. So it is that we know what we are doing (or at least where we are). Of course the electrical effects associated with this nerve impulse can be picked up with electrical instruments, and because there are electrical effects, obviously the physics of electrical effects has had a great deal of influence on understanding the phenomenon. The opposite effect is that, from somewhere in the brain, a message is sent out along a nerve. What happens at the end of the nerve? There the nerve branches out into fine little things, connected to a structure near a muscle, called an endplate. For reasons which are not exactly understood, when the impulse reaches the end of the nerve, little packets of a chemical called acetylcholine are shot off (five or ten molecules at a time) and they affect the muscle fiber and make it contract—how simple! What makes a muscle contract? A muscle is a very large number of fibers close together, containing two different substances, myosin and actomyosin, but the machinery by which the chemical reaction induced by acetylcholine can modify the dimensions of the muscle is not yet known. Thus the fundamental processes in the muscle that make mechanical motions are not known. Biology is such an enormously wide field that there are hosts of other problems that we cannot mention at all—problems on how vision works (what the light does in the eye), how hearing works, etc. 
(The way in which thinking works we shall discuss later under psychology.) Now, these things concerning biology which we have just discussed are, from a biological standpoint, really not fundamental, at the bottom of life, in the sense that even if we understood them we still would not understand life itself. To illustrate: the men who study nerves feel their work is very important, because after all you cannot have animals without nerves. But you can have life without nerves. Plants have neither nerves nor muscles, but they are working, they are alive, just the same. So for the fundamental problems of biology we must look deeper; when we do, we discover that all living things have a great many characteristics in common. The most common feature is that they are made of cells, within each of which is complex machinery for doing things chemically. In plant cells, for example, there is machinery for picking up light and generating glucose, which is consumed in the dark to keep the plant alive. When the plant is eaten the glucose itself generates in the animal a series of chemical reactions very closely related to photosynthesis (and its opposite effect in the dark) in plants. In the cells of living systems there are many elaborate chemical reactions, in which one compound is changed into another and another. To give some impression of the enormous efforts that have gone into the study of biochemistry, the chart in Fig. 3–1 summarizes our knowledge to date on just one small part of the many series of reactions which occur in cells, perhaps a percent or so of it. Here we see a whole series of molecules which change from one to another in a sequence or cycle of rather small steps. It is called the Krebs cycle, the respiratory cycle. 
Each of the chemicals and each of the steps is fairly simple, in terms of what change is made in the molecule, but—and this is a centrally important discovery in biochemistry—these changes are relatively difficult to accomplish in a laboratory. If we have one substance and another very similar substance, the one does not just turn into the other, because the two forms are usually separated by an energy barrier or “hill.” Consider this analogy: If we wanted to take an object from one place to another, at the same level but on the other side of a hill, we could push it over the top, but to do so requires the addition of some energy. Thus most chemical reactions do not occur, because there is what is called an activation energy in the way. Adding an extra atom to our chemical requires that we get it close enough that some rearrangement can occur; then it will stick. But if we cannot give it enough energy to get it close enough, it will not go to completion, it will just go part way up the “hill” and back down again. However, if we could literally take the molecules in our hands and push and pull the atoms around in such a way as to open a hole to let the new atom in, and then let it snap back, we would have found another way, around the hill, which would not require extra energy, and the reaction would go easily. Now there actually are, in the cells, very large molecules, much larger than the ones whose changes we have been describing, which in some complicated way hold the smaller molecules just right, so that the reaction can occur easily. These very large and complicated things are called enzymes. (They were first called ferments, because they were originally discovered in the fermentation of sugar. In fact, some of the first reactions in the cycle were discovered there.) In the presence of an enzyme the reaction will go. An enzyme is made of another substance called protein. 
Enzymes are very big and complicated, and each one is different, each being built to control a certain special reaction. The names of the enzymes are written in Fig. 3–1 at each reaction. (Sometimes the same enzyme may control two reactions.) We emphasize that the enzymes themselves are not involved in the reaction directly. They do not change; they merely let an atom go from one place to another. Having done so, the enzyme is ready to do it to the next molecule, like a machine in a factory. Of course, there must be a supply of certain atoms and a way of disposing of other atoms. Take hydrogen, for example: there are enzymes which have special units on them which carry the hydrogen for all chemical reactions. For example, there are three or four hydrogen-reducing enzymes which are used all over our cycle in different places. It is interesting that the machinery which liberates some hydrogen at one place will take that hydrogen and use it somewhere else. The most important feature of the cycle of Fig. 3–1 is the transformation from GDP to GTP (guanosine-di-phosphate to guanosine-tri-phosphate) because the one substance has much more energy in it than the other. Just as there is a “box” in certain enzymes for carrying hydrogen atoms around, there are special energy-carrying “boxes” which involve the triphosphate group. So, GTP has more energy than GDP and if the cycle is going one way, we are producing molecules which have extra energy and which can go drive some other cycle which requires energy, for example the contraction of muscle. The muscle will not contract unless there is GTP. We can take muscle fiber, put it in water, and add GTP, and the fibers contract, changing GTP to GDP if the right enzymes are present. So the real system is in the GDP-GTP transformation; in the dark the GTP which has been stored up during the day is used to run the whole cycle around the other way. 
An enzyme, you see, does not care in which direction the reaction goes, for if it did it would violate one of the laws of physics. Physics is of great importance in biology and other sciences for still another reason, that has to do with experimental techniques. In fact, if it were not for the great development of experimental physics, these biochemistry charts would not be known today. The reason is that the most useful tool of all for analyzing this fantastically complex system is to label the atoms which are used in the reactions. Thus, if we could introduce into the cycle some carbon dioxide which has a “green mark” on it, and then measure after three seconds where the green mark is, and again measure after ten seconds, etc., we could trace out the course of the reactions. What are the “green marks”? They are different isotopes. We recall that the chemical properties of atoms are determined by the number of electrons, not by the mass of the nucleus. But there can be, for example in carbon, six neutrons or seven neutrons, together with the six protons which all carbon nuclei have. Chemically, the two atoms C$^{12}$ and C$^{13}$ are the same, but they differ in weight and they have different nuclear properties, and so they are distinguishable. By using these isotopes of different weights, or even radioactive isotopes like C$^{14}$, which provide a more sensitive means for tracing very small quantities, it is possible to trace the reactions. Now, we return to the description of enzymes and proteins. Not all proteins are enzymes, but all enzymes are proteins. There are many proteins, such as the proteins in muscle, the structural proteins which are, for example, in cartilage and hair, skin, etc., that are not themselves enzymes. However, proteins are a very characteristic substance of life: first of all they make up all the enzymes, and second, they make up much of the rest of living material. Proteins have a very interesting and simple structure. 
They are a series, or chain, of different amino acids. There are twenty different amino acids, and they all can combine with each other to form chains in which the backbone is CO-NH, etc. Proteins are nothing but chains of various ones of these twenty amino acids. Each of the amino acids probably serves some special purpose. Some, for example, have a sulfur atom at a certain place; when two sulfur atoms are in the same protein, they form a bond, that is, they tie the chain together at two points and form a loop. Another has extra oxygen atoms which make it an acidic substance, another has a basic characteristic. Some of them have big groups hanging out to one side, so that they take up a lot of space. One of the amino acids, called proline, is not really an amino acid, but an imino acid. There is a slight difference, with the result that when proline is in the chain, there is a kink in the chain. If we wished to manufacture a particular protein, we would give these instructions: put one of those sulfur hooks here; next, add something to take up space; then attach something to put a kink in the chain. In this way, we will get a complicated-looking chain, hooked together and having some complex structure; this is presumably just the manner in which all the various enzymes are made. One of the great triumphs in recent times (since 1960) was at last to discover the exact spatial atomic arrangement of certain proteins, which involve some fifty-six or sixty amino acids in a row. Over a thousand atoms (more nearly two thousand, if we count the hydrogen atoms) have been located in a complex pattern in two proteins. The first was hemoglobin. One of the sad aspects of this discovery is that we cannot see anything from the pattern; we do not understand why it works the way it does. Of course, that is the next problem to be attacked. Another problem is how do the enzymes know what to be? 
A red-eyed fly makes a red-eyed fly baby, and so the information for the whole pattern of enzymes to make red pigment must be passed from one fly to the next. This is done by a substance in the nucleus of the cell, not a protein, called DNA (short for desoxyribose nucleic acid). This is the key substance which is passed from one cell to another (for instance sperm cells consist mostly of DNA) and carries the information as to how to make the enzymes. DNA is the “blueprint.” What does the blueprint look like and how does it work? First, the blueprint must be able to reproduce itself. Secondly, it must be able to instruct the protein. Concerning the reproduction, we might think that this proceeds like cell reproduction. Cells simply grow bigger and then divide in half. Must it be thus with DNA molecules, then, that they too grow bigger and divide in half? Every atom certainly does not grow bigger and divide in half! No, it is impossible to reproduce a molecule except by some more clever way. The structure of the substance DNA was studied for a long time, first chemically to find the composition, and then with x-rays to find the pattern in space. The result was the following remarkable discovery: The DNA molecule is a pair of chains, twisted upon each other. The backbone of each of these chains, which are analogous to the chains of proteins but chemically quite different, is a series of sugar and phosphate groups, as shown in Fig. 3–2. Now we see how the chain can contain instructions, for if we could split this chain down the middle, we would have a series $BAADC\ldots$ and every living thing could have a different series. Thus perhaps, in some way, the specific instructions for the manufacture of proteins are contained in the specific series of the DNA. Attached to each sugar along the line, and linking the two chains together, are certain pairs of cross-links. 
However, they are not all of the same kind; there are four kinds, called adenine, thymine, cytosine, and guanine, but let us call them $A$, $B$, $C$, and $D$. The interesting thing is that only certain pairs can sit opposite each other, for example $A$ with $B$ and $C$ with $D$. These pairs are put on the two chains in such a way that they “fit together,” and have a strong energy of interaction. However, $C$ will not fit with $A$, and $B$ will not fit with $C$; they will only fit in pairs, $A$ against $B$ and $C$ against $D$. Therefore if one is $C$, the other must be $D$, etc. Whatever the letters may be in one chain, each one must have its specific complementary letter on the other chain. What then about reproduction? Suppose we split this chain in two. How can we make another one just like it? If, in the substances of the cells, there is a manufacturing department which brings up phosphate, sugar, and $A$, $B$, $C$, $D$ units not connected in a chain, the only ones which will attach to our split chain will be the correct ones, the complements of $BAADC\ldots$, namely, $ABBCD\ldots$ Thus what happens is that the chain splits down the middle during cell division, one half ultimately to go with one cell, the other half to end up in the other cell; when separated, a new complementary chain is made by each half-chain. Next comes the question, precisely how does the order of the $A$, $B$, $C$, $D$ units determine the arrangement of the amino acids in the protein? This is the central unsolved problem in biology today. The first clues, or pieces of information, however, are these: There are in the cell tiny particles called ribosomes, and it is now known that that is the place where proteins are made. But the ribosomes are not in the nucleus, where the DNA and its instructions are. Something seems to be the matter. 
However, it is also known that little molecule pieces come off the DNA—not as long as the big DNA molecule that carries all the information itself, but like a small section of it. This is called RNA, but that is not essential. It is a kind of copy of the DNA, a short copy. The RNA, which somehow carries a message as to what kind of protein to make, goes over to the ribosome; that is known. When it gets there, protein is synthesized at the ribosome. That is also known. However, the details of how the amino acids come in and are arranged in accordance with a code that is on the RNA are, as yet, still unknown. We do not know how to read it. If we knew, for example, the “lineup” $A$, $B$, $C$, $C$, $A$, we could not tell you what protein is to be made. Certainly no subject or field is making more progress on so many fronts at the present moment than biology, and if we were to name the most powerful assumption of all, which leads one on and on in an attempt to understand life, it is that all things are made of atoms, and that everything that living things do can be understood in terms of the jigglings and wigglings of atoms. |
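The pairing rule described above (only $A$ fits with $B$, only $C$ with $D$) and the replication step can be put in a few lines of code. This is only a sketch using Feynman's letters, with the function name our own invention, not real base chemistry:

```python
# Feynman's lettering: A pairs only with B, and C pairs only with D.
PAIR = {"A": "B", "B": "A", "C": "D", "D": "C"}

def complement(chain):
    """The only chain of units that can assemble against a split half-chain."""
    return "".join(PAIR[unit] for unit in chain)

half = "BAADC"
print(complement(half))  # -> ABBCD, the series named in the text
# Replication: each half-chain rebuilds its partner, so complementing
# twice returns the original -- two identical double chains result.
assert complement(complement(half)) == half
```

The double-complement check is the whole trick of reproduction: splitting the pair and letting each half dictate its partner yields two copies of the original double chain.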
|
1 | 4 | Conservation of Energy | 1 | What is energy? | In this chapter, we begin our more detailed study of the different aspects of physics, having finished our description of things in general. To illustrate the ideas and the kind of reasoning that might be used in theoretical physics, we shall now examine one of the most basic laws of physics, the conservation of energy. There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law—it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. (Something like the bishop on a red square, and after a number of moves—details unknown—it is still on some red square. It is a law of this nature.) Since it is an abstract idea, we shall illustrate the meaning of it by an analogy. Imagine a child, perhaps “Dennis the Menace,” who has blocks which are absolutely indestructible, and cannot be divided into pieces. Each is the same as the other. Let us suppose that he has $28$ blocks. His mother puts him with his $28$ blocks into a room at the beginning of the day. At the end of the day, being curious, she counts the blocks very carefully, and discovers a phenomenal law—no matter what he does with the blocks, there are always $28$ remaining! 
This continues for a number of days, until one day there are only $27$ blocks, but a little investigating shows that there is one under the rug—she must look everywhere to be sure that the number of blocks has not changed. One day, however, the number appears to change—there are only $26$ blocks. Careful investigation indicates that the window was open, and upon looking outside, the other two blocks are found. Another day, careful count indicates that there are $30$ blocks! This causes considerable consternation, until it is realized that Bruce came to visit, bringing his blocks with him, and he left a few at Dennis’ house. After she has disposed of the extra blocks, she closes the window, does not let Bruce in, and then everything is going along all right, until one time she counts and finds only $25$ blocks. However, there is a box in the room, a toy box, and the mother goes to open the toy box, but the boy says “No, do not open my toy box,” and screams. Mother is not allowed to open the toy box. Being extremely curious, and somewhat ingenious, she invents a scheme! She knows that a block weighs three ounces, so she weighs the box at a time when she sees $28$ blocks, and it weighs $16$ ounces. The next time she wishes to check, she weighs the box again, subtracts sixteen ounces and divides by three. She discovers the following: \begin{equation} \label{Eq:I:4:1} \begin{pmatrix} \text{number of}\\ \text{blocks seen} \end{pmatrix}+ \frac{(\text{weight of box})-\text{$16$ ounces}}{\text{$3$ ounces}}= \text{constant}. \end{equation}
There then appear to be some new deviations, but careful study indicates that the dirty water in the bathtub is changing its level. The child is throwing blocks into the water, and she cannot see them because it is so dirty, but she can find out how many blocks are in the water by adding another term to her formula. Since the original height of the water was $6$ inches and each block raises the water a quarter of an inch, this new formula would be: \begin{align} \begin{pmatrix} \text{number of}\\ \text{blocks seen} \end{pmatrix}&+ \frac{(\text{weight of box})-\text{$16$ ounces}} {\text{$3$ ounces}}\notag\\[1ex] \label{Eq:I:4:2} &+\frac{(\text{height of water})-\text{$6$ inches}} {\text{$1/4$ inch}}= \text{constant}. \end{align}
In the gradual increase in the complexity of her world, she finds a whole series of terms representing ways of calculating how many blocks are in places where she is not allowed to look. As a result, she finds a complex formula, a quantity which has to be computed, which always stays the same in her situation. What is the analogy of this to the conservation of energy? The most remarkable aspect that must be abstracted from this picture is that there are no blocks. Take away the first terms in (4.1) and (4.2) and we find ourselves calculating more or less abstract things. The analogy has the following points. First, when we are calculating the energy, sometimes some of it leaves the system and goes away, or sometimes some comes in. In order to verify the conservation of energy, we must be careful that we have not put any in or taken any out. Second, the energy has a large number of different forms, and there is a formula for each one. These are: gravitational energy, kinetic energy, heat energy, elastic energy, electrical energy, chemical energy, radiant energy, nuclear energy, mass energy. If we total up the formulas for each of these contributions, it will not change except for energy going in and out. It is important to realize that in physics today, we have no knowledge of what energy is. We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. However, there are formulas for calculating some numerical quantity, and when we add it all together it gives “$28$”—always the same number. It is an abstract thing in that it does not tell us the mechanism or the reasons for the various formulas. |
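The mother's bookkeeping in Eqs. (4.1) and (4.2) can be sketched directly. The weights and heights are the ones given in the text; the function itself is only an illustration:

```python
def total_blocks(seen, box_weight_oz, water_height_in):
    """Eq. (4.2): visible blocks plus the blocks inferred from the box
    weight (empty box 16 oz, 3 oz per block) and from the bathwater
    level (6 inches to start, 1/4 inch per submerged block)."""
    hidden_in_box = (box_weight_oz - 16) / 3
    hidden_in_water = (water_height_in - 6) / 0.25
    return seen + hidden_in_box + hidden_in_water

print(total_blocks(28, 16, 6))     # all 28 in plain view -> 28.0
print(total_blocks(25, 22, 6.25))  # 2 in the box, 1 in the tub -> 28.0
```

However the blocks are hidden, the computed quantity stays at $28$; that constancy, not any particular term, is the point of the analogy.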
|
1 | 4 | Conservation of Energy | 2 | Gravitational potential energy | Conservation of energy can be understood only if we have the formula for all of its forms. I wish to discuss the formula for gravitational energy near the surface of the Earth, and I wish to derive this formula in a way which has nothing to do with history but is simply a line of reasoning invented for this particular lecture to give you an illustration of the remarkable fact that a great deal about nature can be extracted from a few facts and close reasoning. It is an illustration of the kind of work theoretical physicists become involved in. It is patterned after a most excellent argument by Mr. Carnot on the efficiency of steam engines.1 Consider weight-lifting machines—machines which have the property that they lift one weight by lowering another. Let us also make a hypothesis: that there is no such thing as perpetual motion with these weight-lifting machines. (In fact, that there is no perpetual motion at all is a general statement of the law of conservation of energy.) We must be careful to define perpetual motion. First, let us do it for weight-lifting machines. If, when we have lifted and lowered a lot of weights and restored the machine to the original condition, we find that the net result is to have lifted a weight, then we have a perpetual motion machine because we can use that lifted weight to run something else. That is, provided the machine which lifted the weight is brought back to its exact original condition, and furthermore that it is completely self-contained—that it has not received the energy to lift that weight from some external source—like Bruce’s blocks. A very simple weight-lifting machine is shown in Fig. 4–1. This machine lifts weights three units “strong.” We place three units on one balance pan, and one unit on the other. However, in order to get it actually to work, we must lift a little weight off the left pan. 
On the other hand, we could lift a one-unit weight by lowering the three-unit weight, if we cheat a little by lifting a little weight off the other pan. Of course, we realize that with any actual lifting machine, we must add a little extra to get it to run. This we disregard, temporarily. Ideal machines, although they do not exist, do not require anything extra. A machine that we actually use can be, in a sense, almost reversible: that is, if it will lift the weight of three by lowering a weight of one, then it will also lift nearly the weight of one the same amount by lowering the weight of three. We imagine that there are two classes of machines, those that are not reversible, which includes all real machines, and those that are reversible, which of course are actually not attainable no matter how careful we may be in our design of bearings, levers, etc. We suppose, however, that there is such a thing—a reversible machine—which lowers one unit of weight (a pound or any other unit) by one unit of distance, and at the same time lifts a three-unit weight. Call this reversible machine, Machine $A$. Suppose this particular reversible machine lifts the three-unit weight a distance $X$. Then suppose we have another machine, Machine $B$, which is not necessarily reversible, which also lowers a unit weight a unit distance, but which lifts three units a distance $Y$. We can now prove that $Y$ is not higher than $X$; that is, it is impossible to build a machine that will lift a weight any higher than it will be lifted by a reversible machine. Let us see why. Let us suppose that $Y$ were higher than $X$. We take a one-unit weight and lower it one unit height with Machine $B$, and that lifts the three-unit weight up a distance $Y$. Then we could lower the weight from $Y$ to $X$, obtaining free power, and use the reversible Machine $A$, running backwards, to lower the three-unit weight a distance $X$ and lift the one-unit weight by one unit height. 
This will put the one-unit weight back where it was before, and leave both machines ready to be used again! We would therefore have perpetual motion if $Y$ were higher than $X$, which we assumed was impossible. With those assumptions, we thus deduce that $Y$ is not higher than $X$, so that of all machines that can be designed, the reversible machine is the best. We can also see that all reversible machines must lift to exactly the same height. Suppose that $B$ were really reversible also. The argument that $Y$ is not higher than $X$ is, of course, just as good as it was before, but we can also make our argument the other way around, using the machines in the opposite order, and prove that $X$ is not higher than $Y$. This, then, is a very remarkable observation because it permits us to analyze the height to which different machines are going to lift something without looking at the interior mechanism. We know at once that if somebody makes an enormously elaborate series of levers that lift three units a certain distance by lowering one unit by one unit distance, and we compare it with a simple lever which does the same thing and is fundamentally reversible, his machine will lift it no higher, but perhaps less high. If his machine is reversible, we also know exactly how high it will lift. To summarize: every reversible machine, no matter how it operates, which drops one pound one foot and lifts a three-pound weight always lifts it the same distance, $X$. This is clearly a universal law of great utility. The next question is, of course, what is $X$? Suppose we have a reversible machine which is going to lift this distance $X$, three for one. We set up three balls in a rack which does not move, as shown in Fig. 4–2. One ball is held on a stage at a distance one foot above the ground. The machine can lift three balls, lowering one by a distance $1$. 
Now, we have arranged that the platform which holds three balls has a floor and two shelves, exactly spaced at distance $X$, and further, that the rack which holds the balls is spaced at distance $X$, (a). First we roll the balls horizontally from the rack to the shelves, (b), and we suppose that this takes no energy because we do not change the height. The reversible machine then operates: it lowers the single ball to the floor, and it lifts the rack a distance $X$, (c). Now we have ingeniously arranged the rack so that these balls are again even with the platforms. Thus we unload the balls onto the rack, (d); having unloaded the balls, we can restore the machine to its original condition. Now we have three balls on the upper three shelves and one at the bottom. But the strange thing is that, in a certain way of speaking, we have not lifted two of them at all because, after all, there were balls on shelves $2$ and $3$ before. The resulting effect has been to lift one ball a distance $3X$. Now, if $3X$ exceeds one foot, then we can lower the ball to return the machine to the initial condition, (f), and we can run the apparatus again. Therefore $3X$ cannot exceed one foot, for if $3X$ exceeds one foot we can make perpetual motion. Likewise, we can prove that one foot cannot exceed $3X$, by making the whole machine run the opposite way, since it is a reversible machine. Therefore $3X$ is neither greater nor less than a foot, and we discover then, by argument alone, the law that $X=\tfrac{1}{3}$ foot. The generalization is clear: one pound falls a certain distance in operating a reversible machine; then the machine can lift $p$ pounds this distance divided by $p$. Another way of putting the result is that three pounds times the height lifted, which in our problem was $X$, is equal to one pound times the distance lowered, which is one foot in this case. 
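The generalization just stated, that one pound falling one foot lets a reversible machine lift $p$ pounds only $1/p$ foot, amounts to equating weight times height lost with weight times height gained. A sketch (the function name is ours):

```python
from fractions import Fraction

def lift_height(p, drop_weight=1, drop_height=1):
    """Height X to which a reversible machine lifts p unit weights
    while drop_weight pounds fall drop_height feet: X = 1/p for the
    one-pound, one-foot case argued in the text."""
    return Fraction(drop_weight * drop_height, p)

X = lift_height(3)
print(X)            # -> 1/3, the ball-machine result
assert 3 * X == 1   # energy gained by the rack equals energy lost by the ball
```

Exact fractions are used so that $3X = 1$ foot comes out as an identity rather than a rounded float.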
If we take all the weights and multiply them by the heights at which they are now, above the floor, let the machine operate, and then multiply all the weights by all the heights again, there will be no change. (We have to generalize the example where we moved only one weight to the case where when we lower one we lift several different ones—but that is easy.) We call the sum of the weights times the heights gravitational potential energy—the energy which an object has because of its relationship in space, relative to the earth. The formula for gravitational energy, then, so long as we are not too far from the earth (the force weakens as we go higher) is \begin{equation} \label{Eq:I:4:3} \begin{pmatrix} \text{gravitational}\\ \text{potential energy}\\ \text{for one object} \end{pmatrix}= (\text{weight})\times(\text{height}). \end{equation} It is a very beautiful line of reasoning. The only problem is that perhaps it is not true. (After all, nature does not have to go along with our reasoning.) For example, perhaps perpetual motion is, in fact, possible. Some of the assumptions may be wrong, or we may have made a mistake in reasoning, so it is always necessary to check. It turns out experimentally, in fact, to be true. The general name of energy which has to do with location relative to something else is called potential energy. In this particular case, of course, we call it gravitational potential energy. If it is a question of electrical forces against which we are working, instead of gravitational forces, if we are “lifting” charges away from other charges with a lot of levers, then the energy content is called electrical potential energy. 
The general principle is that the change in the energy is the force times the distance that the force is pushed, and that this is a change in energy in general: \begin{equation} \label{Eq:I:4:4} \begin{pmatrix} \text{change in}\\ \text{energy} \end{pmatrix}= (\text{force})\times \begin{pmatrix} \text{distance force}\\ \text{acts through} \end{pmatrix}. \end{equation} We will return to many of these other kinds of energy as we continue the course. The principle of the conservation of energy is very useful for deducing what will happen in a number of circumstances. In high school we learned a lot of laws about pulleys and levers used in different ways. We can now see that these “laws” are all the same thing, and that we did not have to memorize $75$ rules to figure it out. A simple example is a smooth inclined plane which is, happily, a three-four-five triangle (Fig. 4–3). We hang a one-pound weight on the inclined plane with a pulley, and on the other side of the pulley, a weight $W$. We want to know how heavy $W$ must be to balance the one pound on the plane. How can we figure that out? If we say it is just balanced, it is reversible and so can move up and down, and we can consider the following situation. In the initial circumstance, (a), the one pound weight is at the bottom and weight $W$ is at the top. When $W$ has slipped down in a reversible way, (b), we have a one-pound weight at the top and the weight $W$ the slant distance, or five feet, from the plane in which it was before. We lifted the one-pound weight only three feet and we lowered $W$ pounds by five feet. Therefore $W=\tfrac{3}{5}$ of a pound. Note that we deduced this from the conservation of energy, and not from force components. Cleverness, however, is relative. It can be deduced in a way which is even more brilliant, discovered by Stevinus and inscribed on his tombstone.2 Figure 4–4 explains that it has to be $\tfrac{3}{5}$ of a pound, because the chain does not go around. 
It is evident that the lower part of the chain is balanced by itself, so that the pull of the five weights on one side must balance the pull of three weights on the other, or whatever the ratio of the legs. You see, by looking at this diagram, that $W$ must be $\tfrac{3}{5}$ of a pound. (If you get an epitaph like that on your gravestone, you are doing fine.) Let us now illustrate the energy principle with a more complicated problem, the screw jack shown in Fig. 4–5. A handle $20$ inches long is used to turn the screw, which has $10$ threads to the inch. We would like to know how much force would be needed at the handle to lift one ton ($2000$ pounds). If we want to lift the ton one inch, say, then we must turn the handle around ten times. When it goes around once it goes approximately $126$ inches. The handle must thus travel $1260$ inches, and if we used various pulleys, etc., we would be lifting our one ton with an unknown smaller weight $W$ applied to the end of the handle. So we find out that $W$ is about $1.6$ pounds. This is a result of the conservation of energy. Take now the somewhat more complicated example shown in Fig. 4–6. A rod or bar, $8$ feet long, is supported at one end. In the middle of the bar is a weight of $60$ pounds, and at a distance of two feet from the support there is a weight of $100$ pounds. How hard do we have to lift the end of the bar in order to keep it balanced, disregarding the weight of the bar? Suppose we put a pulley at one end and hang a weight on the pulley. How big would the weight $W$ have to be in order for it to balance? We imagine that the weight falls any arbitrary distance—to make it easy for ourselves suppose it goes down $4$ inches—how high would the two load weights rise? The center rises $2$ inches, and the point a quarter of the way from the fixed end lifts $1$ inch. 
Therefore, the principle that the sum of the heights times the weights does not change tells us that the weight $W$ times $4$ inches down, plus $60$ pounds times $2$ inches up, plus $100$ pounds times $1$ inch has to add up to nothing: \begin{equation} \label{Eq:I:4:5} \begin{gathered} -4W+(2)(60)+(1)(100)=0,\\[.5ex] W=\text{$55$ lb}. \end{gathered} \end{equation}
Thus we must have a $55$-pound weight to balance the bar. In this way we can work out the laws of “balance”—the statics of complicated bridge arrangements, and so on. This approach is called the principle of virtual work, because in order to apply this argument we had to imagine that the structure moves a little—even though it is not really moving or even movable. We use the very small imagined motion to apply the principle of conservation of energy.
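The three worked examples of this section (the inclined plane, the screw jack, and the bar) all reduce to the same weight-times-height bookkeeping. A minimal Python sketch, with every number taken from the text, is:

```python
import math
from fractions import Fraction

# Inclined plane (Fig. 4-3): W drops the 5-ft slant distance while
# the 1-lb weight rises the 3-ft vertical leg of the 3-4-5 triangle.
W_plane = Fraction(1 * 3, 5)
assert W_plane == Fraction(3, 5)   # three-fifths of a pound

# Screw jack (Fig. 4-5): lifting one ton 1 inch takes 10 turns of a
# 20-inch handle, so the handle end travels 10 * 2*pi*20 inches.
handle_travel = 10 * 2 * math.pi * 20   # about 1257 inches
force = 2000.0 * 1.0 / handle_travel    # about 1.6 pounds
assert 1.5 < force < 1.7

# Bar (Fig. 4-6): W falls 4 inches, the 60-lb load rises 2 inches,
# and the 100-lb load rises 1 inch: -4W + (2)(60) + (1)(100) = 0.
W_bar = Fraction(2 * 60 + 1 * 100, 4)
assert W_bar == 55   # pounds, as in Eq. (4.5)
```

Each assertion is Eq. (4.4) applied to the small imagined motion, which is all the principle of virtual work amounts to.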
Conservation of Energy

4–3 Kinetic energy

To illustrate another type of energy we consider a pendulum (Fig. 4–7). If we pull the mass aside and release it, it swings back and forth. In its motion, it loses height in going from either end to the center. Where does the potential energy go? Gravitational energy disappears when it is down at the bottom; nevertheless, it will climb up again. The gravitational energy must have gone into another form. Evidently it is by virtue of its motion that it is able to climb up again, so we have the conversion of gravitational energy into some other form when it reaches the bottom. We must get a formula for the energy of motion. Now, recalling our arguments about reversible machines, we can easily see that in the motion at the bottom there must be a quantity of energy which permits it to rise a certain height, and which has nothing to do with the machinery by which it comes up or the path by which it comes up. So we have an equivalence formula something like the one we wrote for the child’s blocks. We have another form to represent the energy. It is easy to say what it is. The kinetic energy at the bottom equals the weight times the height that it could go, corresponding to its velocity: $\text{K.E.}= WH$. What we need is the formula which tells us the height by some rule that has to do with the motion of objects. If we start something out with a certain velocity, say straight up, it will reach a certain height; we do not know what it is yet, but it depends on the velocity—there is a formula for that. Then to find the formula for kinetic energy for an object moving with velocity $V$, we must calculate the height that it could reach, and multiply by the weight. We shall soon find that we can write it this way: \begin{equation} \label{Eq:I:4:6} \text{K.E.}=WV^2/2g. \end{equation} Of course, the fact that motion has energy has nothing to do with the fact that we are in a gravitational field.
It makes no difference where the motion came from. This is a general formula for various velocities. Both (4.3) and (4.6) are approximate formulas, the first because it is incorrect when the heights are great, i.e., when the heights are so high that gravity is weakening; the second, because of the relativistic correction at high speeds. However, when we do finally get the exact formula for the energy, then the law of conservation of energy is correct.
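A numerical illustration of Eq. (4.6) may help. In the sketch below the launch speed and the round value $g = 32$ ft/s² are assumptions chosen for illustration, not numbers from the text.

```python
g = 32.0   # ft/s^2, rough acceleration of gravity (assumed round value)
W = 1.0    # pounds
V = 8.0    # ft/s, an arbitrary upward launch speed (assumed)

# The height the object could coast to, starting upward at speed V:
height = V**2 / (2 * g)   # 1 ft with these numbers

# K.E. at the bottom = weight times the height it could reach:
ke = W * height
assert abs(ke - W * V**2 / (2 * g)) < 1e-12   # K.E. = W V^2 / 2g
```

The point of the computation is the one the text makes: the kinetic energy is defined through the height the body *could* rise, and the formula $V^2/2g$ supplies that height.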
4–4 Other forms of energy

We can continue in this way to illustrate the existence of energy in other forms. First, consider elastic energy. If we pull down on a spring, we must do some work, for when we have it down, we can lift weights with it. Therefore in its stretched condition it has a possibility of doing some work. If we were to evaluate the sums of weights times heights, it would not check out—we must add something else to account for the fact that the spring is under tension. Elastic energy is the formula for a spring when it is stretched. How much energy is it? If we let go, the elastic energy, as the spring passes through the equilibrium point, is converted to kinetic energy and it goes back and forth between compressing or stretching the spring and kinetic energy of motion. (There is also some gravitational energy going in and out, but we can do this experiment “sideways” if we like.) It keeps going until the losses—Aha! We have cheated all the way through by putting on little weights to move things or saying that the machines are reversible, or that they go on forever, but we can see that things do stop, eventually. Where is the energy when the spring has finished moving up and down? This brings in another form of energy: heat energy. Inside a spring or a lever there are crystals which are made up of lots of atoms, and with great care and delicacy in the arrangement of the parts one can try to adjust things so that as something rolls on something else, none of the atoms do any jiggling at all. But one must be very careful. Ordinarily when things roll, there is bumping and jiggling because of the irregularities of the material, and the atoms start to wiggle inside. So we lose track of that energy; we find the atoms are wiggling inside in a random and confused manner after the motion slows down. There is still kinetic energy, all right, but it is not associated with visible motion. What a dream!
How do we know there is still kinetic energy? It turns out that with thermometers you can find out that, in fact, the spring or the lever is warmer, and that there is really an increase of kinetic energy by a definite amount. We call this form of energy heat energy, but we know that it is not really a new form, it is just kinetic energy—internal motion. (One of the difficulties with all these experiments with matter that we do on a large scale is that we cannot really demonstrate the conservation of energy and we cannot really make our reversible machines, because every time we move a large clump of stuff, the atoms do not remain absolutely undisturbed, and so a certain amount of random motion goes into the atomic system. We cannot see it, but we can measure it with thermometers, etc.) There are many other forms of energy, and of course we cannot describe them in any more detail just now. There is electrical energy, which has to do with pushing and pulling by electric charges. There is radiant energy, the energy of light, which we know is a form of electrical energy because light can be represented as wigglings in the electromagnetic field. There is chemical energy, the energy which is released in chemical reactions. Actually, elastic energy is, to a certain extent, like chemical energy, because chemical energy is the energy of the attraction of the atoms, one for the other, and so is elastic energy. Our modern understanding is the following: chemical energy has two parts, kinetic energy of the electrons inside the atoms, so part of it is kinetic, and electrical energy of interaction of the electrons and the protons—the rest of it, therefore, is electrical. Next we come to nuclear energy, the energy which is involved with the arrangement of particles inside the nucleus, and we have formulas for that, but we do not have the fundamental laws. We know that it is not electrical, not gravitational, and not purely kinetic, but we do not know what it is. 
It seems to be an additional form of energy. Finally, associated with the relativity theory, there is a modification of the laws of kinetic energy, or whatever you wish to call it, so that kinetic energy is combined with another thing called mass energy. An object has energy from its sheer existence. If I have a positron and an electron, standing still doing nothing—never mind gravity, never mind anything—and they come together and disappear, radiant energy will be liberated, in a definite amount, and the amount can be calculated. All we need know is the mass of the object. It does not depend on what it is—we make two things disappear, and we get a certain amount of energy. The formula was first found by Einstein; it is $E=mc^2$. It is obvious from our discussion that the law of conservation of energy is enormously useful in making analyses, as we have illustrated in a few examples without knowing all the formulas. If we had all the formulas for all kinds of energy, we could analyze how many processes should work without having to go into the details. Therefore conservation laws are very interesting. The question naturally arises as to what other conservation laws there are in physics. There are two other conservation laws which are analogous to the conservation of energy. One is called the conservation of linear momentum. The other is called the conservation of angular momentum. We will find out more about these later. In the last analysis, we do not understand the conservation laws deeply. We do not understand the conservation of energy. We do not understand energy as a certain number of little blobs. You may have heard that photons come out in blobs and that the energy of a photon is Planck’s constant times the frequency. That is true, but since the frequency of light can be anything, there is no law that says that energy has to be a certain definite amount. Unlike Dennis’ blocks, there can be any amount of energy, at least as presently understood. 
So we do not understand this energy as counting something at the moment, but just as a mathematical quantity, which is an abstract and rather peculiar circumstance. In quantum mechanics it turns out that the conservation of energy is very closely related to another important property of the world: things do not depend on the absolute time. We can set up an experiment at a given moment and try it out, and then do the same experiment at a later moment, and it will behave in exactly the same way. Whether this is strictly true or not, we do not know. If we assume that it is true, and add the principles of quantum mechanics, then we can deduce the principle of the conservation of energy. It is a rather subtle and interesting thing, and it is not easy to explain. The other conservation laws are also linked together. The conservation of momentum is associated in quantum mechanics with the proposition that it makes no difference where you do the experiment; the results will always be the same. As independence in space has to do with the conservation of momentum, independence of time has to do with the conservation of energy, and finally, if we turn our apparatus, this too makes no difference, and so the invariance of the world to angular orientation is related to the conservation of angular momentum. Besides these, there are three other conservation laws that are exact so far as we can tell today, which are much simpler to understand because they are in the nature of counting blocks. The first of the three is the conservation of charge, and that merely means that you count how many positive, minus how many negative electrical charges you have, and the number is never changed. You may get rid of a positive with a negative, but you do not create any net excess of positives over negatives. Two other laws are analogous to this one—one is called the conservation of baryons. There are a number of strange particles, a neutron and a proton are examples, which are called baryons.
In any reaction whatever in nature, if we count how many baryons are coming into a process, the number of baryons3 which come out will be exactly the same. There is another law, the conservation of leptons. We can say that the group of particles called leptons are: electron, muon, and neutrino. There is an antielectron which is a positron, that is, a $-1$ lepton. Counting the total number of leptons in a reaction reveals that the number in and out never changes, at least so far as we know at present. These are the six conservation laws, three of them subtle, involving space and time, and three of them simple, in the sense of counting something. With regard to the conservation of energy, we should note that available energy is another matter—there is a lot of jiggling around in the atoms of the water of the sea, because the sea has a certain temperature, but it is impossible to get them herded into a definite motion without taking energy from somewhere else. That is, although we know for a fact that energy is conserved, the energy available for human utility is not conserved so easily. The laws which govern how much energy is available are called the laws of thermodynamics and involve a concept called entropy for irreversible thermodynamic processes. Finally, we remark on the question of where we can get our supplies of energy today. Our supplies of energy are from the sun, rain, coal, uranium, and hydrogen. The sun makes the rain, and the coal also, so that all these are from the sun. Although energy is conserved, nature does not seem to be interested in it; she liberates a lot of energy from the sun, but only one part in two billion falls on the earth. Nature has conservation of energy, but does not really care; she spends a lot of it in all directions. We have already obtained energy from uranium; we can also get energy from hydrogen, but at present only in an explosive and dangerous condition. 
If it can be controlled in thermonuclear reactions, it turns out that the energy that can be obtained from $10$ quarts of water per second is equal to all of the electrical power generated in the United States. With $150$ gallons of running water a minute, you have enough fuel to supply all the energy which is used in the United States today! Therefore it is up to the physicist to figure out how to liberate us from the need for having energy. It can be done.
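One number from this section is easy to check: the radiant energy liberated when an electron and a positron annihilate, via $E=mc^2$. The sketch below is not from the lecture; the constants are standard textbook values.

```python
m_electron = 9.109e-31   # kg, rest mass of the electron (and positron)
c = 2.998e8              # m/s, speed of light

# Both particles disappear, so twice the rest mass is converted
# into radiant energy:
energy = 2 * m_electron * c**2   # joules

# About 1.64e-13 J, i.e. roughly 1.02 MeV of radiation.
assert 1.6e-13 < energy < 1.7e-13
```

As the text says, the amount is definite and calculable; all we need to know is the mass that disappears.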
Time and Distance

5–1 Motion

In this chapter we shall consider some aspects of the concepts of time and distance. It has been emphasized earlier that physics, as do all the sciences, depends on observation. One might also say that the development of the physical sciences to their present form has depended to a large extent on the emphasis which has been placed on the making of quantitative observations. Only with quantitative observations can one arrive at quantitative relationships, which are the heart of physics. Many people would like to place the beginnings of physics with the work done 350 years ago by Galileo, and to call him the first physicist. Until that time, the study of motion had been a philosophical one based on arguments that could be thought up in one’s head. Most of the arguments had been presented by Aristotle and other Greek philosophers, and were taken as “proven.” Galileo was skeptical, and did an experiment on motion which was essentially this: He allowed a ball to roll down an inclined trough and observed the motion. He did not, however, just look; he measured how far the ball went in how long a time. The way to measure a distance was well known long before Galileo, but there were no accurate ways of measuring time, particularly short times. Although he later devised more satisfactory clocks (though not like the ones we know), Galileo’s first experiments on motion were done by using his pulse to count off equal intervals of time. Let us do the same. We may count off beats of a pulse as the ball rolls down the track: “one … two … three … four … five … six … seven … eight …” We ask a friend to make a small mark at the location of the ball at each count; we can then measure the distance the ball travelled from the point of release in one, or two, or three, etc., equal intervals of time.
Galileo expressed the result of his observations in this way: if the location of the ball is marked at $1$, $2$, $3$, $4$, … units of time from the instant of its release, those marks are distant from the starting point in proportion to the numbers $1$, $4$, $9$, $16$, … Today we would say the distance is proportional to the square of the time: \begin{equation*} D\propto t^2. \end{equation*} The study of motion, which is basic to all of physics, deals with the questions: where? and when?
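Galileo's table of marks can be checked in one line. A trivial sketch of the $D\propto t^2$ relation, using the numbers quoted above:

```python
times = [1, 2, 3, 4]
marks = [1, 4, 9, 16]   # distances from the release point, in Galileo's units

# D is proportional to t^2; with these units the constant is 1:
assert marks == [t * t for t in times]
```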
5–2 Time

Let us consider first what we mean by time. What is time? It would be nice if we could find a good definition of time. Webster defines “a time” as “a period,” and the latter as “a time,” which doesn’t seem to be very useful. Perhaps we should say: “Time is what happens when nothing else happens.” Which also doesn’t get us very far. Maybe it is just as well if we face the fact that time is one of the things we probably cannot define (in the dictionary sense), and just say that it is what we already know it to be: it is how long we wait! What really matters anyway is not how we define time, but how we measure it. One way of measuring time is to utilize something which happens over and over again in a regular fashion—something which is periodic. For example, a day. A day seems to happen over and over again. But when you begin to think about it, you might well ask: “Are days periodic; are they regular? Are all days the same length?” One certainly has the impression that days in summer are longer than days in winter. Of course, some of the days in winter seem to get awfully long if one is very bored. You have certainly heard someone say, “My, but this has been a long day!” It does seem, however, that days are about the same length on the average. Is there any way we can test whether the days are the same length—either from one day to the next, or at least on the average? One way is to make a comparison with some other periodic phenomenon. Let us see how such a comparison might be made with an hour glass. With an hour glass, we can “create” a periodic occurrence if we have someone standing by it day and night to turn it over whenever the last grain of sand runs out. We could then count the turnings of the glass from each morning to the next. We would find, this time, that the number of “hours” (i.e., turnings of the glass) was not the same each “day.” We should distrust the sun, or the glass, or both.
After some thought, it might occur to us to count the “hours” from noon to noon. (Noon is here defined not as 12:00 o’clock, but that instant when the sun is at its highest point.) We would find, this time, that the number of “hours” each day is the same. We now have some confidence that both the “hour” and the “day” have a regular periodicity, i.e., mark off successive equal intervals of time, although we have not proved that either one is “really” periodic. Someone might question whether there might not be some omnipotent being who would slow down the flow of sand every night and speed it up during the day. Our experiment does not, of course, give us an answer to this sort of question. All we can say is that we find that a regularity of one kind fits together with a regularity of another kind. We can just say that we base our definition of time on the repetition of some apparently periodic event.
5–3 Short times

We should now notice that in the process of checking on the reproducibility of the day, we have received an important by-product. We have found a way of measuring, more accurately, fractions of a day. We have found a way of counting time in smaller pieces. Can we carry the process further, and learn to measure even smaller intervals of time? Galileo decided that a given pendulum always swings back and forth in equal intervals of time so long as the size of the swing is kept small. A test comparing the number of swings of a pendulum in one “hour” shows that such is indeed the case. We can in this way mark fractions of an hour. If we use a mechanical device to count the swings—and to keep them going—we have the pendulum clock of our grandfathers. Let us agree that if our pendulum oscillates $3600$ times in one hour (and if there are $24$ such hours in a day), we shall call each period of the pendulum one “second.” We have then divided our original unit of time into approximately $10^5$ parts. We can apply the same principles to divide the second into smaller and smaller intervals. It is, you will realize, not practical to make mechanical pendulums which go arbitrarily fast, but we can now make electrical pendulums, called oscillators, which can provide a periodic occurrence with a very short period of swing. In these electronic oscillators it is an electrical current which swings to and fro, in a manner analogous to the swinging of the bob of the pendulum. We can make a series of such electronic oscillators, each with a period $10$ times shorter than the previous one. We may “calibrate” each oscillator against the next slower one by counting the number of swings it makes for one swing of the slower oscillator. When the period of oscillation of our clock is shorter than a fraction of a second, we cannot count the oscillations without the help of some device which extends our powers of observation.
One such device is the electron-beam oscilloscope, which acts as a sort of microscope for short times. This device plots on a fluorescent screen a graph of electrical current (or voltage) versus time. By connecting the oscilloscope to two of our oscillators in sequence, so that it plots a graph first of the current in one of our oscillators and then of the current in the other, we get two graphs like those shown in Fig. 5–2. We can readily determine the number of periods of the faster oscillator in one period of the slower oscillator. With modern electronic techniques, oscillators have been built with periods as short as about $10^{-12}$ second, and they have been calibrated (by comparison methods such as we have described) in terms of our standard unit of time, the second. With the invention and perfection of the “laser,” or light amplifier, in the past few years, it has become possible to make oscillators with even shorter periods than $10^{-12}$ second, but it has not yet been possible to calibrate them by the methods which have been described, although it will no doubt soon be possible. Times shorter than $10^{-12}$ second have been measured, but by a different technique. In effect, a different definition of “time” has been used. One way has been to observe the distance between two happenings on a moving object. If, for example, the headlights of a moving automobile are turned on and then off, we can figure out how long the lights were on if we know where they were turned on and off and how fast the car was moving. The time is the distance over which the lights were on divided by the speed. Within the past few years, just such a technique was used to measure the lifetime of the $\pi^0$-meson. 
By observing in a microscope the minute tracks left in a photographic emulsion in which $\pi^0$-mesons had been created, one saw that a $\pi^0$-meson (known to be travelling at a certain speed nearly that of light) went a distance of about $10^{-7}$ meter, on the average, before disintegrating. It lived for only about $10^{-16}$ sec. It should be emphasized that we have here used a somewhat different definition of “time” than before. So long as there are no inconsistencies in our understanding, however, we feel fairly confident that our definitions are sufficiently equivalent. By extending our techniques—and if necessary our definitions—still further we can infer the time duration of still faster physical events. We can speak of the period of a nuclear vibration. We can speak of the lifetime of the newly discovered strange resonances (particles) mentioned in Chapter 2. Their complete life occupies a time span of only $10^{-24}$ second, approximately the time it would take light (which moves at the fastest known speed) to cross the nucleus of hydrogen (the smallest known object). What about still smaller times? Does “time” exist on a still smaller scale? Does it make any sense to speak of smaller times if we cannot measure—or perhaps even think sensibly about—something which happens in a shorter time? Perhaps not. These are some of the open questions which you will be asking and perhaps answering in the next twenty or thirty years.
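The $\pi^0$ estimate above is simply distance divided by speed. A one-line sketch with the numbers from the text, taking the meson's speed as essentially the speed of light:

```python
c = 3.0e8             # m/s; the meson moves at nearly the speed of light
track_length = 1e-7   # meters travelled, on average, before disintegrating

lifetime = track_length / c   # about 3e-16 s, of order 10^-16 sec
assert 1e-16 < lifetime < 1e-15
```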
5–4 Long times

Let us now consider times longer than one day. Measurement of longer times is easy; we just count the days—so long as there is someone around to do the counting. First we find that there is another natural periodicity: the year, about $365$ days. We have also discovered that nature has sometimes provided a counter for the years, in the form of tree rings or river-bottom sediments. In some cases we can use these natural time markers to determine the time which has passed since some early event. When we cannot count the years for the measurement of long times, we must look for other ways to measure. One of the most successful is the use of radioactive material as a “clock.” In this case we do not have a periodic occurrence, as for the day or the pendulum, but a new kind of “regularity.” We find that the radioactivity of a particular sample of material decreases by the same fraction for successive equal increases in its age. If we plot a graph of the radioactivity observed as a function of time (say in days), we obtain a curve like that shown in Fig. 5–3. We observe that if the radioactivity decreases to one-half in $T$ days (called the “half-life”), then it decreases to one-quarter in another $T$ days, and so on. In an arbitrary time interval $t$ there are $t/T$ “half-lives,” and the fraction left after this time $t$ is $(\tfrac{1}{2})^{t/T}$. If we knew that a piece of material, say a piece of wood, had contained an amount $A$ of radioactive material when it was formed, and we found out by a direct measurement that it now contains the amount $B$, we could compute the age of the object, $t$, by solving the equation \begin{equation*} (\tfrac{1}{2})^{t/T}=B/A. \end{equation*} There are, fortunately, cases in which we can know the amount of radioactivity that was in an object when it was formed.
We know, for example, that the carbon dioxide in the air contains a certain small fraction of the radioactive carbon isotope C$^{14}$ (replenished continuously by the action of cosmic rays). If we measure the total carbon content of an object, we know that a certain fraction of that amount was originally the radioactive C$^{14}$; we know, therefore, the starting amount $A$ to use in the formula above. Carbon-14 has a half-life of $5000$ years. By careful measurements we can measure the amount left after $20$ half-lives or so and can therefore “date” organic objects which grew as long as $100{,}000$ years ago. We would like to know, and we think we do know, the life of still older things. Much of our knowledge is based on the measurements of other radioactive isotopes which have different half-lives. If we make measurements with an isotope with a longer half-life, then we are able to measure longer times. Uranium, for example, has an isotope whose half-life is about $10^9$ years, so that if some material was formed with uranium in it $10^9$ years ago, only half the uranium would remain today. When the uranium disintegrates, it changes into lead. Consider a piece of rock which was formed a long time ago in some chemical process. Lead, being of a chemical nature different from uranium, would appear in one part of the rock and uranium would appear in another part of the rock. The uranium and lead would be separate. If we look at that piece of rock today, where there should only be uranium we will now find a certain fraction of uranium and a certain fraction of lead. By comparing these fractions, we can tell what percent of the uranium disappeared and changed into lead. By this method, the age of certain rocks has been determined to be several billion years. 
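The dating equation above, $(\tfrac{1}{2})^{t/T}=B/A$, inverts to $t = T\log_2(A/B)$. A small Python sketch, using the carbon-14 half-life quoted in the text; the fractions fed in are made-up examples, not measured data:

```python
import math

def age(A, B, half_life):
    """Solve (1/2)**(t / half_life) == B / A for the age t."""
    return half_life * math.log2(A / B)

T_carbon = 5000.0   # years, the carbon-14 half-life as given in the text

assert abs(age(1.0, 0.5, T_carbon) - T_carbon) < 1e-9        # one half-life
assert abs(age(1.0, 0.25, T_carbon) - 2 * T_carbon) < 1e-9   # two half-lives

# After 20 half-lives (about 100,000 years) only ~1e-6 of the
# original C14 remains, which is why the method stops there:
assert abs(age(1.0, 0.5 ** 20, T_carbon) - 20 * T_carbon) < 1e-6
```

The same function, given a half-life of $10^9$ years, is the uranium-lead calculation described for the rocks.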
An extension of this method, not using particular rocks but looking at the uranium and lead in the oceans and using averages over the earth, has been used to determine (within the past few years) that the age of the earth itself is approximately $4.5$ billion years. It is encouraging that the age of the earth is found to be the same as the age of the meteorites which land on the earth, as determined by the uranium method. It appears that the earth was formed out of rocks floating in space, and that the meteorites are, quite likely, some of that material left over. At some time more than five billion years ago, the universe started. It is now believed that at least our part of the universe had its beginning about ten or twelve billion years ago. We do not know what happened before then. In fact, we may well ask again: Does the question make any sense? Does an earlier time have any meaning? |
|
1 | 5 | Time and Distance | 5 | Units and standards of time | We have implied that it is convenient if we start with some standard unit of time, say a day or a second, and refer all other times to some multiple or fraction of this unit. What shall we take as our basic standard of time? Shall we take the human pulse? If we compare pulses, we find that they seem to vary a lot. On comparing two clocks, one finds they do not vary so much. You might then say, well, let us take a clock. But whose clock? There is a story of a Swiss boy who wanted all of the clocks in his town to ring noon at the same time. So he went around trying to convince everyone of the value of this. Everyone thought it was a marvelous idea so long as all of the other clocks rang noon when his did! It is rather difficult to decide whose clock we should take as a standard. Fortunately, we all share one clock—the earth. For a long time the rotational period of the earth has been taken as the basic standard of time. As measurements have been made more and more precise, however, it has been found that the rotation of the earth is not exactly periodic, when measured in terms of the best clocks. These “best” clocks are those which we have reason to believe are accurate because they agree with each other. We now believe that, for various reasons, some days are longer than others, some days are shorter, and on the average the period of the earth becomes a little longer as the centuries pass. Until very recently we had found nothing much better than the earth’s period, so all clocks have been related to the length of the day, and the second has been defined as $1/86{,}400$ of an average day. Recently we have been gaining experience with some natural oscillators which we now believe would provide a more constant time reference than the earth, and which are also based on a natural phenomenon available to everyone. 
These are the so-called “atomic clocks.” Their basic internal period is that of an atomic vibration which is very insensitive to the temperature or any other external effects. These clocks keep time to an accuracy of one part in $10^9$ or better. Within the past two years an improved atomic clock which operates on the vibration of the hydrogen atom has been designed and built by Professor Norman Ramsey at Harvard University. He believes that this clock might be $100$ times more accurate still. Measurements now in progress will show whether this is true or not. We may expect that since it has been possible to build clocks much more accurate than astronomical time, there will soon be an agreement among scientists to define the unit of time in terms of one of the atomic clock standards. |
|
1 | 6 | Probability | 1 | Chance and likelihood | “Chance” is a word which is in common use in everyday living. The radio reports speaking of tomorrow’s weather may say: “There is a sixty percent chance of rain.” You might say: “There is a small chance that I shall live to be one hundred years old.” Scientists also use the word chance. A seismologist may be interested in the question: “What is the chance that there will be an earthquake of a certain size in Southern California next year?” A physicist might ask the question: “What is the chance that a particular geiger counter will register twenty counts in the next ten seconds?” A politician or statesman might be interested in the question: “What is the chance that there will be a nuclear war within the next ten years?” You may be interested in the chance that you will learn something from this chapter. By chance, we mean something like a guess. Why do we make guesses? We make guesses when we wish to make a judgment but have incomplete information or uncertain knowledge. We want to make a guess as to what things are, or what things are likely to happen. Often we wish to make a guess because we have to make a decision. For example: Shall I take my raincoat with me tomorrow? For what earth movement should I design a new building? Shall I build myself a fallout shelter? Shall I change my stand in international negotiations? Shall I go to class today? Sometimes we make guesses because we wish, with our limited knowledge, to say as much as we can about some situation. Really, any generalization is in the nature of a guess. Any physical theory is a kind of guesswork. There are good guesses and there are bad guesses. The theory of probability is a system for making better guesses. The language of probability allows us to speak quantitatively about some situation which may be highly variable, but which does have some consistent average behavior. Let us consider the flipping of a coin. 
If the toss—and the coin—are “honest,” we have no way of knowing what to expect for the outcome of any particular toss. Yet we would feel that in a large number of tosses there should be about equal numbers of heads and tails. We say: “The probability that a toss will land heads is $0.5$.” We speak of probability only for observations that we contemplate being made in the future. By the “probability” of a particular outcome of an observation we mean our estimate for the most likely fraction of a number of repeated observations that will yield that particular outcome. If we imagine repeating an observation—such as looking at a freshly tossed coin—$N$ times, and if we call $N_A$ our estimate of the most likely number of our observations that will give some specified result $A$, say the result “heads,” then by $P(A)$, the probability of observing $A$, we mean \begin{equation} \label{Eq:I:6:1} P(A)=N_A/N. \end{equation} Our definition requires several comments. First of all, we may speak of a probability of something happening only if the occurrence is a possible outcome of some repeatable observation. It is not clear that it would make any sense to ask: “What is the probability that there is a ghost in that house?” You may object that no situation is exactly repeatable. That is right. Every different observation must at least be at a different time or place. All we can say is that the “repeated” observations should, for our intended purposes, appear to be equivalent. We should assume, at least, that each observation was made from an equivalently prepared situation, and especially with the same degree of ignorance at the start. (If we sneak a look at an opponent’s hand in a card game, our estimate of our chances of winning are different than if we do not!) We should emphasize that $N$ and $N_A$ in Eq. (6.1) are not intended to represent numbers based on actual observations. $N_A$ is our best estimate of what would occur in $N$ imagined observations. 
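As a rough numerical companion to this definition (an illustration, not part of the text's argument), one can simulate honest tosses and watch the observed fraction of heads settle near the probability $0.5$:

```python
import random

def observed_fraction(n_tosses, seed=0):
    """Fraction of simulated 'honest' coin tosses that come up heads."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# For large N the observed fraction should wander near P(heads) = 0.5.
print(observed_fraction(100_000))
```

The simulation produces an *observed* fraction; the probability itself, as the text stresses, is our estimate of the most likely fraction in imagined observations.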
Probability depends, therefore, on our knowledge and on our ability to make estimates. In effect, on our common sense! Fortunately, there is a certain amount of agreement in the common sense of many things, so that different people will make the same estimate. Probabilities need not, however, be “absolute” numbers. Since they depend on our ignorance, they may become different if our knowledge changes. You may have noticed another rather “subjective” aspect of our definition of probability. We have referred to $N_A$ as “our estimate of the most likely number …” We do not mean that we expect to observe exactly $N_A$, but that we expect a number near $N_A$, and that the number $N_A$ is more likely than any other number in the vicinity. If we toss a coin, say, $30$ times, we should expect that the number of heads would not be very likely to be exactly $15$, but rather only some number near to $15$, say $12$, $13$, $14$, $15$, $16$, or $17$. However, if we must choose, we would decide that $15$ heads is more likely than any other number. We would write $P(\text{heads})=0.5$. Why did we choose $15$ as more likely than any other number? We must have argued with ourselves in the following manner: If the most likely number of heads is $N_H$ in a total number of tosses $N$, then the most likely number of tails $N_T$ is $(N-N_H)$. (We are assuming that every toss gives either heads or tails, and no “other” result!) But if the coin is “honest,” there is no preference for heads or tails. Until we have some reason to think the coin (or toss) is dishonest, we must give equal likelihoods for heads and tails. So we must set $N_T=N_H$. It follows that $N_T=$ $N_H=$ $N/2$, or $P(H)=$ $P(T)=$ $0.5$. We can generalize our reasoning to any situation in which there are $m$ different but “equivalent” (that is, equally likely) possible results of an observation. 
If an observation can yield $m$ different results, and we have reason to believe that any one of them is as likely as any other, then the probability of a particular outcome $A$ is $P(A)=1/m$. If there are seven different-colored balls in an opaque box and we pick one out “at random” (that is, without looking), the probability of getting a ball of a particular color is $\tfrac{1}{7}$. The probability that a “blind draw” from a shuffled deck of $52$ cards will show the ten of hearts is $\tfrac{1}{52}$. The probability of throwing a double-one with dice is $\tfrac{1}{36}$. In Chapter 5 we described the size of a nucleus in terms of its apparent area, or “cross section.” When we did so we were really talking about probabilities. When we shoot a high-energy particle at a thin slab of material, there is some chance that it will pass right through and some chance that it will hit a nucleus. (Since the nucleus is so small that we cannot see it, we cannot aim right at a nucleus. We must “shoot blind.”) If there are $n$ atoms in our slab and the nucleus of each atom has a cross-sectional area $\sigma$, then the total area “shadowed” by the nuclei is $n\sigma$. In a large number $N$ of random shots, we expect that the number of hits $N_C$ of some nucleus will be in the ratio to $N$ as the shadowed area is to the total area of the slab: \begin{equation} \label{Eq:I:6:2} N_C/N=n\sigma/A. \end{equation} We may say, therefore, that the probability that any one projectile particle will suffer a collision in passing through the slab is \begin{equation} \label{Eq:I:6:3} P_C=\frac{n}{A}\,\sigma, \end{equation} where $n/A$ is the number of atoms per unit area in our slab. |
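The equally-likely counts, and the cross-section estimate of Eq. (6.3), are simple enough to spell out. A sketch; the slab numbers in the test case are invented for illustration:

```python
from fractions import Fraction

# Equally likely outcomes: P(A) = 1/m
p_ball = Fraction(1, 7)                          # one ball of seven colors
p_card = Fraction(1, 52)                         # the ten of hearts
p_double_one = Fraction(1, 6) * Fraction(1, 6)   # both dice showing one

def collision_probability(n_atoms, sigma, area):
    """Eq. (6.3): P_C = (n/A) * sigma for a thin slab of area A."""
    return (n_atoms / area) * sigma

print(p_double_one)  # -> 1/36
```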
|
1 | 6 | Probability | 2 | Fluctuations | We would like now to use our ideas about probability to consider in some greater detail the question: “How many heads do I really expect to get if I toss a coin $N$ times?” Before answering the question, however, let us look at what does happen in such an “experiment.” Figure 6–1 shows the results obtained in the first three “runs” of such an experiment in which $N=30$. The sequences of “heads” and “tails” are shown just as they were obtained. The first game gave $11$ heads; the second also $11$; the third $16$. In three trials we did not once get $15$ heads. Should we begin to suspect the coin? Or were we wrong in thinking that the most likely number of “heads” in such a game is $15$? Ninety-seven more runs were made to obtain a total of $100$ experiments of $30$ tosses each. The results of the experiments are given in Table 6–1. Looking at the numbers in Table 6–1, we see that most of the results are “near” $15$, in that they are between $12$ and $18$. We can get a better feeling for the details of these results if we plot a graph of the distribution of the results. We count the number of games in which a score of $k$ was obtained, and plot this number for each $k$. Such a graph is shown in Fig. 6–2. A score of $15$ heads was obtained in $13$ games. A score of $14$ heads was also obtained $13$ times. Scores of $16$ and $17$ were each obtained more than $13$ times. Are we to conclude that there is some bias toward heads? Was our “best estimate” not good enough? Should we conclude now that the “most likely” score for a run of $30$ tosses is really $16$ heads? But wait! In all the games taken together, there were $3000$ tosses. And the total number of heads obtained was $1493$. The fraction of tosses that gave heads is $0.498$, very nearly, but slightly less than half. We should certainly not assume that the probability of throwing heads is greater than $0.5$! 
The fact that one particular set of observations gave $16$ heads most often, is a fluctuation. We still expect that the most likely number of heads is $15$. We may ask the question: “What is the probability that a game of $30$ tosses will yield $15$ heads—or $16$, or any other number?” We have said that in a game of one toss, the probability of obtaining one head is $0.5$, and the probability of obtaining no head is $0.5$. In a game of two tosses there are four possible outcomes: $HH$, $HT$, $TH$, $TT$. Since each of these sequences is equally likely, we conclude that (a) the probability of a score of two heads is $\tfrac{1}{4}$, (b) the probability of a score of one head is $\tfrac{2}{4}$, (c) the probability of a zero score is $\tfrac{1}{4}$. There are two ways of obtaining one head, but only one of obtaining either zero or two heads. Consider now a game of $3$ tosses. The third toss is equally likely to be heads or tails. There is only one way to obtain $3$ heads: we must have obtained $2$ heads on the first two tosses, and then heads on the last. There are, however, three ways of obtaining $2$ heads. We could throw tails after having thrown two heads (one way) or we could throw heads after throwing only one head in the first two tosses (two ways). So for scores of $3$-$H$, $2$-$H$, $1$-$H$, $0$-$H$ we have that the number of equally likely ways is $1$, $3$, $3$, $1$, with a total of $8$ different possible sequences. The probabilities are $\tfrac{1}{8}$, $\tfrac{3}{8}$, $\tfrac{3}{8}$, $\tfrac{1}{8}$. The argument we have been making can be summarized by a diagram like that in Fig. 6–3. It is clear how the diagram should be continued for games with a larger number of tosses. Figure 6–4 shows such a diagram for a game of $6$ tosses. The number of “ways” to any point on the diagram is just the number of different “paths” (sequences of heads and tails) which can be taken from the starting point. The vertical position gives us the total number of heads thrown. 
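The counting argument for small games can be checked by brute-force enumeration of all sequences; a sketch (the function name is illustrative):

```python
from collections import Counter
from itertools import product

def ways_per_score(n):
    """For each possible number of heads k, count the sequences giving k."""
    counts = Counter(seq.count('H') for seq in product('HT', repeat=n))
    return [counts[k] for k in range(n + 1)]

print(ways_per_score(2))  # -> [1, 2, 1]
print(ways_per_score(3))  # -> [1, 3, 3, 1]
```

These are exactly the rows of the diagram: $1$, $2$, $1$ for two tosses and $1$, $3$, $3$, $1$ for three.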
The set of numbers which appears in such a diagram is known as Pascal’s triangle. The numbers are also known as the binomial coefficients, because they also appear in the expansion of $(a+b)^n$. If we call $n$ the number of tosses and $k$ the number of heads thrown, then the numbers in the diagram are usually designated by the symbol $\tbinom{n}{k}$. We may remark in passing that the binomial coefficients can also be computed from \begin{equation} \label{Eq:I:6:4} \binom{n}{k}=\frac{n!}{k!(n-k)!}, \end{equation} where $n!$, called “$n$-factorial,” represents the product $(n)(n-1)(n-2)\dotsm(3)(2)(1)$. We are now ready to compute the probability $P(k,n)$ of throwing $k$ heads in $n$ tosses, using our definition Eq. (6.1). The total number of possible sequences is $2^n$ (since there are $2$ outcomes for each toss), and the number of ways of obtaining $k$ heads is $\tbinom{n}{k}$, all equally likely, so we have \begin{equation} \label{Eq:I:6:5} P(k,n)=\frac{\tbinom{n}{k}}{2^n}. \end{equation} Since $P(k,n)$ is the fraction of games which we expect to yield $k$ heads, then in $100$ games we should expect to find $k$ heads $100\cdot P(k,n)$ times. The dashed curve in Fig. 6–2 passes through the points computed from $100\cdot P(k,30)$. We see that we expect to obtain a score of $15$ heads in $14$ or $15$ games, whereas this score was observed in $13$ games. We expect a score of $16$ in $13$ or $14$ games, but we obtained that score in $15$ games. Such fluctuations are “part of the game.” The method we have just used can be applied to the most general situation in which there are only two possible outcomes of a single observation. Let us designate the two outcomes by $W$ (for “win”) and $L$ (for “lose”). In the general case, the probability of $W$ or $L$ in a single event need not be equal. Let $p$ be the probability of obtaining the result $W$. Then $q$, the probability of $L$, is necessarily $(1-p)$. 
In a set of $n$ trials, the probability $P(k,n)$ that $W$ will be obtained $k$ times is \begin{equation} \label{Eq:I:6:6} P(k,n)=\tbinom{n}{k}p^kq^{n-k}. \end{equation} This probability function is called the Bernoulli or, also, the binomial probability. |
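Eqs. (6.5) and (6.6) translate directly into code; a sketch using Python's built-in binomial coefficient:

```python
from math import comb

def bernoulli(k, n, p=0.5):
    """Eq. (6.6): P(k, n) = C(n, k) * p**k * q**(n - k), with q = 1 - p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Expected number of games, out of 100 runs of 30 tosses, scoring 15 heads:
print(round(100 * bernoulli(15, 30), 1))  # -> 14.4
```

For $p=\tfrac{1}{2}$ this reduces to Eq. (6.5), $\tbinom{n}{k}/2^n$, and the printed value matches the text's "$14$ or $15$ games."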
|
1 | 6 | Probability | 3 | The random walk | There is another interesting problem in which the idea of probability is required. It is the problem of the “random walk.” In its simplest version, we imagine a “game” in which a “player” starts at the point $x=0$ and at each “move” is required to take a step either forward (toward $+x$) or backward (toward $-x$). The choice is to be made randomly, determined, for example, by the toss of a coin. How shall we describe the resulting motion? In its general form the problem is related to the motion of atoms (or other particles) in a gas—called Brownian motion—and also to the combination of errors in measurements. You will see that the random-walk problem is closely related to the coin-tossing problem we have already discussed. First, let us look at a few examples of a random walk. We may characterize the walker’s progress by the net distance $D_N$ traveled in $N$ steps. We show in the graph of Fig. 6–5 three examples of the path of a random walker. (We have used for the random sequence of choices the results of the coin tosses shown in Fig. 6–1.) What can we say about such a motion? We might first ask: “How far does he get on the average?” We must expect that his average progress will be zero, since he is equally likely to go either forward or backward. But we have the feeling that as $N$ increases, he is more likely to have strayed farther from the starting point. We might, therefore, ask what is his average distance travelled in absolute value, that is, what is the average of $\abs{D}$. It is, however, more convenient to deal with another measure of “progress,” the square of the distance: $D^2$ is positive for either positive or negative motion, and is therefore a reasonable measure of such random wandering. We can show that the expected value of $D_N^2$ is just $N$, the number of steps taken. 
By “expected value” we mean the probable value (our best guess), which we can think of as the expected average behavior in many repeated sequences. We represent such an expected value by $\expval{D_N^2}$, and may refer to it also as the “mean square distance.” After one step, $D^2$ is always $+1$, so we have certainly $\expval{D_1^2}=1$. (All distances will be measured in terms of a unit of one step. We shall not continue to write the units of distance.) The expected value of $D_N^2$ for $N>1$ can be obtained from $D_{N-1}$. If, after $(N-1)$ steps, we have $D_{N-1}$, then after $N$ steps we have $D_N=D_{N-1}+1$ or $D_N=D_{N-1}-1$. For the squares, \begin{equation} \label{Eq:I:6:7} D_N^2= \begin{cases} D_{N-1}^2+2D_{N-1}+1,\\[2ex] \kern{3.7em}\textit{or}\\[2ex] D_{N-1}^2-2D_{N-1}+1. \end{cases} \end{equation} In a number of independent sequences, we expect to obtain each value one-half of the time, so our average expectation is just the average of the two possible values. The expected value of $D_N^2$ is then $D_{N-1}^2+1$. In general, we should expect for $D_{N-1}^2$ its “expected value” $\expval{D_{N-1}^2}$ (by definition!). So \begin{equation} \label{Eq:I:6:8} \expval{D_N^2}=\expval{D_{N-1}^2}+1. \end{equation} We have already shown that $\expval{D_1^2}=1$; it follows then that \begin{equation} \label{Eq:I:6:9} \expval{D_N^2}=N, \end{equation} a particularly simple result! If we wish a number like a distance, rather than a distance squared, to represent the “progress made away from the origin” in a random walk, we can use the “root-mean-square distance” $D_{\text{rms}}$: \begin{equation} \label{Eq:I:6:10} D_{\text{rms}}=\sqrt{\expval{D^2}}=\sqrt{N}. \end{equation} We have pointed out that the random walk is closely similar in its mathematics to the coin-tossing game we considered at the beginning of the chapter. 
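The result $\expval{D_N^2}=N$ of Eq. (6.9) is easy to check by simulation; a minimal sketch, with the number of walks chosen simply to make the sample average stable:

```python
import random

def mean_square_distance(n_steps, n_walks, seed=0):
    """Average of D_N**2 over many independent unit-step random walks."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        d = sum(rng.choice((1, -1)) for _ in range(n_steps))  # net distance
        total += d * d
    return total / n_walks

# Eq. (6.9) predicts <D_N**2> = N = 30; the sample average lands nearby.
print(mean_square_distance(30, 20_000))
```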
If we imagine the direction of each step to be in correspondence with the appearance of heads or tails in a coin toss, then $D$ is just $N_H-N_T$, the difference in the number of heads and tails. Since $N_H+N_T=N$, the total number of steps (and tosses), we have $D=2N_H-N$. We have derived earlier an expression for the expected distribution of $N_H$ (also called $k$) and obtained the result of Eq. (6.5). Since $N$ is just a constant, we have the corresponding distribution for $D$. (Since for every head more than $N/2$ there is a tail “missing,” we have the factor of $2$ between $N_H$ and $D$.) The graph of Fig. 6–2 represents the distribution of distances we might get in $30$ random steps (where $k=15$ is to be read $D=0$; $k=16$, $D=2$; etc.). The variation of $N_H$ from its expected value $N/2$ is \begin{equation} \label{Eq:I:6:11} N_H-\frac{N}{2}=\frac{D}{2}. \end{equation} The rms deviation is \begin{equation} \label{Eq:I:6:12} \biggl(N_H-\frac{N}{2}\biggr)_{\text{rms}}=\tfrac{1}{2}\sqrt{N}. \end{equation} According to our result for $D_{\text{rms}}$, we expect that the “typical” distance in $30$ steps ought to be $\sqrt{30} \approx 5.5$, or a typical $k$ should be about $5.5/2 = 2.75$ units from $15$. We see that the “width” of the curve in Fig. 6–2, measured from the center, is just about $3$ units, in agreement with this result. We are now in a position to consider a question we have avoided until now. How shall we tell whether a coin is “honest” or “loaded”? We can give now at least a partial answer. For an honest coin, we expect the fraction of the times heads appears to be $0.5$, that is, \begin{equation} \label{Eq:I:6:13} \frac{\expval{N_H}}{N}=0.5. \end{equation} We also expect an actual $N_H$ to deviate from $N/2$ by about $\sqrt{N}/2$, or the fraction to deviate by \begin{equation*} \frac{1}{N}\,\frac{\sqrt{N}}{2}=\frac{1}{2\sqrt{N}}. \end{equation*} The larger $N$ is, the closer we expect the fraction $N_H/N$ to be to one-half. In Fig. 
6–6 we have plotted the fraction $N_H/N$ for the coin tosses reported earlier in this chapter. We see the tendency for the fraction of heads to approach $0.5$ for large $N$. Unfortunately, for any given run or combination of runs there is no guarantee that the observed deviation will be even near the expected deviation. There is always the finite chance that a large fluctuation—a long string of heads or tails—will give an arbitrarily large deviation. All we can say is that if the deviation is near the expected $1/2\sqrt{N}$ (say within a factor of $2$ or $3$), we have no reason to suspect the honesty of the coin. If it is much larger, we may be suspicious, but cannot prove, that the coin is loaded (or that the tosser is clever!). We have also not considered how we should treat the case of a “coin” or some similar “chancy” object (say a stone that always lands in either of two positions) that we have good reason to believe should have a different probability for heads and tails. We have defined $P(H)=\expval{N_H}/N$. How shall we know what to expect for $N_H$? In some cases, the best we can do is to observe the number of heads obtained in large numbers of tosses. For want of anything better, we must set $\expval{N_H}=N_H(\text{observed})$. (How could we expect anything else?) We must understand, however, that in such a case a different experiment, or a different observer, might conclude that $P(H)$ was different. We would expect, however, that the various answers should agree within the deviation $1/2\sqrt{N}$ [if $P(H)$ is near one-half]. An experimental physicist usually says that an “experimentally determined” probability has an “error,” and writes \begin{equation} \label{Eq:I:6:14} P(H)=\frac{N_H}{N}\pm\frac{1}{2\sqrt{N}}. \end{equation} There is an implication in such an expression that there is a “true” or “correct” probability which could be computed if we knew enough, and that the observation may be in “error” due to a fluctuation. 
There is, however, no way to make such thinking logically consistent. It is probably better to realize that the probability concept is in a sense subjective, that it is always based on uncertain knowledge, and that its quantitative evaluation is subject to change as we obtain more information. |
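Eq. (6.14) can be applied to the chapter's own data, $1493$ heads in $3000$ tosses; a sketch:

```python
from math import sqrt

def experimental_probability(n_heads, n_tosses):
    """Eq. (6.14): P(H) = N_H/N with 'error' 1/(2*sqrt(N))."""
    return n_heads / n_tosses, 1 / (2 * sqrt(n_tosses))

p, err = experimental_probability(1493, 3000)
print(f"P(H) = {p:.3f} +/- {err:.3f}")  # -> P(H) = 0.498 +/- 0.009
```

The observed deviation from one-half is well within the expected fluctuation, so there is no reason here to suspect the coin.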
|
1 | 6 | Probability | 4 | A probability distribution | Let us return now to the random walk and consider a modification of it. Suppose that in addition to a random choice of the direction ($+$ or $-$) of each step, the length of each step also varied in some unpredictable way, the only condition being that on the average the step length was one unit. This case is more representative of something like the thermal motion of a molecule in a gas. If we call the length of a step $S$, then $S$ may have any value at all, but most often will be “near” $1$. To be specific, we shall let $\expval{S^2}=1$ or, equivalently, $S_{\text{rms}}=1$. Our derivation for $\expval{D^2}$ would proceed as before except that Eq. (6.8) would be changed now to read \begin{equation} \label{Eq:I:6:15} \expval{D_N^2}=\expval{D_{N-1}^2}+\expval{S^2}=\expval{D_{N-1}^2}+1. \end{equation} We have, as before, that \begin{equation} \label{Eq:I:6:16} \expval{D_N^2}=N. \end{equation} What would we expect now for the distribution of distances $D$? What is, for example, the probability that $D=0$ after $30$ steps? The answer is zero! The probability is zero that $D$ will be any particular value, since there is no chance at all that the sum of the backward steps (of varying lengths) would exactly equal the sum of forward steps. We cannot plot a graph like that of Fig. 6–2. We can, however, obtain a representation similar to that of Fig. 6–2, if we ask, not what is the probability of obtaining $D$ exactly equal to $0$, $1$, or $2$, but instead what is the probability of obtaining $D$ near $0$, $1$, or $2$. Let us define $P(x,\Delta x)$ as the probability that $D$ will lie in the interval $\Delta x$ located at $x$ (say from $x$ to $x+\Delta x$). We expect that for small $\Delta x$ the chance of $D$ landing in the interval is proportional to $\Delta x$, the width of the interval. So we can write \begin{equation} \label{Eq:I:6:17} P(x,\Delta x)=p(x)\,\Delta x. 
\end{equation} The function $p(x)$ is called the probability density. The form of $p(x)$ will depend on $N$, the number of steps taken, and also on the distribution of individual step lengths. We cannot demonstrate the proofs here, but for large $N$, $p(x)$ is the same for all reasonable distributions of individual step lengths, and depends only on $N$. We plot $p(x)$ for three values of $N$ in Fig. 6–7. You will notice that the “half-widths” (typical spread from $x=0$) of these curves are $\sqrt{N}$, as we have shown they should be. You may notice also that the value of $p(x)$ near zero is inversely proportional to $\sqrt{N}$. This comes about because the curves are all of a similar shape and the areas under the curves must all be equal. Since $p(x)\,\Delta x$ is the probability of finding $D$ in $\Delta x$ when $\Delta x$ is small, we can determine the chance of finding $D$ somewhere inside an arbitrary interval from $x_1$ to $x_2$ by cutting the interval into a number of small increments $\Delta x$ and evaluating the sum of the terms $p(x)\,\Delta x$ for each increment. The probability that $D$ lands somewhere between $x_1$ and $x_2$, which we may write $P(x_1 < D < x_2)$, is equal to the shaded area in Fig. 6–8. The smaller we take the increments $\Delta x$, the more correct is our result. We can write, therefore, \begin{equation} \label{Eq:I:6:18} P(x_1 < D < x_2)=\sum p(x)\,\Delta x=\int_{x_1}^{x_2}p(x)\,dx. \end{equation}
The area under the whole curve is the probability that $D$ lands somewhere (that is, has some value between $x=-\infty$ and $x=+\infty$). That probability is surely $1$. We must have that \begin{equation} \label{Eq:I:6:19} \int_{-\infty}^{+\infty}p(x)\,dx=1. \end{equation} Since the curves in Fig. 6–7 get wider in proportion to $\sqrt{N}$, their heights must be proportional to $1/\sqrt{N}$ to maintain the total area equal to $1$. The probability density function we have been describing is one that is encountered most commonly. It is known as the normal or Gaussian probability density. It has the mathematical form \begin{equation} \label{Eq:I:6:20} p(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-x^2/2\sigma^2}, \end{equation} where $\sigma$ is called the standard deviation and is given, in our case, by $\sigma=\sqrt{N}$ or, if the rms step size is different from $1$, by $\sigma=\sqrt{N}S_{\text{rms}}$. We remarked earlier that the motion of a molecule, or of any particle, in a gas is like a random walk. Suppose we open a bottle of an organic compound and let some of its vapor escape into the air. If there are air currents, so that the air is circulating, the currents will also carry the vapor with them. But even in perfectly still air, the vapor will gradually spread out—will diffuse—until it has penetrated throughout the room. We might detect it by its color or odor. The individual molecules of the organic vapor spread out in still air because of the molecular motions caused by collisions with other molecules. If we know the average “step” size, and the number of steps taken per second, we can find the probability that one, or several, molecules will be found at some distance from their starting point after any particular passage of time. As time passes, more steps are taken and the gas spreads out as in the successive curves of Fig. 6–7. In a later chapter, we shall find out how the step sizes and step frequencies are related to the temperature and pressure of a gas. 
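The Gaussian density of Eq. (6.20) and the interval probability of Eq. (6.18) can be combined numerically; a sketch, approximating the integral by a midpoint Riemann sum (the function names and step count are illustrative):

```python
from math import exp, pi, sqrt

def gaussian_density(x, sigma):
    """Eq. (6.20): p(x) = exp(-x**2 / (2 sigma**2)) / (sigma * sqrt(2 pi))."""
    return exp(-x * x / (2 * sigma * sigma)) / (sigma * sqrt(2 * pi))

def probability_between(x1, x2, sigma, steps=10_000):
    """Eq. (6.18): the integral of p(x) dx, here as a midpoint Riemann sum."""
    dx = (x2 - x1) / steps
    return sum(gaussian_density(x1 + (i + 0.5) * dx, sigma) * dx
               for i in range(steps))

sigma = sqrt(30)   # a 30-step walk with unit rms step size
# Chance that D lands within one standard deviation of the origin:
print(round(probability_between(-sigma, sigma, sigma), 3))  # -> 0.683
```

The familiar "$68\%$ within one standard deviation" rule drops out, and integrating over all $x$ recovers the normalization of Eq. (6.19).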
Earlier, we said that the pressure of a gas is due to the molecules bouncing against the walls of the container. When we come later to make a more quantitative description, we will wish to know how fast the molecules are going when they bounce, since the impact they make will depend on that speed. We cannot, however, speak of the speed of the molecules. It is necessary to use a probability description. A molecule may have any speed, but some speeds are more likely than others. We describe what is going on by saying that the probability that any particular molecule will have a speed between $v$ and $v+\Delta v$ is $p(v)\,\Delta v$, where $p(v)$, a probability density, is a given function of the speed $v$. We shall see later how Maxwell, using common sense and the ideas of probability, was able to find a mathematical expression for $p(v)$. The form of the function $p(v)$ is shown in Fig. 6–9. Velocities may have any value, but are most likely to be near the most probable value $v_p$. We often think of the curve of Fig. 6–9 in a somewhat different way. If we consider the molecules in a typical container (with a volume of, say, one liter), then there are a very large number $N$ of molecules present ($N\approx10^{22}$). Since $p(v)\,\Delta v$ is the probability that one molecule will have its velocity in $\Delta v$, by our definition of probability we mean that the expected number $\expval{\Delta N}$ to be found with a velocity in the interval $\Delta v$ is given by \begin{equation} \label{Eq:I:6:21} \expval{\Delta N}=N\,p(v)\,\Delta v. \end{equation} We call $N\,p(v)$ the “distribution in velocity.” The area under the curve between two velocities $v_1$ and $v_2$, for example the shaded area in Fig. 6–9, represents [for the curve $N\,p(v)$] the expected number of molecules with velocities between $v_1$ and $v_2$. 
Since with a gas we are usually dealing with large numbers of molecules, we expect the deviations from the expected numbers to be small (like $1/\sqrt{N}$), so we often neglect to say the “expected” number, and say instead: “The number of molecules with velocities between $v_1$ and $v_2$ is the area under the curve.” We should remember, however, that such statements are always about probable numbers. |
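The text only quotes the shape of $p(v)$; the explicit formula is Maxwell's result from the later chapter. As a sketch (the functional form written in terms of the most probable speed $v_p$ is borrowed ahead of its derivation, and the numbers $v_p$, $v_1$, $v_2$ are illustrative), we can check in Python that $p(v)$ has unit area and then apply Eq. (6.21) to count molecules in a speed interval:

```python
import math

v_p = 400.0  # most probable speed, m/s (illustrative value)

# Maxwell's speed density, written in terms of v_p:
#   p(v) = (4/sqrt(pi)) * (v^2 / v_p^3) * exp(-v^2 / v_p^2)
def p(v):
    u = v / v_p
    return (4 / math.sqrt(math.pi)) * u * u * math.exp(-u * u) / v_p

dv = 0.5
# Normalization check: the total probability must be 1.
total = sum(p(k * dv) for k in range(int(10 * v_p / dv))) * dv

# Expected number with speeds between v1 and v2 (Eq. 6.21): N times the
# area under p(v) over that interval -- the shaded area of Fig. 6-9.
N = 10 ** 22
v1, v2 = 300.0, 500.0
frac = sum(p(v1 + k * dv) for k in range(int((v2 - v1) / dv))) * dv
expected = N * frac   # frac comes out roughly 0.4 for this interval
```

The deviation from `expected` in an actual sample would be of relative order $1/\sqrt{N\,\text{frac}}$, which for $10^{22}$ molecules is utterly negligible, as the paragraph above says.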
|
1 | 6 | Probability | 5 | The uncertainty principle | The ideas of probability are certainly useful in describing the behavior of the $10^{22}$ or so molecules in a sample of a gas, for it is clearly impractical even to attempt to write down the position or velocity of each molecule. When probability was first applied to such problems, it was considered to be a convenience—a way of dealing with very complex situations. We now believe that the ideas of probability are essential to a description of atomic happenings. According to quantum mechanics, the mathematical theory of particles, there is always some uncertainty in the specification of positions and velocities. We can, at best, say that there is a certain probability that any particle will have a position near some coordinate $x$. We can give a probability density $p_1(x)$, such that $p_1(x)\,\Delta x$ is the probability that the particle will be found between $x$ and $x+\Delta x$. If the particle is reasonably well localized, say near $x_0$, the function $p_1(x)$ might be given by the graph of Fig. 6–10(a). Similarly, we must specify the velocity of the particle by means of a probability density $p_2(v)$, with $p_2(v)\,\Delta v$ the probability that the velocity will be found between $v$ and $v+\Delta v$. It is one of the fundamental results of quantum mechanics that the two functions $p_1(x)$ and $p_2(v)$ cannot be chosen independently and, in particular, cannot both be made arbitrarily narrow. If we call the typical “width” of the $p_1(x)$ curve $[\Delta x]$, and that of the $p_2(v)$ curve $[\Delta v]$ (as shown in the figure), nature demands that the product of the two widths be at least as big as the number $\hbar/2m$, where $m$ is the mass of the particle. We may write this basic relationship as \begin{equation} \label{Eq:I:6:22} [\Delta x]\cdot[\Delta v]\geq\hbar/2m. \end{equation} This equation is a statement of the Heisenberg uncertainty principle that we mentioned earlier. 
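As a rough numerical illustration (not in the text; the confinement size is an assumed round figure), Eq. (6.22) fixes the least possible velocity spread of an electron confined to a region of atomic dimensions:

```python
# Minimum velocity spread for a particle confined to a region [dx],
# from Eq. (6.22):  [dv] >= hbar / (2 m [dx]).
hbar = 1.054571817e-34    # J*s
m_e = 9.1093837015e-31    # kg, electron mass

dx = 1.0e-10              # about one atomic diameter, m (assumed)
dv_min = hbar / (2 * m_e * dx)   # roughly 6 x 10^5 m/s
```

Pinning the electron down to one atom thus forces a velocity uncertainty of hundreds of kilometers per second, which is why, as the next paragraphs explain, one cannot speak of a sharp electron orbit.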
Since the right-hand side of Eq. (6.22) is a constant, this equation says that if we try to “pin down” a particle by forcing it to be at a particular place, it ends up by having a high speed. Or if we try to force it to go very slowly, or at a precise velocity, it “spreads out” so that we do not know very well just where it is. Particles behave in a funny way! The uncertainty principle describes an inherent fuzziness that must exist in any attempt to describe nature. Our most precise description of nature must be in terms of probabilities. There are some people who do not like this way of describing nature. They feel somehow that if they could only tell what is really going on with a particle, they could know its speed and position simultaneously. In the early days of the development of quantum mechanics, Einstein was quite worried about this problem. He used to shake his head and say, “But, surely God does not throw dice in determining how electrons should go!” He worried about that problem for a long time and he probably never really reconciled himself to the fact that this is the best description of nature that one can give. There are still one or two physicists who are working on the problem who have an intuitive conviction that it is possible somehow to describe the world in a different way and that all of this uncertainty about the way things are can be removed. No one has yet been successful. The necessary uncertainty in our specification of the position of a particle becomes most important when we wish to describe the structure of atoms. In the hydrogen atom, which has a nucleus of one proton with one electron outside of the nucleus, the uncertainty in the position of the electron is as large as the atom itself! We cannot, therefore, properly speak of the electron moving in some “orbit” around the proton. 
The most we can say is that there is a certain chance $p(r)\,\Delta V$, of observing the electron in an element of volume $\Delta V$ at the distance $r$ from the proton. The probability density $p(r)$ is given by quantum mechanics. For an undisturbed hydrogen atom $p(r)=Ae^{-2r/a}$. The number $a$ is the “typical” radius, where the function is decreasing rapidly. Since there is a small probability of finding the electron at distances from the nucleus much greater than $a$, we may think of $a$ as “the radius of the atom,” about $10^{-10}$ meter. We can form an image of the hydrogen atom by imagining a “cloud” whose density is proportional to the probability density for observing the electron. A sample of such a cloud is shown in Fig. 6–11. Thus our best “picture” of a hydrogen atom is a nucleus surrounded by an “electron cloud” (although we really mean a “probability cloud”). The electron is there somewhere, but nature permits us to know only the chance of finding it at any particular place. In its efforts to learn as much as possible about nature, modern physics has found that certain things can never be “known” with certainty. Much of our knowledge must always remain uncertain. The most we can know is in terms of probabilities. |
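The constant $A$ in $p(r)=Ae^{-2r/a}$ is fixed by requiring the total probability over all space, $\int_0^\infty p(r)\,4\pi r^2\,dr$, to be $1$, which gives $A=1/(\pi a^3)$. A short Python sketch (working in units where $a=1$; the grid spacing is an arbitrary choice) verifies this and computes the average electron distance, which comes out $\tfrac{3}{2}a$:

```python
import math

a = 1.0  # measure distances in units of the "radius" a

# Normalization A = 1/(pi a^3), so that the integral of
# p(r) * 4*pi*r^2 dr over all space equals 1.
A = 1 / (math.pi * a ** 3)

dr = 1e-4
rs = [(k + 0.5) * dr for k in range(int(20 * a / dr))]  # midpoint grid
total = sum(A * math.exp(-2 * r / a) * 4 * math.pi * r * r for r in rs) * dr

# Mean distance of the electron from the proton: comes out 1.5 a.
r_mean = sum(r * A * math.exp(-2 * r / a) * 4 * math.pi * r * r
             for r in rs) * dr
```

The factor $4\pi r^2$ is the volume of a thin shell at radius $r$, which is why the electron, though most likely per unit volume to be at the nucleus, is on average found about $1.5a$ away.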
|
1 | 7 | The Theory of Gravitation | 1 | Planetary motions | In this chapter we shall discuss one of the most far-reaching generalizations of the human mind. While we are admiring the human mind, we should take some time off to stand in awe of a nature that could follow with such completeness and generality such an elegantly simple principle as the law of gravitation. What is this law of gravitation? It is that every object in the universe attracts every other object with a force which for any two bodies is proportional to the mass of each and varies inversely as the square of the distance between them. This statement can be expressed mathematically by the equation \begin{equation*} F=G\,\frac{mm'}{r^2}. \end{equation*} If to this we add the fact that an object responds to a force by accelerating in the direction of the force by an amount that is inversely proportional to the mass of the object, we shall have said everything required, for a sufficiently talented mathematician could then deduce all the consequences of these two principles. However, since you are not assumed to be sufficiently talented yet, we shall discuss the consequences in more detail, and not just leave you with only these two bare principles. We shall briefly relate the story of the discovery of the law of gravitation and discuss some of its consequences, its effects on history, the mysteries that such a law entails, and some refinements of the law made by Einstein; we shall also discuss the relationships of the law to the other laws of physics. All this cannot be done in one chapter, but these subjects will be treated in due time in subsequent chapters. The story begins with the ancients observing the motions of planets among the stars, and finally deducing that they went around the sun, a fact that was rediscovered later by Copernicus. Exactly how the planets went around the sun, with exactly what motion, took a little more work to discover. 
Beginning in the sixteenth century there were great debates as to whether they really went around the sun or not. Tycho Brahe had an idea that was different from anything proposed by the ancients: his idea was that these debates about the nature of the motions of the planets would best be resolved if the actual positions of the planets in the sky were measured sufficiently accurately. If measurement showed exactly how the planets moved, then perhaps it would be possible to establish one or another viewpoint. This was a tremendous idea—that to find something out, it is better to perform some careful experiments than to carry on deep philosophical arguments. Pursuing this idea, Tycho Brahe studied the positions of the planets for many years in his observatory on the island of Hven, near Copenhagen. He made voluminous tables, which were then studied by the mathematician Kepler, after Tycho’s death. Kepler discovered from the data some very beautiful and remarkable, but simple, laws regarding planetary motion. |
|
1 | 7 | The Theory of Gravitation | 2 | Kepler’s laws | First of all, Kepler found that each planet goes around the sun in a curve called an ellipse, with the sun at a focus of the ellipse. An ellipse is not just an oval, but is a very specific and precise curve that can be obtained by using two tacks, one at each focus, a loop of string, and a pencil; more mathematically, it is the locus of all points the sum of whose distances from two fixed points (the foci) is a constant. Or, if you will, it is a foreshortened circle (Fig. 7–1). Kepler’s second observation was that the planets do not go around the sun at a uniform speed, but move faster when they are nearer the sun and more slowly when they are farther from the sun, in precisely this way: Suppose a planet is observed at any two successive times, let us say a week apart, and that the radius vector1 is drawn to the planet for each observed position. The orbital arc traversed by the planet during the week, and the two radius vectors, bound a certain plane area, the shaded area shown in Fig. 7–2. If two similar observations are made a week apart, at a part of the orbit farther from the sun (where the planet moves more slowly), the similarly bounded area is exactly the same as in the first case. So, in accordance with the second law, the orbital speed of each planet is such that the radius “sweeps out” equal areas in equal times. Finally, a third law was discovered by Kepler much later; this law is of a different category from the other two, because it deals not with only a single planet, but relates one planet to another. This law says that when the orbital period and orbit size of any two planets are compared, the periods are proportional to the $3/2$ power of the orbit size. In this statement the period is the time interval it takes a planet to go completely around its orbit, and the size is measured by the length of the greatest diameter of the elliptical orbit, technically known as the major axis. 
More simply, if the planets went in circles, as they nearly do, the time required to go around the circle would be proportional to the $3/2$ power of the diameter (or radius). Thus Kepler’s three laws are:
Each planet moves around the sun in an ellipse, with the sun at one focus.
The radius vector from the sun to the planet sweeps out equal areas in equal intervals of time.
The squares of the periods of any two planets are proportional to the cubes of the semimajor axes of their respective orbits: $T\propto a^{3/2}$.
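The third law is easy to check against modern orbital data. In the Python sketch below (the numbers are standard modern values in astronomical units and years, not figures from the text), $T^2/a^3$ comes out essentially the same for every planet Kepler knew:

```python
# Semimajor axis a (astronomical units) and period T (years).
# Kepler's third law says T^2 / a^3 is the same for all planets;
# in these units the common value is exactly 1.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

ratios = {name: T * T / a ** 3 for name, (a, T) in planets.items()}
```

All six ratios agree with $1$ to better than one percent, although the orbit sizes span a factor of about twenty-five.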
|
|
1 | 7 | The Theory of Gravitation | 3 | Development of dynamics | While Kepler was discovering these laws, Galileo was studying the laws of motion. The problem was, what makes the planets go around? (In those days, one of the theories proposed was that the planets went around because behind them were invisible angels, beating their wings and driving the planets forward. You will see that this theory is now modified! It turns out that in order to keep the planets going around, the invisible angels must fly in a different direction and they have no wings. Otherwise, it is a somewhat similar theory!) Galileo discovered a very remarkable fact about motion, which was essential for understanding these laws. That is the principle of inertia—if something is moving, with nothing touching it and completely undisturbed, it will go on forever, coasting at a uniform speed in a straight line. (Why does it keep on coasting? We do not know, but that is the way it is.) Newton modified this idea, saying that the only way to change the motion of a body is to use force. If the body speeds up, a force has been applied in the direction of motion. On the other hand, if its motion is changed to a new direction, a force has been applied sideways. Newton thus added the idea that a force is needed to change the speed or the direction of motion of a body. For example, if a stone is attached to a string and is whirling around in a circle, it takes a force to keep it in the circle. We have to pull on the string. In fact, the law is that the acceleration produced by the force is inversely proportional to the mass, or the force is proportional to the mass times the acceleration. The more massive a thing is, the stronger the force required to produce a given acceleration. (The mass can be measured by putting other stones on the end of the same string and making them go around the same circle at the same speed. 
In this way it is found that more or less force is required, the more massive object requiring more force.) The brilliant idea resulting from these considerations is that no tangential force is needed to keep a planet in its orbit (the angels do not have to fly tangentially) because the planet would coast in that direction anyway. If there were nothing at all to disturb it, the planet would go off in a straight line. But the actual motion deviates from the line on which the body would have gone if there were no force, the deviation being essentially at right angles to the motion, not in the direction of the motion. In other words, because of the principle of inertia, the force needed to control the motion of a planet around the sun is not a force around the sun but toward the sun. (If there is a force toward the sun, the sun might be the angel, of course!) |
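This conclusion can be illustrated numerically: integrate the motion of a body acted on only by a force directed toward the sun, and watch the radius sweep out area. The Python sketch below (a toy model in units where $GM=1$; the starting position, velocity, step size, and integrator are all arbitrary choices) applies no tangential force whatever, yet the body orbits, and the sweep rate $\tfrac{1}{2}\lvert\vec{r}\times\vec{v}\rvert$ stays constant, in accordance with Kepler's second law:

```python
# Planet under a purely sun-directed inverse-square force (units GM = 1),
# integrated with the velocity-Verlet (leapfrog) scheme.
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

x, y = 1.0, 0.0        # start one unit from the sun
vx, vy = 0.0, 1.2      # sideways velocity -> a bound elliptical orbit

dt = 1e-3
ax, ay = accel(x, y)
sweep_rates = []
for step in range(30000):
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    if step % 10000 == 0:
        # Area swept per unit time = |r x v| / 2; for these starting
        # values it is 0.6, and it never changes.
        sweep_rates.append(abs(x * vy - y * vx) / 2)
```

Because the force has no component around the sun, the quantity $x v_y - y v_x$ is conserved exactly (the integrator preserves it too), which is just the equal-areas law in disguise.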
|
1 | 7 | The Theory of Gravitation | 4 | Newton’s law of gravitation | From his better understanding of the theory of motion, Newton appreciated that the sun could be the seat or organization of forces that govern the motion of the planets. Newton proved to himself (and perhaps we shall be able to prove it soon) that the very fact that equal areas are swept out in equal times is a precise sign post of the proposition that all deviations are precisely radial—that the law of areas is a direct consequence of the idea that all of the forces are directed exactly toward the sun. Next, by analyzing Kepler’s third law it is possible to show that the farther away the planet, the weaker the forces. If two planets at different distances from the sun are compared, the analysis shows that the forces are inversely proportional to the squares of the respective distances. With the combination of the two laws, Newton concluded that there must be a force, inversely as the square of the distance, directed in a line between the two objects. Being a man of considerable feeling for generalities, Newton supposed, of course, that this relationship applied more generally than just to the sun holding the planets. It was already known, for example, that the planet Jupiter had moons going around it as the moon of the earth goes around the earth, and Newton felt certain that each planet held its moons with a force. He already knew of the force holding us on the earth, so he proposed that this was a universal force—that everything pulls everything else. The next problem was whether the pull of the earth on its people was the “same” as its pull on the moon, i.e., inversely as the square of the distance. If an object on the surface of the earth falls $16$ feet in the first second after it is released from rest, how far does the moon fall in the same time? We might say that the moon does not fall at all. 
But if there were no force on the moon, it would go off in a straight line, whereas it goes in a circle instead, so it really falls in from where it would have been if there were no force at all. We can calculate from the radius of the moon’s orbit (which is about $240{,}000$ miles) and how long it takes to go around the earth (approximately $29$ days), how far the moon moves in its orbit in $1$ second, and can then calculate how far it falls in one second. This distance turns out to be roughly $1/20$ of an inch in a second. That fits very well with the inverse square law, because the earth’s radius is $4000$ miles, and if something which is $4000$ miles from the center of the earth falls $16$ feet in a second, something $240{,}000$ miles, or $60$ times as far away, should fall only $1/3600$ of $16$ feet, which also is roughly $1/20$ of an inch. Wishing to put this theory of gravitation to a test by similar calculations, Newton made his calculations very carefully and found a discrepancy so large that he regarded the theory as contradicted by facts, and did not publish his results. Six years later a new measurement of the size of the earth showed that the astronomers had been using an incorrect distance to the moon. When Newton heard of this, he made the calculation again, with the corrected figures, and obtained beautiful agreement. This idea that the moon “falls” is somewhat confusing, because, as you see, it does not come any closer. The idea is sufficiently interesting to merit further explanation: the moon falls in the sense that it falls away from the straight line that it would pursue if there were no forces. Let us take an example on the surface of the earth. An object released near the earth’s surface will fall $16$ feet in the first second. An object shot out horizontally will also fall $16$ feet; even though it is moving horizontally, it still falls the same $16$ feet in the same time. Figure 7–3 shows an apparatus which demonstrates this. 
On the horizontal track is a ball which is going to be driven forward a little distance away. At the same height is a ball which is going to fall vertically, and there is an electrical switch arranged so that at the moment the first ball leaves the track, the second ball is released. That they come to the same depth at the same time is witnessed by the fact that they collide in midair. An object like a bullet, shot horizontally, might go a long way in one second—perhaps $2000$ feet—but it will still fall $16$ feet if it is aimed horizontally. What happens if we shoot a bullet faster and faster? Do not forget that the earth’s surface is curved. If we shoot it fast enough, then when it falls $16$ feet it may be at just the same height above the ground as it was before. How can that be? It still falls, but the earth curves away, so it falls “around” the earth. The question is, how far does it have to go in one second so that the earth is $16$ feet below the horizon? In Fig. 7–4 we see the earth with its $4000$-mile radius, and the tangential, straight line path that the bullet would take if there were no force. Now, if we use one of those wonderful theorems in geometry, which says that our tangent is the mean proportional between the two parts of the diameter cut by an equal chord, we see that the horizontal distance travelled is the mean proportional between the $16$ feet fallen and the $8000$-mile diameter of the earth. The square root of $(16/5280)\times8000$ comes out very close to $5$ miles. Thus we see that if the bullet moves at $5$ miles a second, it then will continue to fall toward the earth at the same rate of $16$ feet each second, but will never get any closer because the earth keeps curving away from it. Thus it was that Mr. Gagarin maintained himself in space while going $25{,}000$ miles around the earth at approximately $5$ miles per second. (He took a little longer because he was a little higher.) 
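Both back-of-envelope numbers in this argument, the moon's fall of about $1/20$ inch per second and the $5$-mile-per-second orbital speed, can be reproduced in a few lines of Python (using the same round figures as the text: $240{,}000$ miles, $29$ days, $4000$-mile radius, $16$ feet per second of fall):

```python
import math

# How far the moon "falls" in one second: the deviation of a circular
# orbit from its tangent line is s = (v*t)^2 / (2R) for small t.
R_moon = 240000 * 5280             # orbit radius in feet
T = 29 * 24 * 3600                 # period in seconds (about 29 days)
v = 2 * math.pi * R_moon / T       # orbital speed, ft/s
fall_inches = (v ** 2 / (2 * R_moon)) * 12   # fall in one second, inches

# Inverse-square prediction: 16 ft at 4000 mi scales to 16/3600 ft at
# 240000 mi (60 times farther away).
predicted_inches = 16 / 3600 * 12

# Speed needed to orbit at the earth's surface: the horizontal distance
# traveled in one second is the mean proportional between the 16 feet
# fallen (converted to miles) and the 8000-mile diameter.
orbit_speed_miles = math.sqrt((16 / 5280) * 8000)   # close to 5 mi/s
```

Both results land within a few percent of the text's rounded values, which is all these one-significant-figure inputs entitle us to.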
Any great discovery of a new law is useful only if we can take more out than we put in. Now, Newton used the second and third of Kepler’s laws to deduce his law of gravitation. What did he predict? First, his analysis of the moon’s motion was a prediction because it connected the falling of objects on the earth’s surface with that of the moon. Second, the question is, is the orbit an ellipse? We shall see in a later chapter how it is possible to calculate the motion exactly, and indeed one can prove that it should be an ellipse, so no extra fact is needed to explain Kepler’s first law. Thus Newton made his first powerful prediction. The law of gravitation explains many phenomena not previously understood. For example, the pull of the moon on the earth causes the tides, hitherto mysterious. The moon pulls the water up under it and makes the tides—people had thought of that before, but they were not as clever as Newton, and so they thought there ought to be only one tide during the day. The reasoning was that the moon pulls the water up under it, making a high tide and a low tide, and since the earth spins underneath, that makes the tide at one station go up and down every $24$ hours. Actually the tide goes up and down in $12$ hours. Another school of thought claimed that the high tide should be on the other side of the earth because, so they argued, the moon pulls the earth away from the water! Both of these theories are wrong. It actually works like this: the pull of the moon for the earth and for the water is “balanced” at the center. But the water which is closer to the moon is pulled more than the average and the water which is farther away from it is pulled less than the average. Furthermore, the water can flow while the more rigid earth cannot. The true picture is a combination of these two things. What do we mean by “balanced”? What balances? If the moon pulls the whole earth toward it, why doesn’t the earth fall right “up” to the moon? 
Because the earth does the same trick as the moon, it goes in a circle around a point which is inside the earth but not at its center. The moon does not just go around the earth, the earth and the moon both go around a central position, each falling toward this common position, as shown in Fig. 7–5. This motion around the common center is what balances the fall of each. So the earth is not going in a straight line either; it travels in a circle. The water on the far side is “unbalanced” because the moon’s attraction there is weaker than it is at the center of the earth, where it just balances the “centrifugal force.” The result of this imbalance is that the water rises up, away from the center of the earth. On the near side, the attraction from the moon is stronger, and the imbalance is in the opposite direction in space, but again away from the center of the earth. The net result is that we get two tidal bulges. |
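The two-bulge picture can be made quantitative by comparing the moon's pull at the near side, the center, and the far side of the earth. The Python sketch below (modern SI values, not figures from the text) shows that the near-side excess and the far-side deficit relative to the balanced center are nearly equal, which is why the two bulges are nearly the same size:

```python
# The moon's gravitational acceleration at three points of the earth.
GM_moon = 4.90e12     # m^3/s^2, G times the moon's mass
d = 3.84e8            # earth-moon distance, m
R = 6.37e6            # earth's radius, m

a_near   = GM_moon / (d - R) ** 2
a_center = GM_moon / d ** 2
a_far    = GM_moon / (d + R) ** 2

# Relative to the "balanced" center, the near side is pulled extra
# toward the moon, and the far side is left behind by almost the same
# amount -- the imbalance points away from the earth's center on both
# sides, giving two tidal bulges.
excess_near = a_near - a_center
deficit_far = a_center - a_far
```

The imbalance is only about three percent of the moon's pull itself, but it is this small difference, not the pull as a whole, that raises the tides.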
|
1 | 7 | The Theory of Gravitation | 5 | Universal gravitation | What else can we understand when we understand gravity? Everyone knows the earth is round. Why is the earth round? That is easy; it is due to gravitation. The earth can be understood to be round merely because everything attracts everything else and so it has attracted itself together as far as it can! If we go even further, the earth is not exactly a sphere because it is rotating, and this brings in centrifugal effects which tend to oppose gravity near the equator. It turns out that the earth should be elliptical, and we even get the right shape for the ellipse. We can thus deduce that the sun, the moon, and the earth should be (nearly) spheres, just from the law of gravitation. What else can you do with the law of gravitation? If we look at the moons of Jupiter we can understand everything about the way they move around that planet. Incidentally, there was once a certain difficulty with the moons of Jupiter that is worth remarking on. These satellites were studied very carefully by Rømer, who noticed that the moons sometimes seemed to be ahead of schedule, and sometimes behind. (One can find their schedules by waiting a very long time and finding out how long it takes on the average for the moons to go around.) Now they were ahead when Jupiter was particularly close to the earth and they were behind when Jupiter was farther from the earth. This would have been a very difficult thing to explain according to the law of gravitation—it would have been, in fact, the death of this wonderful theory if there were no other explanation. If a law does not work even in one place where it ought to, it is just wrong. But the reason for this discrepancy was very simple and beautiful: it takes a little while to see the moons of Jupiter because of the time it takes light to travel from Jupiter to the earth. 
When Jupiter is closer to the earth the time is a little less, and when it is farther from the earth, the time is more. This is why moons appear to be, on the average, a little ahead or a little behind, depending on whether they are closer to or farther from the earth. This phenomenon showed that light does not travel instantaneously, and furnished the first estimate of the speed of light. This was done in 1676. If all of the planets pull on each other, the force which controls, let us say, Jupiter in going around the sun is not just the force from the sun; there is also a pull from, say, Saturn. This force is not really strong, since the sun is much more massive than Saturn, but there is some pull, so the orbit of Jupiter should not be a perfect ellipse, and it is not; it is slightly off, and “wobbles” around the correct elliptical orbit. Such a motion is a little more complicated. Attempts were made to analyze the motions of Jupiter, Saturn, and Uranus on the basis of the law of gravitation. The effects of each of these planets on each other were calculated to see whether or not the tiny deviations and irregularities in these motions could be completely understood from this one law. Lo and behold, for Jupiter and Saturn, all was well, but Uranus was “weird.” It behaved in a very peculiar manner. It was not travelling in an exact ellipse, but that was understandable, because of the attractions of Jupiter and Saturn. But even if allowance were made for these attractions, Uranus still was not going right, so the laws of gravitation were in danger of being overturned, a possibility that could not be ruled out. Two men, Adams and Le Verrier, in England and France, independently, arrived at another possibility: perhaps there is another planet, dark and invisible, which men had not seen. This planet, $N$, could pull on Uranus. They calculated where such a planet would have to be in order to cause the observed perturbations. 
They sent messages to the respective observatories, saying, “Gentlemen, point your telescope to such and such a place, and you will see a new planet.” It often depends on with whom you are working as to whether they pay any attention to you or not. They did pay attention to Le Verrier; they looked, and there planet $N$ was! The other observatory then also looked very quickly in the next few days and saw it too. This discovery shows that Newton’s laws are absolutely right in the solar system; but do they extend beyond the relatively small distances of the nearest planets? The first test lies in the question, do stars attract each other as well as planets? We have definite evidence that they do in the double stars. Figure 7–6 shows a double star—two stars very close together (there is also a third star in the picture so that we will know that the photograph was not turned). The stars are also shown as they appeared several years later. We see that, relative to the “fixed” star, the axis of the pair has rotated, i.e., the two stars are going around each other. Do they rotate according to Newton’s laws? Careful measurements of the relative positions of one such double star system are shown in Fig. 7–7. There we see a beautiful ellipse, the measures starting in 1862 and going all the way around to 1904 (by now it must have gone around once more). Everything coincides with Newton’s laws, except that the star Sirius A is not at the focus. Why should that be? Because the plane of the ellipse is not in the “plane of the sky.” We are not looking at right angles to the orbit plane, and when an ellipse is viewed at a tilt, it remains an ellipse but the focus is no longer at the same place. Thus we can analyze double stars, moving about each other, according to the requirements of the gravitational law. That the law of gravitation is true at even bigger distances is indicated in Fig. 7–8. If one cannot see gravitation acting here, he has no soul. 
This figure shows one of the most beautiful things in the sky—a globular star cluster. All of the dots are stars. Although they look as if they are packed solid toward the center, that is due to the fallibility of our instruments. Actually, the distances between even the centermost stars are very great and they very rarely collide. There are more stars in the interior than farther out, and as we move outward there are fewer and fewer. It is obvious that there is an attraction among these stars. It is clear that gravitation exists at these enormous dimensions, perhaps $100{,}000$ times the size of the solar system. Let us now go further, and look at an entire galaxy, shown in Fig. 7–9. The shape of this galaxy indicates an obvious tendency for its matter to agglomerate. Of course we cannot prove that the law here is precisely inverse square, only that there is still an attraction, at this enormous dimension, that holds the whole thing together. One may say, “Well, that is all very clever but why is it not just a ball?” Because it is spinning and has angular momentum which it cannot give up as it contracts; it must contract mostly in a plane. (Incidentally, if you are looking for a good problem, the exact details of how the arms are formed and what determines the shapes of these galaxies has not been worked out.) It is, however, clear that the shape of the galaxy is due to gravitation even though the complexities of its structure have not yet allowed us to analyze it completely. In a galaxy we have a scale of perhaps $50{,}000$ to $100{,}000$ light years. The earth’s distance from the sun is $8\tfrac{1}{3}$ light minutes, so you can see how large these dimensions are. Gravity appears to exist at even bigger dimensions, as indicated by Fig. 7–10, which shows many “little” things clustered together. This is a cluster of galaxies, just like a star cluster. Thus galaxies attract each other at such distances that they too are agglomerated into clusters. 
Perhaps gravitation exists even over distances of tens of millions of light years; so far as we now know, gravity seems to go out forever inversely as the square of the distance. Not only can we understand the nebulae, but from the law of gravitation we can even get some ideas about the origin of the stars. If we have a big cloud of dust and gas, as indicated in Fig. 7–11, the gravitational attractions of the pieces of dust for one another might make them form little lumps. Barely visible in the figure are “little” black spots which may be the beginning of the accumulations of dust and gases which, due to their gravitation, begin to form stars. Whether we have ever seen a star form or not is still debatable. Figure 7–12 shows the one piece of evidence which suggests that we have. At the left is a picture of a region of gas with some stars in it taken in 1947, and at the right is another picture, taken only $7$ years later, which shows two new bright spots. Has gas accumulated, has gravity acted hard enough and collected it into a ball big enough that the stellar nuclear reaction starts in the interior and turns it into a star? Perhaps, and perhaps not. It is unreasonable that in only seven years we should be so lucky as to see a star change itself into visible form; it is much less probable that we should see two! |
|
1 | 7 | The Theory of Gravitation | 6 | Cavendish’s experiment | Gravitation, therefore, extends over enormous distances. But if there is a force between any pair of objects, we ought to be able to measure the force between our own objects. Instead of having to watch the stars go around each other, why can we not take a ball of lead and a marble and watch the marble go toward the ball of lead? The difficulty of this experiment when done in such a simple manner is the very weakness or delicacy of the force. It must be done with extreme care, which means covering the apparatus to keep the air out, making sure it is not electrically charged, and so on; then the force can be measured. It was first measured by Cavendish with an apparatus which is schematically indicated in Fig. 7–13. This first demonstrated the direct force between two large, fixed balls of lead and two smaller balls of lead on the ends of an arm supported by a very fine fiber, called a torsion fiber. By measuring how much the fiber gets twisted, one can measure the strength of the force, verify that it is inversely proportional to the square of the distance, and determine how strong it is. Thus, one may accurately determine the coefficient $G$ in the formula \begin{equation*} F=G\,\frac{mm'}{r^2}. \end{equation*} All the masses and distances are known. You say, “We knew it already for the earth.” Yes, but we did not know the mass of the earth. By knowing $G$ from this experiment and by knowing how strongly the earth attracts, we can indirectly learn how great is the mass of the earth! This experiment has been called “weighing the earth” by some people, and it can be used to determine the coefficient $G$ of the gravity law. This is the only way in which the mass of the earth can be determined. $G$ turns out to be \begin{equation*} 6.670\times10^{-11}\text{ newton}\cdot\text{m}^2/\text{kg}^2. 
\end{equation*} It is hard to exaggerate the importance of the effect on the history of science produced by this great success of the theory of gravitation. Compare the confusion, the lack of confidence, the incomplete knowledge that prevailed in the earlier ages, when there were endless debates and paradoxes, with the clarity and simplicity of this law—this fact that all the moons and planets and stars have such a simple rule to govern them, and further that man could understand it and deduce how the planets should move! This is the reason for the success of the sciences in following years, for it gave hope that the other phenomena of the world might also have such beautifully simple laws. |
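To make the “weighing the earth” step concrete, here is a short numerical sketch (an added illustration in Python, not part of the lecture). Only $G$ is quoted from the text; the surface gravity $g$ and the earth’s radius $R$ below are assumed standard values.

```python
# Once G is known from the Cavendish experiment, the measured surface
# acceleration g = G*M/R**2 can be solved for the mass M of the earth.
G = 6.670e-11   # newton·m²/kg², the value quoted in the text
g = 9.8         # m/s², surface gravity (assumed standard value)
R = 6.37e6      # m, radius of the earth (assumed standard value)

M = g * R**2 / G          # solve g = G*M/R² for M
print(f"mass of the earth ≈ {M:.2e} kg")   # about 6 × 10^24 kg
```

The point is the indirectness: nothing in the apparatus touches the earth’s mass, yet $M$ follows at once from $G$ and the strength of the earth’s pull.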
|
1 | 7 | The Theory of Gravitation | 7 | What is gravity? | But is this such a simple law? What about the machinery of it? All we have done is to describe how the earth moves around the sun, but we have not said what makes it go. Newton made no hypotheses about this; he was satisfied to find what it did without getting into the machinery of it. No one has since given any machinery. It is characteristic of the physical laws that they have this abstract character. The law of conservation of energy is a theorem concerning quantities that have to be calculated and added together, with no mention of the machinery, and likewise the great laws of mechanics are quantitative mathematical laws for which no machinery is available. Why can we use mathematics to describe nature without a mechanism behind it? No one knows. We have to keep going because we find out more that way. Many mechanisms for gravitation have been suggested. It is interesting to consider one of these, which many people have thought of from time to time. At first, one is quite excited and happy when he “discovers” it, but he soon finds that it is not correct. It was first discovered about 1750. Suppose there were many particles moving in space at a very high speed in all directions and being only slightly absorbed in going through matter. When they are absorbed, they give an impulse to the earth. However, since there are as many going one way as another, the impulses all balance. But when the sun is nearby, the particles coming toward the earth through the sun are partially absorbed, so fewer of them are coming from the sun than are coming from the other side. Therefore, the earth feels a net impulse toward the sun and it does not take one long to see that it is inversely as the square of the distance—because of the variation of the solid angle that the sun subtends as we vary the distance. What is wrong with that machinery? It involves some new consequences which are not true. 
This particular idea has the following trouble: the earth, in moving around the sun, would impinge on more particles which are coming from its forward side than from its hind side (when you run in the rain, the rain in your face is stronger than that on the back of your head!). Therefore there would be more impulse given the earth from the front, and the earth would feel a resistance to motion and would be slowing up in its orbit. One can calculate how long it would take for the earth to stop as a result of this resistance, and it would not take long enough for the earth to still be in its orbit, so this mechanism does not work. No machinery has ever been invented that “explains” gravity without also predicting some other phenomenon that does not exist. Next we shall discuss the possible relation of gravitation to other forces. There is no explanation of gravitation in terms of other forces at the present time. It is not an aspect of electricity or anything like that, so we have no explanation. However, gravitation and other forces are very similar, and it is interesting to note analogies. For example, the force of electricity between two charged objects looks just like the law of gravitation: the force of electricity is a constant, with a minus sign, times the product of the charges, and varies inversely as the square of the distance. It is in the opposite direction—likes repel. But is it still not very remarkable that the two laws involve the same function of distance? Perhaps gravitation and electricity are much more closely related than we think. Many attempts have been made to unify them; the so-called unified field theory is only a very elegant attempt to combine electricity and gravitation; but, in comparing gravitation and electricity, the most interesting thing is the relative strengths of the forces. Any theory that contains them both must also deduce how strong the gravity is. 
If we take, in some natural units, the repulsion of two electrons (nature’s universal charge) due to electricity, and the attraction of two electrons due to their masses, we can measure the ratio of electrical repulsion to the gravitational attraction. The ratio is independent of the distance and is a fundamental constant of nature. The ratio is shown in Fig. 7–14. The gravitational attraction relative to the electrical repulsion between two electrons is $1$ divided by $4.17\times10^{42}$! The question is, where does such a large number come from? It is not accidental, like the ratio of the volume of the earth to the volume of a flea. We have considered two natural aspects of the same thing, an electron. This fantastic number is a natural constant, so it involves something deep in nature. Where could such a tremendous number come from? Some say that we shall one day find the “universal equation,” and in it, one of the roots will be this number. It is very difficult to find an equation for which such a fantastic number is a natural root. Other possibilities have been thought of; one is to relate it to the age of the universe. Clearly, we have to find another large number somewhere. But do we mean the age of the universe in years? No, because years are not “natural”; they were devised by men. As an example of something natural, let us consider the time it takes light to go across a proton, $10^{-24}$ second. If we compare this time with the age of the universe, $2\times10^{10}$ years, the answer is $10^{-42}$. It has about the same number of zeros going off it, so it has been proposed that the gravitational constant is related to the age of the universe. If that were the case, the gravitational constant would change with time, because as the universe got older the ratio of the age of the universe to the time which it takes for light to go across a proton would be gradually increasing. Is it possible that the gravitational constant is changing with time? 
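The arithmetic behind these two enormous numbers can be checked directly. The following sketch is an addition, not part of the lecture; it uses standard measured values for the electron’s charge and mass and for Coulomb’s constant, all of which are assumptions here rather than figures from the text.

```python
# Ratio of electric repulsion to gravitational attraction for two electrons.
# The r² in each force law cancels, so the ratio is distance-independent.
G   = 6.670e-11     # N·m²/kg², as quoted in the text
k   = 8.988e9       # N·m²/C², Coulomb's constant (assumed standard value)
e   = 1.602e-19     # C, electron charge (assumed standard value)
m_e = 9.109e-31     # kg, electron mass (assumed standard value)

ratio = (k * e**2) / (G * m_e**2)
print(f"F_electric / F_gravity ≈ {ratio:.2e}")   # about 4.17e42, as in the text

# The other large number mentioned: the age of the universe divided by the
# time for light to cross a proton, using the figures given in the text.
age_sec = 2e10 * 3.156e7          # 2×10^10 years in seconds
print(f"age / proton-crossing time ≈ {age_sec / 1e-24:.0e}")   # roughly 10^42
```

Both ratios come out with “about the same number of zeros,” which is the whole content of the proposed coincidence.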
Of course the changes would be so small that it is quite difficult to be sure. One test which we can think of is to determine what would have been the effect of the change during the past $10^9$ years, which is approximately the age from the earliest life on the earth to now, and one-tenth of the age of the universe. In this time, the gravity constant would have increased by about $10$ percent. It turns out that if we consider the structure of the sun—the balance between the weight of its material and the rate at which radiant energy is generated inside it—we can deduce that if the gravity were $10$ percent stronger, the sun would be much more than $10$ percent brighter—by the sixth power of the gravity constant! If we calculate what happens to the orbit of the earth when the gravity is changing, we find that the earth was then closer in. Altogether, the earth would be about $100$ degrees centigrade hotter, and all of the water would not have been in the sea, but vapor in the air, so life would not have started in the sea. So we do not now believe that the gravity constant is changing with the age of the universe. But such arguments as the one we have just given are not very convincing, and the subject is not completely closed. It is a fact that the force of gravitation is proportional to the mass, the quantity which is fundamentally a measure of inertia—of how hard it is to hold something which is going around in a circle. Therefore two objects, one heavy and one light, going around a larger object in the same circle at the same speed because of gravity, will stay together because to go in a circle requires a force which is stronger for a bigger mass. That is, the gravity is stronger for a given mass in just the right proportion so that the two objects will go around together. If one object were inside the other it would stay inside; it is a perfect balance. 
Therefore, Gagarin or Titov would find things “weightless” inside a space ship; if they happened to let go of a piece of chalk, for example, it would go around the earth in exactly the same way as the whole space ship, and so it would appear to remain suspended before them in space. It is very interesting that this force is exactly proportional to the mass with great precision, because if it were not exactly proportional there would be some effect by which inertia and weight would differ. The absence of such an effect has been checked with great accuracy by an experiment done first by Eötvös in 1909 and more recently by Dicke. For all substances tried, the masses and weights are exactly proportional within $1$ part in $1{,}000{,}000{,}000$, or less. This is a remarkable experiment. |
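The cancellation that produces weightlessness can be shown in two lines of arithmetic; a minimal sketch (added, not part of the lecture), in which the mass of the earth and the orbit radius are assumed illustrative values:

```python
# The gravitational acceleration a = F/m = G*M/r**2 does not depend on the
# body's own mass m, which is why the chalk and the space ship share one orbit.
G = 6.670e-11    # N·m²/kg², as quoted in the text
M = 5.97e24      # kg, mass of the earth (assumed value)
r = 6.6e6        # m, radius of a low orbit (assumed value)

accels = []
for m in (0.01, 5000.0):          # a piece of chalk vs. a whole space ship
    F = G * M * m / r**2          # the force is proportional to m...
    accels.append(F / m)          # ...so the acceleration is the same
print(accels)                     # two equal accelerations
```

If the proportionality of force to mass were not exact, the two accelerations would differ, and that is precisely what the Eötvös and Dicke experiments rule out to one part in a billion.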
|
1 | 8 | Motion | 1 | Description of motion | In order to find the laws governing the various changes that take place in bodies as time goes on, we must be able to describe the changes and have some way to record them. The simplest change to observe in a body is the apparent change in its position with time, which we call motion. Let us consider some solid object with a permanent mark, which we shall call a point, that we can observe. We shall discuss the motion of the little marker, which might be the radiator cap of an automobile or the center of a falling ball, and shall try to describe the fact that it moves and how it moves. These examples may sound trivial, but many subtleties enter into the description of change. Some changes are more difficult to describe than the motion of a point on a solid object, for example the speed of drift of a cloud that is drifting very slowly, but rapidly forming or evaporating, or the change of a woman’s mind. We do not know a simple way to analyze a change of mind, but since the cloud can be represented or described by many molecules, perhaps we can describe the motion of the cloud in principle by describing the motion of all its individual molecules. Likewise, perhaps even the changes in the mind may have a parallel in changes of the atoms inside the brain, but we have no such knowledge yet. At any rate, that is why we begin with the motion of points; perhaps we should think of them as atoms, but it is probably better to be more rough in the beginning and simply to think of some kind of small objects—small, that is, compared with the distance moved. For instance, in describing the motion of a car that is going a hundred miles, we do not have to distinguish between the front and the back of the car. To be sure, there are slight differences, but for rough purposes we say “the car,” and likewise it does not matter that our points are not absolute points; for our present purposes it is not necessary to be extremely precise. 
Also, while we take a first look at this subject we are going to forget about the three dimensions of the world. We shall just concentrate on moving in one direction, as in a car on one road. We shall return to three dimensions after we see how to describe motion in one dimension. Now, you may say, “This is all some kind of trivia,” and indeed it is. How can we describe such a one-dimensional motion—let us say, of a car? Nothing could be simpler. Among many possible ways, one would be the following. To determine the position of the car at different times, we measure its distance from the starting point and record all the observations. In Table 8–1, $s$ represents the distance of the car, in feet, from the starting point, and $t$ represents the time in minutes. The first line in the table represents zero distance and zero time—the car has not started yet. After one minute it has started and has gone $1200$ feet. Then in two minutes, it goes farther—notice that it picked up more distance in the second minute—it has accelerated; but something happened between $3$ and $4$ and even more so at $5$—it stopped at a light perhaps? Then it speeds up again and goes $13{,}000$ feet by the end of $6$ minutes, $18{,}000$ feet at the end of $7$ minutes, and $23{,}500$ feet in $8$ minutes; at $9$ minutes it has advanced to only $24{,}000$ feet, because in the last minute it was stopped by a cop. That is one way to describe the motion. Another way is by means of a graph. If we plot the time horizontally and the distance vertically, we obtain a curve something like that shown in Fig. 8–1. As the time increases, the distance increases, at first very slowly and then more rapidly, and very slowly again for a little while at $4$ minutes; then it increases again for a few minutes and finally, at $9$ minutes, appears to have stopped increasing. These observations can be made from the graph, without a table. 
Obviously, for a complete description one would have to know where the car is at the half-minute marks, too, but we suppose that the graph means something, that the car has some position at all the intermediate times. The motion of a car is complicated. For another example we take something that moves in a simpler manner, following more simple laws: a falling ball. Table 8–2 gives the time in seconds and the distance in feet for a falling body. At zero seconds the ball starts out at zero feet, and at the end of $1$ second it has fallen $16$ feet. At the end of $2$ seconds, it has fallen $64$ feet, at the end of $3$ seconds, $144$ feet, and so on; if the tabulated numbers are plotted, we get the nice parabolic curve shown in Fig. 8–2. The formula for this curve can be written as \begin{equation} \label{Eq:I:8:1} s=16t^2. \end{equation} This formula enables us to calculate the distances at any time. You might say there ought to be a formula for the first graph too. Actually, one may write such a formula abstractly, as \begin{equation} \label{Eq:I:8:2} s=f(t), \end{equation} meaning that $s$ is some quantity depending on $t$ or, in mathematical phraseology, $s$ is a function of $t$. Since we do not know what the function is, there is no way we can write it in definite algebraic form. We have now seen two examples of motion, adequately described with very simple ideas, no subtleties. However, there are subtleties—several of them. In the first place, what do we mean by time and space? It turns out that these deep philosophical questions have to be analyzed very carefully in physics, and this is not so easy to do. The theory of relativity shows that our ideas of space and time are not as simple as one might think at first sight. However, for our present purposes, for the accuracy that we need at first, we need not be very careful about defining things precisely. 
Perhaps you say, “That’s a terrible thing—I learned that in science we have to define everything precisely.” We cannot define anything precisely! If we attempt to, we get into that paralysis of thought that comes to philosophers, who sit opposite each other, one saying to the other, “You don’t know what you are talking about!” The second one says, “What do you mean by know? What do you mean by talking? What do you mean by you?,” and so on. In order to be able to talk constructively, we just have to agree that we are talking about roughly the same thing. You know as much about time as we need for the present, but remember that there are some subtleties that have to be discussed; we shall discuss them later. Another subtlety involved, and already mentioned, is that it should be possible to imagine that the moving point we are observing is always located somewhere. (Of course when we are looking at it, there it is, but maybe when we look away it isn’t there.) It turns out that in the motion of atoms, that idea also is false—we cannot find a marker on an atom and watch it move. That subtlety we shall have to get around in quantum mechanics. But we are first going to learn what the problems are before introducing the complications, and then we shall be in a better position to make corrections, in the light of the more recent knowledge of the subject. We shall, therefore, take a simple point of view about time and space. We know what these concepts are in a rough way, and those who have driven a car know what speed means. |
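The falling-ball table of the previous discussion can be regenerated entirely from its formula; a short added check (not part of the lecture), using Eq. (8.1):

```python
# Reproducing the first entries of Table 8–2 from Eq. (8.1), s = 16 t²
# (t in seconds, s in feet), and checking the values quoted in the text.
def s(t):
    return 16 * t**2

for t in range(4):
    print(t, s(t), "ft")    # 0, 16, 64, 144 ft at t = 0, 1, 2, 3 sec
```

This is exactly the sense in which a formula is a complete description: every entry of the table, and every intermediate position as well, is contained in $s=16t^2$.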
|
1 | 8 | Motion | 2 | Speed | Even though we know roughly what “speed” means, there are still some rather deep subtleties; consider that the learned Greeks were never able to adequately describe problems involving velocity. The subtlety comes when we try to comprehend exactly what is meant by “speed.” The Greeks got very confused about this, and a new branch of mathematics had to be discovered beyond the geometry and algebra of the Greeks, Arabs, and Babylonians. As an illustration of the difficulty, try to solve this problem by sheer algebra: A balloon is being inflated so that the volume of the balloon is increasing at the rate of $100$ cm³ per second; at what speed is the radius increasing when the volume is $1000$ cm³? The Greeks were somewhat confused by such problems, being helped, of course, by some very confusing Greeks. To show that there were difficulties in reasoning about speed at the time, Zeno produced a large number of paradoxes, of which we shall mention one to illustrate his point that there are obvious difficulties in thinking about motion. “Listen,” he says, “to the following argument: Achilles runs $10$ times as fast as a tortoise, nevertheless he can never catch the tortoise. For, suppose that they start in a race where the tortoise is $100$ meters ahead of Achilles; then when Achilles has run the $100$ meters to the place where the tortoise was, the tortoise has proceeded $10$ meters, having run one-tenth as fast. Now, Achilles has to run another $10$ meters to catch up with the tortoise, but on arriving at the end of that run, he finds that the tortoise is still $1$ meter ahead of him; running another meter, he finds the tortoise $10$ centimeters ahead, and so on, ad infinitum. Therefore, at any moment the tortoise is always ahead of Achilles and Achilles can never catch up with the tortoise.” What is wrong with that? 
It is that a finite amount of time can be divided into an infinite number of pieces, just as a length of line can be divided into an infinite number of pieces by dividing repeatedly by two. And so, although there are an infinite number of steps (in the argument) to the point at which Achilles reaches the tortoise, it doesn’t mean that there is an infinite amount of time. We can see from this example that there are indeed some subtleties in reasoning about speed. In order to get to the subtleties in a clearer fashion, we remind you of a joke which you surely must have heard. At the point where the lady in the car is caught by a cop, the cop comes up to her and says, “Lady, you were going $60$ miles an hour!” She says, “That’s impossible, sir, I was travelling for only seven minutes. It is ridiculous—how can I go $60$ miles an hour when I wasn’t going an hour?” How would you answer her if you were the cop? Of course, if you were really the cop, then no subtleties are involved; it is very simple: you say, “Tell that to the judge!” But let us suppose that we do not have that escape and we make a more honest, intellectual attack on the problem, and try to explain to this lady what we mean by the idea that she was going $60$ miles an hour. Just what do we mean? We say, “What we mean, lady, is this: if you kept on going the same way as you are going now, in the next hour you would go $60$ miles.” She could say, “Well, my foot was off the accelerator and the car was slowing down, so if I kept on going that way it would not go $60$ miles.” Or consider the falling ball and suppose we want to know its speed at the time three seconds if the ball kept on going the way it is going. What does that mean—kept on accelerating, going faster? No—kept on going with the same velocity. But that is what we are trying to define! For if the ball keeps on going the way it is going, it will just keep on going the way it is going. Thus we need to define the velocity better. 
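The resolution of Zeno’s paradox above is just the convergence of a geometric series, which can be exhibited numerically; a small added sketch using the numbers from the story (a 100-meter head start, a tenfold speed ratio):

```python
# Zeno's stages form a geometric series: Achilles gains in steps of
# 100 m, 10 m, 1 m, ...; infinitely many stages, but a finite total.
total = 0.0
step = 100.0
for _ in range(60):     # far more stages than double precision can resolve
    total += step
    step /= 10.0
print(total)            # approaches 100/(1 - 1/10) = 1000/9 = 111.11... meters
```

Achilles passes the tortoise at the 111.11… meter mark, and since his speed is finite, the corresponding total time is finite too, even though the argument slices it into infinitely many pieces.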
What has to be kept the same? The lady can also argue this way: “If I kept on going the way I’m going for one more hour, I would run into that wall at the end of the street!” It is not so easy to say what we mean. Many physicists think that measurement is the only definition of anything. Obviously, then, we should use the instrument that measures the speed—the speedometer—and say, “Look, lady, your speedometer reads $60$.” So she says, “My speedometer is broken and didn’t read at all.” Does that mean the car is standing still? We believe that there is something to measure before we build the speedometer. Only then can we say, for example, “The speedometer isn’t working right,” or “the speedometer is broken.” That would be a meaningless sentence if the velocity had no meaning independent of the speedometer. So we have in our minds, obviously, an idea that is independent of the speedometer, and the speedometer is meant only to measure this idea. So let us see if we can get a better definition of the idea. We say, “Yes, of course, before you went an hour, you would hit that wall, but if you went one second, you would go $88$ feet; lady, you were going $88$ feet per second, and if you kept on going, the next second it would be $88$ feet, and the wall down there is farther away than that.” She says, “Yes, but there’s no law against going $88$ feet per second! There is only a law against going $60$ miles an hour.” “But,” we reply, “it’s the same thing.” If it is the same thing, it should not be necessary to go into this circumlocution about $88$ feet per second. In fact, the falling ball could not keep going the same way even one second because it would be changing speed, and we shall have to define speed somehow. Now we seem to be getting on the right track; it goes something like this: If the lady kept on going for another $1/1000$ of an hour, she would go $1/1000$ of $60$ miles. 
In other words, she does not have to keep on going for the whole hour; the point is that for a moment she is going at that speed. Now what that means is that if she went just a little bit more in time, the extra distance she goes would be the same as that of a car that goes at a steady speed of $60$ miles an hour. Perhaps the idea of the $88$ feet per second is right; we see how far she went in the last second, divide by $88$ feet, and if it comes out $1$ the speed was $60$ miles an hour. In other words, we can find the speed in this way: We ask, how far do we go in a very short time? We divide that distance by the time, and that gives the speed. But the time should be made as short as possible, the shorter the better, because some change could take place during that time. If we take the time of a falling body as an hour, the idea is ridiculous. If we take it as a second, the result is pretty good for a car, because there is not much change in speed, but not for a falling body; so in order to get the speed more and more accurately, we should take a smaller and smaller time interval. What we should do is take a millionth of a second, find out how far the car has gone, and divide that distance by a millionth of a second. The result gives the distance per second, which is what we mean by the velocity, so we can define it that way. That is a successful answer for the lady, or rather, that is the definition that we are going to use. The foregoing definition involves a new idea, an idea that was not available to the Greeks in a general form. That idea was to take an infinitesimal distance and the corresponding infinitesimal time, form the ratio, and watch what happens to that ratio as the time that we use gets smaller and smaller and smaller. In other words, take a limit of the distance travelled divided by the time required, as the time taken gets smaller and smaller, ad infinitum. 
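The figure of $88$ feet per second that runs through this dialogue is nothing but a change of units, which can be checked in one line (an added aside, not part of the lecture):

```python
# 60 miles per hour expressed in feet per second: the cop's two
# statements are the same speed in different units.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600
v_ft_per_sec = 60 * FEET_PER_MILE / SECONDS_PER_HOUR
print(v_ft_per_sec)     # 88.0 ft/sec, the figure used in the argument
```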
This idea was invented by Newton and by Leibniz, independently, and is the beginning of a new branch of mathematics, called the differential calculus. Calculus was invented in order to describe motion, and its first application was to the problem of defining what is meant by going “$60$ miles an hour.” Let us try to define velocity a little better. Suppose that in a short time, $\epsilon$, the car or other body goes a short distance $x$; then the velocity, $v$, is defined as \begin{equation*} v=x/\epsilon, \end{equation*} an approximation that becomes better and better as the $\epsilon$ is taken smaller and smaller. If a mathematical expression is desired, we can say that the velocity equals the limit as the $\epsilon$ is made to go smaller and smaller in the expression $x/\epsilon$, or \begin{equation} \label{Eq:I:8:3} v=\lim_{\epsilon\to0}\frac{x}{\epsilon}. \end{equation} We cannot do the same thing with the lady in the car, because the table is incomplete. We know only where she was at intervals of one minute; we can get a rough idea that she was going $5000$ ft/min during the $7$th minute, but we do not know, at exactly the moment $7$ minutes, whether she had been speeding up and the speed was $4900$ ft/min at the beginning of the $7$th minute, and is now $5100$ ft/min, or something else, because we do not have the exact details in between. So only if the table were completed with an infinite number of entries could we really calculate the velocity from such a table. On the other hand, when we have a complete mathematical formula, as in the case of a falling body (Eq. 8.1), then it is possible to calculate the velocity, because we can calculate the position at any time whatsoever. Let us take as an example the problem of determining the velocity of the falling ball at the particular time $5$ seconds.
One way to do this is to see from Table 8–2 what it did in the $5$th second; it went $400-256=144$ ft, so it is going $144$ ft/sec; however, that is wrong, because the speed is changing; on the average it is $144$ ft/sec during this interval, but the ball is speeding up and is really going faster than $144$ ft/sec. We want to find out exactly how fast. The technique involved in this process is the following: We know where the ball was at $5$ sec. At $5.1$ sec, the distance that it has gone all together is $16(5.1)^2=416.16$ ft (see Eq. 8.1). At $5$ sec it had already fallen $400$ ft; in the last tenth of a second it fell $416.16-400=16.16$ ft. Since $16.16$ ft in $0.1$ sec is the same as $161.6$ ft/sec, that is the speed more or less, but it is not exactly correct. Is that the speed at $5$, or at $5.1$, or halfway between at $5.05$ sec, or when is that the speed? Never mind—the problem was to find the speed at $5$ seconds, and we do not have exactly that; we have to do a better job. So, we take one-thousandth of a second more than $5$ sec, or $5.001$ sec, and calculate the total fall as \begin{equation*} s=16(5.001)^2=16(25.010001)=400.160016\text{ ft}. \end{equation*}
In the last $0.001$ sec the ball fell $0.160016$ ft, and if we divide this number by $0.001$ sec we obtain the speed as $160.016$ ft/sec. That is closer, very close, but it is still not exact. It should now be evident what we must do to find the speed exactly. To perform the mathematics we state the problem a little more abstractly: to find the velocity at a special time, $t_0$, which in the original problem was $5$ sec. Now the distance at $t_0$, which we call $s_0$, is $16t_0^2$, or $400$ ft in this case. In order to find the velocity, we ask, “At the time $t_0+(\text{a little bit})$, or $t_0+\epsilon$, where is the body?” The new position is $16(t_0+\epsilon)^2=16t_0^2+32t_0\epsilon+16\epsilon^2$. So it is farther along than it was before, because before it was only $16t_0^2$. This distance we shall call $s_0+(\text{a little bit more})$, or $s_0+x$ (if $x$ is the extra bit). Now if we subtract the distance at $t_0$ from the distance at $t_0+\epsilon$, we get $x$, the extra distance gone, as $x=32t_0\cdot\epsilon+16\epsilon^2$. Our first approximation to the velocity is \begin{equation} \label{Eq:I:8:4} v=\frac{x}{\epsilon}=32t_0+16\epsilon. \end{equation} The true velocity is the value of this ratio, $x/\epsilon$, when $\epsilon$ becomes vanishingly small. In other words, after forming the ratio, we take the limit as $\epsilon$ gets smaller and smaller, that is, approaches $0$. The equation reduces to, \begin{equation*} v\,(\text{at time $t_0$})=32t_0. \end{equation*} In our problem, $t_0=5$ sec, so the solution is $v=$ $32\times5=$ $160$ ft/sec. A few lines above, where we took $\epsilon$ as $0.1$ and $0.001$ sec successively, the value we got for $v$ was a little more than this, but now we see that the actual velocity is precisely $160$ ft/sec. |
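The shrinking-interval computation just carried out can be automated; a short added sketch (not part of the lecture) that repeats the text’s arithmetic for ever smaller $\epsilon$:

```python
# Estimating the falling ball's speed at t0 = 5 sec by shrinking the
# extra time eps, as in the text: v ≈ (s(t0 + eps) - s(t0)) / eps.
def s(t):
    return 16 * t**2          # Eq. (8.1), distance fallen in feet

t0 = 5.0
for eps in (0.1, 0.001, 1e-6):
    v = (s(t0 + eps) - s(t0)) / eps
    print(eps, v)             # about 161.6, then 160.016, closing in on 160
```

Each halving of the interval halves the excess $16\epsilon$ in Eq. (8.4), and the sequence of estimates converges on the true velocity $32t_0=160$ ft/sec.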
|
1 | 8 | Motion | 3 | Speed as a derivative | The procedure we have just carried out is performed so often in mathematics that for convenience special notations have been assigned to our quantities $\epsilon$ and $x$. In this notation, the $\epsilon$ used above becomes $\Delta t$ and $x$ becomes $\Delta s$. This $\Delta t$ means “an extra bit of $t$,” and carries an implication that it can be made smaller. The prefix $\Delta$ is not a multiplier, any more than $\sin\theta$ means $\text{s}\cdot\text{i}\cdot\text{n}\cdot\theta$—it simply defines a time increment, and reminds us of its special character. $\Delta s$ has an analogous meaning for the distance $s$. Since $\Delta$ is not a factor, it cannot be cancelled in the ratio $\Delta s/\Delta t$ to give $s/t$, any more than the ratio $\sin\theta/\sin2\theta$ can be reduced to $1/2$ by cancellation. In this notation, velocity is equal to the limit of $\Delta s/\Delta t$ when $\Delta t$ gets smaller, or \begin{equation} \label{Eq:I:8:5} v=\lim_{\Delta t\to0}\frac{\Delta s}{\Delta t}. \end{equation} This is really the same as our previous expression (8.3) with $\epsilon$ and $x$, but it has the advantage of showing that something is changing, and it keeps track of what is changing. Incidentally, to a good approximation we have another law, which says that the change in distance of a moving point is the velocity times the time interval, or $\Delta s=v\,\Delta t$. This statement is true only if the velocity is not changing during that time interval, and this condition is true only in the limit as $\Delta t$ goes to $0$. Physicists like to write it $ds=v\,dt$, because by $dt$ they mean $\Delta t$ in circumstances in which it is very small; with this understanding, the expression is valid to a close approximation. If $\Delta t$ is too long, the velocity might change during the interval, and the approximation would become less accurate. For a time $dt$, approaching zero, $ds=v\,dt$ precisely. 
In this notation we can write (8.5) as \begin{equation*} v=\lim_{\Delta t\to0}\frac{\Delta s}{\Delta t}=\ddt{s}{t}. \end{equation*} The quantity $ds/dt$ which we found above is called the “derivative of $s$ with respect to $t$” (this language helps to keep track of what was changed), and the complicated process of finding it is called finding a derivative, or differentiating. The $ds$’s and $dt$’s which appear separately are called differentials. To familiarize you with the words, we say we found the derivative of the function $16t^2$, or the derivative (with respect to $t$) of $16t^2$ is $32t$. When we get used to the words, the ideas are more easily understood. For practice, let us find the derivative of a more complicated function. We shall consider the formula $s=At^3+Bt+C$, which might describe the motion of a point. The letters $A$, $B$, and $C$ represent constant numbers, as in the familiar general form of a quadratic equation. Starting from the formula for the motion, we wish to find the velocity at any time. To find the velocity in the more elegant manner, we change $t$ to $t+\Delta t$ and note that $s$ is then changed to $s+\text{some } \Delta s$; then we find the $\Delta s$ in terms of $\Delta t$. That is to say, \begin{align*} s+\Delta s&=A(t+\Delta t)^3+B(t+\Delta t)+C\\[1ex] &=At^3+Bt+C+3At^2\,\Delta t+B\,\Delta t+3At(\Delta t)^2+ A(\Delta t)^3, \end{align*}
\begin{align*} s+\Delta s&=A(t+\Delta t)^3+B(t+\Delta t)+C\\[1ex] &=At^3+Bt+C+3At^2\,\Delta t+B\,\Delta t\\[1ex] &\phantom{= At^3~}+3At(\Delta t)^2+ A(\Delta t)^3, \end{align*} but since \begin{equation*} s=At^3+Bt+C, \end{equation*} we find that \begin{equation*} \Delta s=3At^2\,\Delta t+B\,\Delta t+3At(\Delta t)^2+A(\Delta t)^3. \end{equation*} But we do not want $\Delta s$—we want $\Delta s$ divided by $\Delta t$. We divide the preceding equation by $\Delta t$, getting \begin{equation*} \frac{\Delta s}{\Delta t}=3At^2+B+3At(\Delta t)+A(\Delta t)^2. \end{equation*} As $\Delta t$ goes toward $0$ the limit of $\Delta s/\Delta t$ is $ds/dt$ and is equal to \begin{equation*} \ddt{s}{t}=3At^2+B. \end{equation*} This is the fundamental process of calculus, differentiating functions. The process is even more simple than it appears. Observe that when these expansions contain any term with a square or a cube or any higher power of $\Delta t$, such terms may be dropped at once, since they will go to $0$ when the limit is taken. After a little practice the process gets easier because one knows what to leave out. There are many rules or formulas for differentiating various types of functions. These can be memorized, or can be found in tables. A short list is found in Table 8–3. $s$, $u$, $v$, $w$ are arbitrary functions of $t$; $a$, $b$, $c$, and $n$ are arbitrary constants |
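The result $ds/dt=3At^2+B$ can be spot-checked numerically. In the sketch below the constants $A$, $B$, $C$ and the instant $t$ are merely sample values:

```python
# Numerical spot-check that d/dt (A t^3 + B t + C) = 3 A t^2 + B.
# The constants and the instant t are arbitrary sample values.

A, B, C = 2.0, -5.0, 7.0

def s(t):
    return A * t**3 + B * t + C

def v(t):                            # the derivative worked out above
    return 3 * A * t**2 + B

t, dt = 3.0, 1e-6
quotient = (s(t + dt) - s(t)) / dt   # delta_s / delta_t for a tiny delta_t
print(quotient, v(t))                # the two agree to several figures
```

The terms with $(\Delta t)^2$ and $(\Delta t)^3$ are what make the difference quotient miss the derivative slightly; shrinking $dt$ further shrinks the discrepancy.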
|
1 | 8 | Motion | 4 | Distance as an integral | Now we have to discuss the inverse problem. Suppose that instead of a table of distances, we have a table of speeds at different times, starting from zero. For the falling ball, such speeds and times are shown in Table 8–4. A similar table could be constructed for the velocity of the car, by recording the speedometer reading every minute or half-minute. If we know how fast the car is going at any time, can we determine how far it goes? This problem is just the inverse of the one solved above; we are given the velocity and asked to find the distance. How can we find the distance if we know the speed? If the speed of the car is not constant, and the lady goes sixty miles an hour for a moment, then slows down, speeds up, and so on, how can we determine how far she has gone? That is easy. We use the same idea, and express the distance in terms of infinitesimals. Let us say, “In the first second her speed was such and such, and from the formula $\Delta s=v\,\Delta t$ we can calculate how far the car went the first second at that speed.” Now in the next second her speed is nearly the same, but slightly different; we can calculate how far she went in the next second by taking the new speed times the time. We proceed similarly for each second, to the end of the run. We now have a number of little distances, and the total distance will be the sum of all these little pieces. That is, the distance will be the sum of the velocities times the times, or $s=\sum v\,\Delta t$, where the Greek letter $\sum$ (sigma) is used to denote addition. To be more precise, it is the sum of the velocity at a certain time, let us say the $i$-th time, multiplied by $\Delta t$. \begin{equation} \label{Eq:I:8:6} s=\sum_iv(t_i)\,\Delta t. \end{equation} The rule for the times is that $t_{i+1}=t_i+\Delta t$. However, the distance we obtain by this method will not be correct, because the velocity changes during the time interval $\Delta t$. 
If we take the times short enough, the sum is precise, so we take them smaller and smaller until we obtain the desired accuracy. The true $s$ is \begin{equation} \label{Eq:I:8:7} s=\lim_{\Delta t\to0}\sum_iv(t_i)\,\Delta t. \end{equation} The mathematicians have invented a symbol for this limit, analogous to the symbol for the differential. The $\Delta$ turns into a $d$ to remind us that the time is as small as it can be; the velocity is then called $v$ at the time $t$, and the addition is written as a sum with a great “$s$,” $\int$ (from the Latin summa), which has become distorted and is now unfortunately just called an integral sign. Thus we write \begin{equation} \label{Eq:I:8:8} s=\int v(t)\,dt. \end{equation} This process of adding all these terms together is called integration, and it is the opposite process to differentiation. The derivative of this integral is $v$, so one operator ($d$) undoes the other ($\int$). One can get formulas for integrals by taking the formulas for derivatives and running them backwards, because they are related to each other inversely. Thus one can work out his own table of integrals by differentiating all sorts of functions. For every formula with a differential, we get an integral formula if we turn it around. Every function can be differentiated analytically, i.e., the process can be carried out algebraically, and leads to a definite function. But it is not possible in a simple manner to write an analytical value for any integral at will. You can calculate it, for instance, by doing the above sum, and then doing it again with a finer interval $\Delta t$ and again with a finer interval until you have it nearly right. In general, given some particular function, it is not possible to find, analytically, what the integral is. 
One may always try to find a function which, when differentiated, gives some desired function; but one may not find it, and it may not exist, in the sense of being expressible in terms of functions that have already been given names. |
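The limit in Eq. (8.7) can be watched happening in a short computation. The sketch below carries out the sum for the falling ball, $v=32t$, from $t=0$ to $t=4$ sec, where the exact distance is $16t^2=256$ ft; the choice of end time is arbitrary:

```python
# The sum of Eq. (8.7) carried out for the falling ball, v = 32 t,
# from t = 0 to t = 4 sec.  The exact distance is 16 t^2 = 256 ft;
# the sum approaches it as the interval delta_t is made finer.

def v(t):
    return 32 * t

def distance(T, dt):
    n = int(round(T / dt))                   # number of little intervals
    return sum(v(i * dt) * dt for i in range(n))

for dt in (1.0, 0.1, 0.001):
    print(dt, distance(4.0, dt))             # approaches 256 as dt shrinks
```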
|
1 | 8 | Motion | 5 | Acceleration | The next step in developing the equations of motion is to introduce another idea which goes beyond the concept of velocity to that of change of velocity, and we now ask, “How does the velocity change?” In previous chapters we have discussed cases in which forces produce changes in velocity. You may have heard with great excitement about some car that can get from rest to $60$ miles an hour in ten seconds flat. From such a performance we can see how fast the speed changes, but only on the average. What we shall now discuss is the next level of complexity, which is how fast the velocity is changing. In other words, by how many feet per second does the velocity change in a second, that is, how many feet per second, per second? We previously derived the formula for the velocity of a falling body as $v=32t$, which is charted in Table 8–4, and now we want to find out how much the velocity changes per second; this quantity is called the acceleration. Acceleration is defined as the time rate of change of velocity. From the preceding discussion we know enough already to write the acceleration as the derivative $dv/dt$, in the same way that the velocity is the derivative of the distance. If we now differentiate the formula $v=32t$ we obtain, for a falling body, \begin{equation} \label{Eq:I:8:9} a=\ddt{v}{t}=32. \end{equation} [To differentiate the term $32t$ we can utilize the result obtained in a previous problem, where we found that the derivative of $Bt$ is simply $B$ (a constant). So by letting $B=32$, we have at once that the derivative of $32t$ is $32$.] This means that the velocity of a falling body is changing by $32$ feet per second, per second always. We also see from Table 8–4 that the velocity increases by $32$ ft/sec in each second. This is a very simple case, for accelerations are usually not constant. 
The reason the acceleration is constant here is that the force on the falling body is constant, and Newton’s law says that the acceleration is proportional to the force. As a further example, let us find the acceleration in the problem we have already solved for the velocity. Starting with \begin{equation*} s=At^3+Bt+C \end{equation*} we obtained, for $v=ds/dt$, \begin{equation*} v=3At^2+B. \end{equation*} Since acceleration is the derivative of the velocity with respect to the time, we need to differentiate the last expression above. Recall the rule that the derivative of the two terms on the right equals the sum of the derivatives of the individual terms. To differentiate the first of these terms, instead of going through the fundamental process again we note that we have already differentiated a quadratic term when we differentiated $16t^2$, and the effect was to double the numerical coefficient and change the $t^2$ to $t$; let us assume that the same thing will happen this time, and you can check the result yourself. The derivative of $3At^2$ will then be $6At$. Next we differentiate $B$, a constant term; but by a rule stated previously, the derivative of $B$ is zero; hence this term contributes nothing to the acceleration. The final result, therefore, is $a=$ $dv/dt=$ $6At$. For reference, we state two very useful formulas, which can be obtained by integration. If a body starts from rest and moves with a constant acceleration, $g$, its velocity $v$ at any time $t$ is given by \begin{equation*} v=gt. \end{equation*} The distance it covers in the same time is \begin{equation*} s=\tfrac{1}{2}gt^2. \end{equation*} Various mathematical notations are used in writing derivatives. Since velocity is $ds/dt$ and acceleration is the time derivative of the velocity, we can also write \begin{equation} \label{Eq:I:8:10} a=\ddt{}{t}\biggl(\ddt{s}{t}\biggr)=\frac{d^2s}{dt^2}, \end{equation} which are common ways of writing a second derivative. 
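The two formulas just stated can be checked by carrying out the integrations numerically, adding up $dv=g\,dt$ and $ds=v\,dt$ step by step; the step size and the total time in this sketch are arbitrary choices:

```python
# Accumulate dv = g dt and ds = v dt in tiny steps for a body starting
# from rest with constant acceleration g = 32 ft/sec^2, out to t = 2 sec.
# Expect v close to g*t = 64 ft/sec and s close to (1/2) g t^2 = 64 ft.

g, dt = 32.0, 1e-4
v, s = 0.0, 0.0
steps = int(round(2.0 / dt))     # 20000 steps of 0.0001 sec
for i in range(steps):
    v += g * dt                  # the velocity picked up in this step
    s += v * dt                  # the distance covered in this step
print(v, s)
```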
We have another law that the velocity is equal to the integral of the acceleration. This is just the opposite of $a=dv/dt$; we have already seen that distance is the integral of the velocity, so distance can be found by twice integrating the acceleration. In the foregoing discussion the motion was in only one dimension, and space permits only a brief discussion of motion in three dimensions. Consider a particle $P$ which moves in three dimensions in any manner whatsoever. At the beginning of this chapter, we opened our discussion of the one-dimensional case of a moving car by observing the distance of the car from its starting point at various times. We then discussed velocity in terms of changes of these distances with time, and acceleration in terms of changes in velocity. We can treat three-dimensional motion analogously. It will be simpler to illustrate the motion on a two-dimensional diagram, and then extend the ideas to three dimensions. We establish a pair of axes at right angles to each other, and determine the position of the particle at any moment by measuring how far it is from each of the two axes. Thus each position is given in terms of an $x$-distance and a $y$-distance, and the motion can be described by constructing a table in which both these distances are given as functions of time. (Extension of this process to three dimensions requires only another axis, at right angles to the first two, and measuring a third distance, the $z$-distance. The distances are now measured from coordinate planes instead of lines.) Having constructed a table with $x$- and $y$-distances, how can we determine the velocity? We first find the components of velocity in each direction. The horizontal part of the velocity, or $x$-component, is the derivative of the $x$-distance with respect to the time, or \begin{equation} \label{Eq:I:8:11} v_x =dx/dt. 
\end{equation} Similarly, the vertical part of the velocity, or $y$-component, is \begin{equation} \label{Eq:I:8:12} v_y = dy/dt. \end{equation} In the third dimension, \begin{equation} \label{Eq:I:8:13} v_z = dz/dt. \end{equation} Now, given the components of velocity, how can we find the velocity along the actual path of motion? In the two-dimensional case, consider two successive positions of the particle, separated by a short distance $\Delta s$ and a short time interval $t_2-t_1=\Delta t$. In the time $\Delta t$ the particle moves horizontally a distance $\Delta x\approx v_x\,\Delta t$, and vertically a distance $\Delta y\approx v_y\,\Delta t$. (The symbol “$\approx$” is read “is approximately.”) The actual distance moved is approximately \begin{equation} \label{Eq:I:8:14} \Delta s\approx\sqrt{(\Delta x)^2+(\Delta y)^2}, \end{equation} as shown in Fig. 8–3. The approximate velocity during this interval can be obtained by dividing by $\Delta t$ and by letting $\Delta t$ go to $0$, as at the beginning of the chapter. We then get the velocity as
\begin{align} v=\ddt{s}{t}&=\sqrt{(dx/dt)^2+(dy/dt)^2}\notag\\[.5ex] \label{Eq:I:8:15} &=\sqrt{v_x^2+v_y^2}. \end{align} For three dimensions the result is \begin{equation} \label{Eq:I:8:16} v=\sqrt{v_x^2+v_y^2+v_z^2}. \end{equation} In the same way as we defined velocities, we can define accelerations: we have an $x$-component of acceleration $a_x$, which is the derivative of $v_x$, the $x$-component of the velocity (that is, $a_x=d^2x/dt^2$, the second derivative of $x$ with respect to $t$), and so on. Let us consider one nice example of compound motion in a plane. We shall take a motion in which a ball moves horizontally with a constant velocity $u$, and at the same time goes vertically downward with a constant acceleration $-g$; what is the motion? We can say $dx/dt=$ $v_x=$ $u$. Since the velocity $v_x$ is constant, \begin{equation} \label{Eq:I:8:17} x=ut, \end{equation} and since the downward acceleration $-g$ is constant, the distance $y$ the object falls can be written as \begin{equation} \label{Eq:I:8:18} y=-\tfrac{1}{2}gt^2. \end{equation} What is the curve of its path, i.e., what is the relation between $y$ and $x$? We can eliminate $t$ from Eq. (8.18), since $t=x/u$. When we make this substitution we find that \begin{equation} \label{Eq:I:8:19} y=-\frac{g}{2u^2}\,x^2. \end{equation} This relation between $y$ and $x$ may be considered as the equation of the path of the moving ball. When this equation is plotted we obtain a curve that is called a parabola; any freely falling body that is shot out in any direction will travel in a parabola, as shown in Fig. 8–4. |
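The substitution $t=x/u$ can be verified point by point in a short sketch; the horizontal speed $u$ and the acceleration $g$ below are arbitrary sample values:

```python
# Each computed point of the motion x = u t, y = -(1/2) g t^2 should fall
# on the parabola y = -(g / (2 u^2)) x^2 of Eq. (8.19).  The values of u
# and g are sample numbers chosen for illustration.

u, g = 10.0, 32.0
for t in (0.5, 1.0, 1.5, 2.0):
    x = u * t                            # Eq. (8.17)
    y = -0.5 * g * t**2                  # Eq. (8.18)
    y_parabola = -(g / (2 * u**2)) * x**2
    assert abs(y - y_parabola) < 1e-9    # same point, by the substitution t = x/u
print("every point lies on the parabola")
```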
|
1 | 9 | Newton’s Laws of Dynamics | 1 | Momentum and force | The discovery of the laws of dynamics, or the laws of motion, was a dramatic moment in the history of science. Before Newton’s time, the motions of things like the planets were a mystery, but after Newton there was complete understanding. Even the slight deviations from Kepler’s laws, due to the perturbations of the planets, were computable. The motions of pendulums, oscillators with springs and weights in them, and so on, could all be analyzed completely after Newton’s laws were enunciated. So it is with this chapter: before this chapter we could not calculate how a mass on a spring would move; much less could we calculate the perturbations on the planet Uranus due to Jupiter and Saturn. After this chapter we will be able to compute not only the motion of the oscillating mass, but also the perturbations on the planet Uranus produced by Jupiter and Saturn! Galileo made a great advance in the understanding of motion when he discovered the principle of inertia: if an object is left alone, is not disturbed, it continues to move with a constant velocity in a straight line if it was originally moving, or it continues to stand still if it was just standing still. Of course this never appears to be the case in nature, for if we slide a block across a table it stops, but that is because it is not left to itself—it is rubbing against the table. It required a certain imagination to find the right rule, and that imagination was supplied by Galileo. Of course, the next thing which is needed is a rule for finding how an object changes its speed if something is affecting it. That is the contribution of Newton. Newton wrote down three laws: The First Law was a mere restatement of the Galilean principle of inertia just described. The Second Law gave a specific way of determining how the velocity changes under different influences called forces.
The Third Law describes the forces to some extent, and we shall discuss that at another time. Here we shall discuss only the Second Law, which asserts that the motion of an object is changed by forces in this way: the time-rate-of-change of a quantity called momentum is proportional to the force. We shall state this mathematically shortly, but let us first explain the idea. Momentum is not the same as velocity. A lot of words are used in physics, and they all have precise meanings in physics, although they may not have such precise meanings in everyday language. Momentum is an example, and we must define it precisely. If we exert a certain push with our arms on an object that is light, it moves easily; if we push just as hard on another object that is much heavier in the usual sense, then it moves much less rapidly. Actually, we must change the words from “light” and “heavy” to less massive and more massive, because there is a difference to be understood between the weight of an object and its inertia. (How hard it is to get it going is one thing, and how much it weighs is something else.) Weight and inertia are proportional, and on the earth’s surface are often taken to be numerically equal, which causes a certain confusion to the student. On Mars, weights would be different but the amount of force needed to overcome inertia would be the same. We use the term mass as a quantitative measure of inertia, and we may measure mass, for example, by swinging an object in a circle at a certain speed and measuring how much force we need to keep it in the circle. In this way we find a certain quantity of mass for every object. Now the momentum of an object is a product of two parts: its mass and its velocity. Thus Newton’s Second Law may be written mathematically this way: \begin{equation} \label{Eq:I:9:1} F=\ddt{}{t}(mv). \end{equation} Now there are several points to be considered. 
In writing down any law such as this, we use many intuitive ideas, implications, and assumptions which are at first combined approximately into our “law.” Later we may have to come back and study in greater detail exactly what each term means, but if we try to do this too soon we shall get confused. Thus at the beginning we take several things for granted. First, that the mass of an object is constant; it isn’t really, but we shall start out with the Newtonian approximation that mass is constant, the same all the time, and that, further, when we put two objects together, their masses add. These ideas were of course implied by Newton when he wrote his equation, for otherwise it is meaningless. For example, suppose the mass varied inversely as the velocity; then the momentum would never change in any circumstance, so the law means nothing unless you know how the mass changes with velocity. At first we say, it does not change. Then there are some implications concerning force. As a rough approximation we think of force as a kind of push or pull that we make with our muscles, but we can define it more accurately now that we have this law of motion. The most important thing to realize is that this relationship involves not only changes in the magnitude of the momentum or of the velocity but also in their direction. If the mass is constant, then Eq. (9.1) can also be written as \begin{equation} \label{Eq:I:9:2} F=m\,\ddt{v}{t}=ma. \end{equation} The acceleration $a$ is the rate of change of the velocity, and Newton’s Second Law says more than that the effect of a given force varies inversely as the mass; it says also that the direction of the change in the velocity and the direction of the force are the same. 
Thus we must understand that a change in a velocity, or an acceleration, has a wider meaning than in common language: The velocity of a moving object can change by its speeding up, slowing down (when it slows down, we say it accelerates with a negative acceleration), or changing its direction of motion. An acceleration at right angles to the velocity was discussed in Chapter 7. There we saw that an object moving in a circle of radius $R$ with a certain speed $v$ along the circle falls away from a straight-line path by a distance equal to $\tfrac{1}{2}(v^2/R)t^2$ if $t$ is very small. Thus the formula for acceleration at right angles to the motion is \begin{equation} \label{Eq:I:9:3} a=v^2/R, \end{equation} and a force at right angles to the velocity will cause an object to move in a curved path whose radius of curvature can be found by dividing the force by the mass to get the acceleration, and then using (9.3). |
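Equation (9.3) can be illustrated numerically: take uniform circular motion, form second differences of the coordinates to get the acceleration, and compare its magnitude with $v^2/R$. The radius, speed, instant, and step size in this sketch are all sample values:

```python
# A numerical illustration of a = v^2/R: for uniform circular motion
# x = R cos(vt/R), y = R sin(vt/R), second differences of the coordinates
# give an acceleration whose magnitude is v^2/R.

import math

R, v = 2.0, 3.0                          # sample radius (ft) and speed (ft/sec)
w = v / R                                # angular speed along the circle

def position(t):
    return R * math.cos(w * t), R * math.sin(w * t)

t, dt = 0.7, 1e-4                        # an arbitrary instant, a small step
(xa, ya), (xb, yb), (xc, yc) = position(t - dt), position(t), position(t + dt)
ax = (xa - 2 * xb + xc) / dt**2          # numerical second derivative of x
ay = (ya - 2 * yb + yc) / dt**2
print(math.hypot(ax, ay), v**2 / R)      # both close to 4.5 ft/sec^2
```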
|
1 | 9 | Newton’s Laws of Dynamics | 2 | Speed and velocity | In order to make our language more precise, we shall make one further definition in our use of the words speed and velocity. Ordinarily we think of speed and velocity as being the same, and in ordinary language they are the same. But in physics we have taken advantage of the fact that there are two words and have chosen to use them to distinguish two ideas. We carefully distinguish velocity, which has both magnitude and direction, from speed, which we choose to mean the magnitude of the velocity, but which does not include the direction. We can formulate this more precisely by describing how the $x$-, $y$-, and $z$-coordinates of an object change with time. Suppose, for example, that at a certain instant an object is moving as shown in Fig. 9–1. In a given small interval of time $\Delta t$ it will move a certain distance $\Delta x$ in the $x$-direction, $\Delta y$ in the $y$-direction, and $\Delta z$ in the $z$-direction. The total effect of these three coordinate changes is a displacement $\Delta s$ along the diagonal of a parallelepiped whose sides are $\Delta x$, $\Delta y$, and $\Delta z$. In terms of the velocity, the displacement $\Delta x$ is the $x$-component of the velocity times $\Delta t$, and similarly for $\Delta y$ and $\Delta z$: \begin{equation} \label{Eq:I:9:4} \Delta x=v_x\,\Delta t,\quad \Delta y=v_y\,\Delta t,\quad \Delta z=v_z\,\Delta t. \end{equation}
|
|
1 | 9 | Newton’s Laws of Dynamics | 3 | Components of velocity, acceleration, and force | In Eq. (9.4) we have resolved the velocity into components by telling how fast the object is moving in the $x$-direction, the $y$-direction, and the $z$-direction. The velocity is completely specified, both as to magnitude and direction, if we give the numerical values of its three rectangular components:
\begin{equation} \begin{aligned} v_x&=dx/dt,\\[.5ex] v_y&=dy/dt,\\[.5ex] v_z&=dz/dt. \end{aligned} \label{Eq:I:9:5} \end{equation} On the other hand, the speed of the object is \begin{equation} \label{Eq:I:9:6} ds/dt=\abs{v}=\sqrt{v_x^2+v_y^2+v_z^2}. \end{equation} Next, suppose that, because of the action of a force, the velocity changes to some other direction and a different magnitude, as shown in Fig. 9–2. We can analyze this apparently complex situation rather simply if we evaluate the changes in the $x$-, $y$-, and $z$-components of velocity. The change in the component of the velocity in the $x$-direction in a time $\Delta t$ is $\Delta v_x=a_x\,\Delta t$, where $a_x$ is what we call the $x$-component of the acceleration. Similarly, we see that $\Delta v_y=a_y\,\Delta t$ and $\Delta v_z=a_z\,\Delta t$. In these terms, we see that Newton’s Second Law, in saying that the force is in the same direction as the acceleration, is really three laws, in the sense that the component of the force in the $x$-, $y$-, or $z$-direction is equal to the mass times the rate of change of the corresponding component of velocity: \begin{equation} \begin{alignedat}{5} &F_x&&=m(dv_x&&/dt)=m(d^2x&&/dt^2)=ma_x&&,\\ &F_y&&=m(dv_y&&/dt)=m(d^2y&&/dt^2)=ma_y&&,\\ &F_z&&=m(dv_z&&/dt)=m(d^2z&&/dt^2)=ma_z&&. \end{alignedat} \label{Eq:I:9:7} \end{equation} Just as the velocity and acceleration have been resolved into components by projecting a line segment representing the quantity, and its direction onto three coordinate axes, so, in the same way, a force in a given direction is represented by certain components in the $x$-, $y$-, and $z$-directions: \begin{equation} \begin{alignedat}{3} &F_x&&=F\cos\,(x&&,F),\\ &F_y&&=F\cos\,(y&&,F),\\ &F_z&&=F\cos\,(z&&,F), \end{alignedat} \label{Eq:I:9:8} \end{equation} where $F$ is the magnitude of the force and $(x,F)$ represents the angle between the $x$-axis and the direction of $F$, etc. Newton’s Second Law is given in complete form in Eq. (9.7). 
If we know the forces on an object and resolve them into $x$-, $y$-, and $z$-components, then we can find the motion of the object from these equations. Let us consider a simple example. Suppose there are no forces in the $y$- and $z$-directions, the only force being in the $x$-direction, say vertically. Equation (9.7) tells us that there would be changes in the velocity in the vertical direction, but no changes in the horizontal direction. This was demonstrated with a special apparatus in Chapter 7 (see Fig. 7–3). A falling body moves horizontally without any change in horizontal motion, while it moves vertically the same way as it would move if the horizontal motion were zero. In other words, motions in the $x$-, $y$-, and $z$-directions are independent if the forces are not connected. |
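This independence can be seen directly by stepping the component equations (9.7); the mass, force, starting velocity, and step size below are arbitrary sample values:

```python
# Step Eq. (9.7) with a force in the vertical direction only (sample
# numbers): the horizontal velocity never changes, while the vertical
# velocity changes steadily, so the two motions are independent.

m, F_vert = 2.0, -64.0           # mass and a constant vertical force
vx, vy = 5.0, 0.0                # start with some horizontal velocity
dt = 0.01
for step in range(100):          # one second of motion
    vx += (0.0 / m) * dt         # no horizontal force: no change in vx
    vy += (F_vert / m) * dt      # vertical force: steady change in vy
print(vx, vy)                    # vx still 5.0; vy near (F/m)*t = -32
```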
|
1 | 9 | Newton’s Laws of Dynamics | 4 | What is the force? | In order to use Newton’s laws, we have to have some formula for the force; these laws say pay attention to the forces. If an object is accelerating, some agency is at work; find it. Our program for the future of dynamics must be to find the laws for the force. Newton himself went on to give some examples. In the case of gravity he gave a specific formula for the force. In the case of other forces he gave some part of the information in his Third Law, which we will study in the next chapter, having to do with the equality of action and reaction. Extending our previous example, what are the forces on objects near the earth’s surface? Near the earth’s surface, the force in the vertical direction due to gravity is proportional to the mass of the object and is nearly independent of height for heights small compared with the earth’s radius $R$: $F=$ $GmM/R^2=$ $mg$, where $g=GM/R^2$ is called the acceleration of gravity. Thus the law of gravity tells us that weight is proportional to mass; the force is in the vertical direction and is the mass times $g$. Again we find that the motion in the horizontal direction is at constant velocity. The interesting motion is in the vertical direction, and Newton’s Second Law tells us \begin{equation} \label{Eq:I:9:9} mg=m(d^2x/dt^2). \end{equation} Cancelling the $m$’s, we find that the acceleration in the $x$-direction is constant and equal to $g$. This is of course the well known law of free fall under gravity, which leads to the equations \begin{alignat}{2} v_x&=v_0&&+gt,\notag\\ \label{Eq:I:9:10} x&=x_0&&+v_0t+\tfrac{1}{2}gt^2. \end{alignat} As another example, let us suppose that we have been able to build a gadget (Fig. 9–3) which applies a force proportional to the distance and directed oppositely—a spring. 
If we forget about gravity, which is of course balanced out by the initial stretch of the spring, and talk only about excess forces, we see that if we pull the mass down, the spring pulls up, while if we push it up the spring pulls down. This machine has been designed carefully so that the force is greater, the more we pull it up, in exact proportion to the displacement from the balanced condition, and the force upward is similarly proportional to how far we pull down. If we watch the dynamics of this machine, we see a rather beautiful motion—up, down, up, down, … The question is, will Newton’s equations correctly describe this motion? Let us see whether we can exactly calculate how it moves with this periodic oscillation, by applying Newton’s law (9.7). In the present instance, the equation is \begin{equation} \label{Eq:I:9:11} -kx=m(dv_x/dt). \end{equation} Here we have a situation where the velocity in the $x$-direction changes at a rate proportional to $x$. Nothing will be gained by retaining numerous constants, so we shall imagine either that the scale of time has changed or that there is an accident in the units, so that we happen to have $k/m=1$. Thus we shall try to solve the equation \begin{equation} \label{Eq:I:9:12} dv_x/dt=-x. \end{equation} To proceed, we must know what $v_x$ is, but of course we know that the velocity is the rate of change of the position. |
|
1 | 9 | Newton’s Laws of Dynamics | 5 | Meaning of the dynamical equations | Now let us try to analyze just what Eq. (9.12) means. Suppose that at a given time $t$ the object has a certain velocity $v_x$ and position $x$. What is the velocity and what is the position at a slightly later time $t+\epsilon$? If we can answer this question our problem is solved, for then we can start with the given condition and compute how it changes for the first instant, the next instant, the next instant, and so on, and in this way we gradually evolve the motion. To be specific, let us suppose that at the time $t=0$ we are given that $x=1$ and $v_x=0$. Why does the object move at all? Because there is a force on it when it is at any position except $x=0$. If $x>0$, that force is upward. Therefore the velocity which is zero starts to change, because of the law of motion. Once it starts to build up some velocity the object starts to move up, and so on. Now at any time $t$, if $\epsilon$ is very small, we may express the position at time $t+\epsilon$ in terms of the position at time $t$ and the velocity at time $t$ to a very good approximation as \begin{equation} \label{Eq:I:9:13} x(t+\epsilon)=x(t)+\epsilon v_x(t). \end{equation} The smaller the $\epsilon$, the more accurate this expression is, but it is still usefully accurate even if $\epsilon$ is not vanishingly small. Now what about the velocity? In order to get the velocity later, the velocity at the time $t+\epsilon$, we need to know how the velocity changes, the acceleration. And how are we going to find the acceleration? That is where the law of dynamics comes in. The law of dynamics tells us what the acceleration is. It says the acceleration is $-x$. \begin{align} \label{Eq:I:9:14} v_x(t+\epsilon)&=v_x(t)+\epsilon a_x(t)\\[1ex] \label{Eq:I:9:15} &=v_x(t)-\epsilon x(t). \end{align} Equation (9.14) is merely kinematics; it says that a velocity changes because of the presence of acceleration. But Eq. 
(9.15) is dynamics, because it relates the acceleration to the force; it says that at this particular time for this particular problem, you can replace the acceleration by $-x(t)$. Therefore, if we know both the $x$ and $v$ at a given time, we know the acceleration, which tells us the new velocity, and we know the new position—this is how the machinery works. The velocity changes a little bit because of the force, and the position changes a little bit because of the velocity. |
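The machinery can be written out directly in code. Here is a minimal sketch of Eqs. (9.13)–(9.15), starting from the given condition $x=1$, $v_x=0$; the step size $\epsilon=0.01$ and the stopping time are choices of convenience:

```python
# Step Eq. (9.13) and Eq. (9.15) forward from x = 1, v = 0: the new
# position uses the old velocity, and the new velocity uses the old
# position, a small interval eps at a time.

eps = 0.01
x, v = 1.0, 0.0
for step in range(157):                  # 157 steps of 0.01 sec, to t = 1.57
    x, v = x + eps * v, v - eps * x      # x(t+eps), v(t+eps) from x(t), v(t)
print(x, v)                              # x near zero: a quarter oscillation
```

The simultaneous assignment matters: both new values are computed from the old $x$ and $v$, exactly as the two equations prescribe.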
|
1 | 9 | Newton’s Laws of Dynamics | 6 | Numerical solution of the equations | Now let us really solve the problem. Suppose that we take $\epsilon=0.100$ sec. After we do all the work if we find that this is not small enough we may have to go back and do it again with $\epsilon=0.010$ sec. Starting with our initial value $x(0)=1.00$, what is $x(0.1)$? It is the old position $x(0)$ plus the velocity (which is zero) times $0.10$ sec. Thus $x(0.1)$ is still $1.00$ because it has not yet started to move. But the new velocity at $0.10$ sec will be the old velocity $v(0)=0$ plus $\epsilon$ times the acceleration. The acceleration is $-x(0)=-1.00$. Thus \begin{equation*} v(0.1) =0.00-0.10\times1.00=-0.10. \end{equation*} Now at $0.20$ sec \begin{align*} x(0.2) &=x(0.1)+\epsilon v(0.1)\\[1ex] &=1.00-0.10\times0.10=0.99 \end{align*} and \begin{align*} v(0.2) &=v(0.1)+\epsilon a(0.1)\\[1ex] &=-0.10-0.10\times1.00=-0.20. \end{align*} And so, on and on and on, we can calculate the rest of the motion, and that is just what we shall do. However, for practical purposes there are some little tricks by which we can increase the accuracy. If we continued this calculation as we have started it, we would find the motion only rather crudely because $\epsilon=0.100$ sec is rather crude, and we would have to go to a very small interval, say $\epsilon=0.01$. Then to go through a reasonable total time interval would take a lot of cycles of computation. So we shall organize the work in a way that will increase the precision of our calculations, using the same coarse interval $\epsilon=0.10$ sec. This can be done if we make a subtle improvement in the technique of the analysis. Notice that the new position is the old position plus the time interval $\epsilon$ times the velocity. But the velocity when? The velocity at the beginning of the time interval is one velocity and the velocity at the end of the time interval is another velocity. Our improvement is to use the velocity halfway between. 
If we know the speed now, but the speed is changing, then we are not going to get the right answer by going at the same speed as now. We should use some speed between the “now” speed and the “then” speed at the end of the interval. The same considerations also apply to the velocity: to compute the velocity changes, we should use the acceleration midway between the two times at which the velocity is to be found. Thus the equations that we shall actually use will be something like this: the position later is equal to the position before plus $\epsilon$ times the velocity at the time in the middle of the interval. Similarly, the velocity at this halfway point is the velocity at a time $\epsilon$ before (which is in the middle of the previous interval) plus $\epsilon$ times the acceleration at the time $t$. That is, we use the equations \begin{equation} \begin{aligned} x(t+\epsilon)&=x(t)+\epsilon v(t+\epsilon/2),\\ v(t+\epsilon/2)&=v(t-\epsilon/2)+\epsilon a(t),\\ a(t)&=-x(t). \end{aligned} \label{Eq:I:9:16} \end{equation} There remains only one slight problem: what is $v(\epsilon/2)$? At the start, we are given $v(0)$, not $v(-\epsilon/2)$. To get our calculation started, we shall use a special equation, namely, $v(\epsilon/2)=v(0)+(\epsilon/2)a(0)$.

[Table 9–1. Solution of $dv_x/dt=-x$; interval: $\epsilon=0.10$ sec.]

Now we are ready to carry through our calculation. For convenience, we may arrange the work in the form of a table, with columns for the time, the position, the velocity, and the acceleration, and the in-between lines for the velocity, as shown in Table 9–1. Such a table is, of course, just a convenient way of representing the numerical values obtained from the set of equations (9.16), and in fact the equations themselves need never be written. We just fill in the various spaces in the table one by one. This table now gives us a very good idea of the motion: it starts from rest, first picks up a little upward (negative) velocity and it loses some of its distance.
The acceleration is then a little bit less but it is still gaining speed. But as it goes on it gains speed more and more slowly, until as it passes $x=0$ at about $t=1.50$ sec we can confidently predict that it will keep going, but now it will be on the other side; the position $x$ will become negative, the acceleration therefore positive. Thus the speed decreases. It is interesting to compare these numbers with the function $x=\cos t$, which is done in Fig. 9–4. The agreement is within the three significant figure accuracy of our calculation! We shall see later that $x=\cos t$ is the exact mathematical solution of our equation of motion, but it is an impressive illustration of the power of numerical analysis that such an easy calculation should give such precise results. |
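The half-step scheme of Eqs. (9.16) is just as mechanical when carried out by machine. Here is a minimal sketch in Python (the function name and the printout are our own illustration, not part of the original table):

```python
import math

def oscillator(eps=0.1, t_max=1.6):
    """Integrate x'' = -x from x(0)=1, v(0)=0 with the half-step method of Eqs. (9.16)."""
    x, t = 1.00, 0.0
    v = 0.00 + (eps / 2) * (-x)      # special starting equation: v(eps/2) = v(0) + (eps/2)a(0)
    history = [(t, x)]
    while t < t_max - 1e-9:
        x = x + eps * v              # x(t+eps) = x(t) + eps * v(t+eps/2)
        t = t + eps
        v = v + eps * (-x)           # v(t+eps/2) = v(t-eps/2) + eps * a(t), with a = -x
        history.append((t, x))
    return history

for t, x in oscillator():
    print(f"t = {t:4.2f}   x = {x:6.3f}   cos t = {math.cos(t):6.3f}")
```

Run with the coarse interval $\epsilon=0.10$ sec, the printed $x$ agrees with $\cos t$ to about three figures, just as the text claims for the hand calculation.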
|
1 | 9 | Newton’s Laws of Dynamics | 7 | Planetary motions | The above analysis is very nice for the motion of an oscillating spring, but can we analyze the motion of a planet around the sun? Let us see whether we can arrive at an approximation to an ellipse for the orbit. We shall suppose that the sun is infinitely heavy, in the sense that we shall not include its motion. Suppose a planet starts at a certain place and is moving with a certain velocity; it goes around the sun in some curve, and we shall try to analyze, by Newton’s laws of motion and his law of gravitation, what the curve is. How? At a given moment it is at some position in space. If the radial distance from the sun to this position is called $r$, then we know that there is a force directed inward which, according to the law of gravity, is equal to a constant times the product of the sun’s mass and the planet’s mass divided by the square of the distance. To analyze this further we must find out what acceleration will be produced by this force. We shall need the components of the acceleration along two directions, which we call $x$ and $y$. Thus if we specify the position of the planet at a given moment by giving $x$ and $y$ (we shall suppose that $z$ is always zero because there is no force in the $z$-direction and, if there is no initial velocity $v_z$, there will be nothing to make $z$ other than zero), the force is directed along the line joining the planet to the sun, as shown in Fig. 9–5. From this figure we see that the horizontal component of the force is related to the complete force in the same manner as the horizontal distance $x$ is to the complete hypotenuse $r$, because the two triangles are similar. Also, if $x$ is positive, $F_x$ is negative. That is, $F_x/\abs{F}=-x/r$, or $F_x=$ $-\abs{F}x/r=$ $-GMmx/r^3$. Now we use the dynamical law to find that this force component is equal to the mass of the planet times the rate of change of its velocity in the $x$-direction. 
Thus we find the following laws: \begin{equation} \begin{aligned} m(dv_x/dt)&=-GMmx/r^3,\\ m(dv_y/dt)&=-GMmy/r^3,\\ r&=\sqrt{x^2+y^2}. \end{aligned} \label{Eq:I:9:17} \end{equation} This, then, is the set of equations we must solve. Again, in order to simplify the numerical work, we shall suppose that the unit of time, or the mass of the sun, has been so adjusted (or luck is with us) that $GM\equiv1$. For our specific example we shall suppose that the initial position of the planet is at $x=0.500$ and $y=0.000$, and that the velocity is all in the $y$-direction at the start, and is of magnitude $1.630$. Now how do we make the calculation? We again make a table with columns for the time, the $x$-position, the $x$-velocity $v_x$, and the $x$-acceleration $a_x$; then, separated by a double line, three columns for position, velocity, and acceleration in the $y$-direction. In order to get the accelerations we are going to need Eq. (9.17); it tells us that the acceleration in the $x$-direction is $-x/r^3$, and the acceleration in the $y$-direction is $-y/r^3$, and that $r$ is the square root of $x^2+y^2$. Thus, given $x$ and $y$, we must do a little calculating on the side, taking the square root of the sum of the squares to find $r$ and then, to get ready to calculate the two accelerations, it is useful also to evaluate $1/r^3$. This work can be done rather easily by using a table of squares, cubes, and reciprocals: then we need only multiply $x$ by $1/r^3$, which we do on a slide rule. 
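Before tabulating anything by hand, the whole procedure can be sketched in code. This is our own illustrative Python, applying the half-step trick of the previous section to Eqs. (9.17) with $GM=1$ and the starting values just given:

```python
def orbit(eps=0.1, steps=21):
    """Integrate Eqs. (9.17), with GM = 1, by the half-step method."""
    x, y = 0.500, 0.000
    r3 = (x * x + y * y) ** 1.5          # r^3, where r = sqrt(x^2 + y^2)
    # special starting step: velocities at t = eps/2
    vx = 0.000 + (eps / 2) * (-x / r3)
    vy = 1.630 + (eps / 2) * (-y / r3)
    points = [(x, y)]
    for _ in range(steps):
        x += eps * vx                    # new position from the half-step velocity
        y += eps * vy
        r3 = (x * x + y * y) ** 1.5
        vx += eps * (-x / r3)            # new half-step velocity from a at the new position
        vy += eps * (-y / r3)
        points.append((x, y))
    return points

pts = orbit()
print(pts[1])   # (x(0.1), y(0.1)) — compare 0.480, 0.163 in the worked steps
print(pts[2])   # (x(0.2), y(0.2)) — compare 0.423, 0.313
```

Twenty-one steps of $\epsilon=0.100$ carry the planet past $t=2.1$, about halfway around the sun.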
Our calculation thus proceeds by the following steps, using time intervals $\epsilon=0.100$: Initial values at $t=0$: \begin{alignat*}{2} x(0)&=0.500&\qquad\qquad y(0)&=\phantom{+}0.000\\[.5ex] v_x(0)&=0.000&\qquad\qquad v_y(0)&=+1.630 \end{alignat*} From these we find: \begin{alignat*}{2} r(0)&=\phantom{-}0.500&\qquad 1/r^3(0)&=8.000\\[.5ex] a_x(0)&=-4.000&\qquad a_y(0)&=0.000 \end{alignat*} Thus we may calculate the velocities $v_x(0.05)$ and $v_y(0.05)$: \begin{align*} v_x(0.05) &= 0.000 - 4.000 \times 0.050 = -0.200;\\[1ex] v_y(0.05) &= 1.630 + 0.000 \times 0.050 = \phantom{-}1.630. \end{align*} Now our main calculations begin: \begin{alignat*}{2} x(0.1)&=0.500-0.20 \times 0.1&&=\phantom{-}0.480\\[.5ex] y(0.1)&=0.0+1.63 \times 0.1 &&=\phantom{-}0.163\\[.5ex] r(0.1)&=\sqrt{0.480^2+0.163^2}&&=\phantom{-}0.507\\[.5ex] 1/r^3(0.1)&=7.677 &&\\[.5ex] a_x(0.1)&=-0.480 \times 7.677 &&=-3.685\\[.5ex] a_y(0.1)&=-0.163 \times 7.677 &&=-1.250\\[.5ex] v_x(0.15)&=-0.200-3.685\times0.1 &&=-0.568\\[.5ex] v_y(0.15)&=1.630-1.250\times0.1 &&=\phantom{-}1.505\\[.5ex] x(0.2)&=0.480-0.568\times 0.1&&=\phantom{-}0.423\\[.5ex] y(0.2)&=0.163+1.505\times0.1&&=\phantom{-}0.313\\[.5ex] &\qquad\qquad\text{etc.}&& \end{alignat*} In this way we obtain the values given in Table 9–2, and in $20$ steps or so we have chased the planet halfway around the sun! In Fig. 9–6 are plotted the $x$- and $y$-coordinates given in Table 9–2. The dots represent the positions at the succession of times a tenth of a unit apart; we see that at the start the planet moves rapidly and at the end it moves slowly, and so the shape of the curve is determined. Thus we see that we really do know how to calculate the motion of planets!

[Table 9–2. Solution of $dv_x/dt=-x/r^3$, $dv_y/dt=-y/r^3$, $r=\sqrt{x^2+y^2}$; interval: $\epsilon=0.100$.]

[Fig. 9–6 annotations: orbit with $v_y=1.63$, $v_x=0$, $x=0.5$, $y=0$ at $t=0$; crossed the $x$-axis at $2.101$ sec, $\therefore$ period${}=4.20$ sec; $v_x=0$ at $2.086$ sec; crossed $x$ at $-1.022$, $\therefore$ semimajor axis${}=\dfrac{1.022+0.500}{2}=0.761$; $v_y=-0.797$; predicted time $\pi(0.761)^{3/2}=\pi(0.663)=2.082$.]

Now let us see how we can calculate the motion of Neptune, Jupiter, Uranus, or any other planet. If we have a great many planets, and let the sun move too, can we do the same thing? Of course we can. We calculate the force on a particular planet, let us say planet number $i$, which has a position $x_i,y_i,z_i$ ($i=1$ may represent the sun, $i=2$ Mercury, $i=3$ Venus, and so on). We must know the positions of all the planets. The force acting on one is due to all the other bodies which are located, let us say, at positions $x_j,y_j,z_j$. Therefore the equations are \begin{align} m_i\,\ddt{v_{ix}}{t}&= \sum_{j=1}^N-\frac{Gm_im_j(x_i-x_j)}{r_{ij}^3},\notag\\ \label{Eq:I:9:18} m_i\,\ddt{v_{iy}}{t}&= \sum_{j=1}^N-\frac{Gm_im_j( y_i- y_j )}{r_{ij}^3},\\ m_i\,\ddt{v_{iz}}{t}&= \sum_{j=1}^N-\frac{Gm_im_j( z_i- z_j )}{r_{ij}^3}.\notag \end{align} Further, we define $r_{ij}$ as the distance between the two planets $i$ and $j$; this is equal to \begin{equation} \label{Eq:I:9:19} r_{ij}=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2}. \end{equation} Also, $\sum$ means a sum over all values of $j$—all other bodies—except, of course, for $j=i$. Thus all we have to do is to make more columns, lots more columns. We need nine columns for the motions of Jupiter, nine for the motions of Saturn, and so on. Then when we have all initial positions and velocities we can calculate all the accelerations from Eq. (9.18) by first calculating all the distances, using Eq. (9.19). How long will it take to do it? If you do it at home, it will take a very long time! But in modern times we have machines which do arithmetic very rapidly; a very good computing machine may take $1$ microsecond, that is, a millionth of a second, to do an addition. To do a multiplication takes longer, say $10$ microseconds.
It may be that in one cycle of calculation, depending on the problem, we may have $30$ multiplications, or something like that, so one cycle will take $300$ microseconds. That means that we can do $3000$ cycles of computation per second. In order to get an accuracy, of, say, one part in a billion, we would need $4\times10^5$ cycles to correspond to one revolution of a planet around the sun. That corresponds to a computation time of $130$ seconds or about two minutes. Thus it takes only two minutes to follow Jupiter around the sun, with all the perturbations of all the planets correct to one part in a billion, by this method! (It turns out that the error varies about as the square of the interval $\epsilon$. If we make the interval a thousand times smaller, it is a million times more accurate. So, let us make the interval $10{,}000$ times smaller.) So, as we said, we began this chapter not knowing how to calculate even the motion of a mass on a spring. Now, armed with the tremendous power of Newton’s laws, we can not only calculate such simple motions but also, given only a machine to handle the arithmetic, even the tremendously complex motions of the planets, to as high a degree of precision as we wish! |
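The bookkeeping of Eqs. (9.18) and (9.19) — "more columns, lots more columns" — is exactly the kind of arithmetic a machine does well. A sketch of the acceleration step in Python (our own illustration; bodies and units with $G=1$ are assumed for the example):

```python
import math

def accelerations(masses, positions, G=1.0):
    """Acceleration of each body from Eq. (9.18), with m_i divided out."""
    n = len(masses)
    acc = []
    for i in range(n):
        ax = ay = az = 0.0
        xi, yi, zi = positions[i]
        for j in range(n):
            if j == i:                   # the sum excludes j = i
                continue
            xj, yj, zj = positions[j]
            dx, dy, dz = xi - xj, yi - yj, zi - zj
            rij = math.sqrt(dx * dx + dy * dy + dz * dz)   # Eq. (9.19)
            s = -G * masses[j] / rij**3
            ax += s * dx
            ay += s * dy
            az += s * dz
        acc.append((ax, ay, az))
    return acc

# e.g., a sun of mass 1 at the origin and a light planet at unit distance
a = accelerations([1.0, 0.001], [(0, 0, 0), (1, 0, 0)])
print(a[1])   # the planet is pulled toward the sun along -x
```

With these accelerations in hand, the same half-step velocity and position updates as before advance every body at once.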
|
1 | 10 | Conservation of Momentum | 1 | Newton’s Third Law | On the basis of Newton’s second law of motion, which gives the relation between the acceleration of any body and the force acting on it, any problem in mechanics can be solved in principle. For example, to determine the motion of a few particles, one can use the numerical method developed in the preceding chapter. But there are good reasons to make a further study of Newton’s laws. First, there are quite simple cases of motion which can be analyzed not only by numerical methods, but also by direct mathematical analysis. For example, although we know that the acceleration of a falling body is $32$ ft/sec², and from this fact could calculate the motion by numerical methods, it is much easier and more satisfactory to analyze the motion and find the general solution, $s=s_0+v_0t+16t^2$. In the same way, although we can work out the positions of a harmonic oscillator by numerical methods, it is also possible to show analytically that the general solution is a simple cosine function of $t$, and so it is unnecessary to go to all that arithmetical trouble when there is a simple and more accurate way to get the result. In the same manner, although the motion of one body around the sun, determined by gravitation, can be calculated point by point by the numerical methods of Chapter 9, which show the general shape of the orbit, it is nice also to get the exact shape, which analysis reveals as a perfect ellipse. Unfortunately, there are really very few problems which can be solved exactly by analysis. In the case of the harmonic oscillator, for example, if the spring force is not proportional to the displacement, but is something more complicated, one must fall back on the numerical method. Or if there are two bodies going around the sun, so that the total number of bodies is three, then analysis cannot produce a simple formula for the motion, and in practice the problem must be done numerically. 
That is the famous three-body problem, which so long challenged human powers of analysis; it is very interesting how long it took people to appreciate the fact that perhaps the powers of mathematical analysis were limited and it might be necessary to use the numerical methods. Today an enormous number of problems that cannot be done analytically are solved by numerical methods, and the old three-body problem, which was supposed to be so difficult, is solved as a matter of routine in exactly the same manner that was described in the preceding chapter, namely, by doing enough arithmetic. However, there are also situations where both methods fail: the simple problems we can do by analysis, and the moderately difficult problems by numerical, arithmetical methods, but the very complicated problems we cannot do by either method. A complicated problem is, for example, the collision of two automobiles, or even the motion of the molecules of a gas. There are countless particles in a cubic millimeter of gas, and it would be ridiculous to try to make calculations with so many variables (about $10^{17}$—a hundred million billion). Anything like the motion of the molecules or atoms of a gas or a block of iron, or the motion of the stars in a globular cluster, instead of just two or three planets going around the sun—such problems we cannot do directly, so we have to seek other means. In the situations in which we cannot follow details, we need to know some general properties, that is, general theorems or principles which are consequences of Newton’s laws. One of these is the principle of conservation of energy, which was discussed in Chapter 4. Another is the principle of conservation of momentum, the subject of this chapter. Another reason for studying mechanics further is that there are certain patterns of motion that are repeated in many different circumstances, so it is good to study these patterns in one particular circumstance. 
For example, we shall study collisions; different kinds of collisions have much in common. In the flow of fluids, it does not make much difference what the fluid is, the laws of the flow are similar. Other problems that we shall study are vibrations and oscillations and, in particular, the peculiar phenomena of mechanical waves—sound, vibrations of rods, and so on. In our discussion of Newton’s laws it was explained that these laws are a kind of program that says “Pay attention to the forces,” and that Newton told us only two things about the nature of forces. In the case of gravitation, he gave us the complete law of the force. In the case of the very complicated forces between atoms, he was not aware of the right laws for the forces; however, he discovered one rule, one general property of forces, which is expressed in his Third Law, and that is the total knowledge that Newton had about the nature of forces—the law of gravitation and this principle, but no other details. This principle is that action equals reaction. What is meant is something of this kind: Suppose we have two small bodies, say particles, and suppose that the first one exerts a force on the second one, pushing it with a certain force. Then, simultaneously, according to Newton’s Third Law, the second particle will push on the first with an equal force, in the opposite direction; furthermore, these forces effectively act in the same line. This is the hypothesis, or law, that Newton proposed, and it seems to be quite accurate, though not exact (we shall discuss the errors later). For the moment we shall take it to be true that action equals reaction. Of course, if there is a third particle, not on the same line as the other two, the law does not mean that the total force on the first one is equal to the total force on the second, since the third particle, for instance, exerts its own push on each of the other two. 
The result is that the total effect on the first two is in some other direction, and the forces on the first two particles are, in general, neither equal nor opposite. However, the forces on each particle can be resolved into parts, there being one contribution or part due to each other interacting particle. Then each pair of particles has corresponding components of mutual interaction that are equal in magnitude and opposite in direction. |
|
1 | 10 | Conservation of Momentum | 2 | Conservation of momentum | Now what are the interesting consequences of the above relationship? Suppose, for simplicity, that we have just two interacting particles, possibly of different mass, and numbered $1$ and $2$. The forces between them are equal and opposite; what are the consequences? According to Newton’s Second Law, force is the time rate of change of the momentum, so we conclude that the rate of change of momentum $p_1$ of particle $1$ is equal to minus the rate of change of momentum $p_2$ of particle $2$, or \begin{equation} \label{Eq:I:10:1} dp_1/dt=-dp_2/dt. \end{equation} Now if the rate of change is always equal and opposite, it follows that the total change in the momentum of particle $1$ is equal and opposite to the total change in the momentum of particle $2$; this means that if we add the momentum of particle $1$ to the momentum of particle $2$, the rate of change of the sum of these, due to the mutual forces (called internal forces) between particles, is zero; that is \begin{equation} \label{Eq:I:10:2} d(p_1+p_2)/dt=0. \end{equation} There is assumed to be no other force in the problem. If the rate of change of this sum is always zero, that is just another way of saying that the quantity $(p_1+p_2)$ does not change. (This quantity is also written $m_1v_1+m_2v_2$, and is called the total momentum of the two particles.) We have now obtained the result that the total momentum of the two particles does not change because of any mutual interactions between them. This statement expresses the law of conservation of momentum in that particular example. We conclude that if there is any kind of force, no matter how complicated, between two particles, and we measure or calculate $m_1v_1+m_2v_2$, that is, the sum of the two momenta, both before and after the forces act, the results should be equal, i.e., the total momentum is a constant. 
If we extend the argument to three or more interacting particles in more complicated circumstances, it is evident that so far as internal forces are concerned, the total momentum of all the particles stays constant, since an increase in momentum of one, due to another, is exactly compensated by the decrease of the second, due to the first. That is, all the internal forces will balance out, and therefore cannot change the total momentum of the particles. Then if there are no forces from the outside (external forces), there are no forces that can change the total momentum; hence the total momentum is a constant. It is worth describing what happens if there are forces that do not come from the mutual actions of the particles in question: suppose we isolate the interacting particles. If there are only mutual forces, then, as before, the total momentum of the particles does not change, no matter how complicated the forces. On the other hand, suppose there are also forces coming from the particles outside the isolated group. Any force exerted by outside bodies on inside bodies, we call an external force. We shall later demonstrate that the sum of all external forces equals the rate of change of the total momentum of all the particles inside, a very useful theorem. The conservation of the total momentum of a number of interacting particles can be expressed as \begin{equation} \label{Eq:I:10:3} m_1v_1+m_2v_2+m_3v_3+\dotsb=\text{a constant}, \end{equation} if there are no net external forces. 
Here the masses and corresponding velocities of the particles are numbered $1$, $2$, $3$, $4$, … The general statement of Newton’s Second Law for each particle, \begin{equation} \label{Eq:I:10:4} F=\ddt{}{t}(mv), \end{equation} is true specifically for the components of force and momentum in any given direction; thus the $x$-component of the force on a particle is equal to the $x$-component of the rate of change of momentum of that particle, or \begin{equation} \label{Eq:I:10:5} F_x=\ddt{}{t}(mv_x), \end{equation} and similarly for the $y$- and $z$-directions. Therefore Eq. (10.3) is really three equations, one for each direction. In addition to the law of conservation of momentum, there is another interesting consequence of Newton’s Second Law, to be proved later, but merely stated now. This principle is that the laws of physics will look the same whether we are standing still or moving with a uniform speed in a straight line. For example, a child bouncing a ball in an airplane finds that the ball bounces the same as though he were bouncing it on the ground. Even though the airplane is moving with a very high velocity, unless it changes its velocity, the laws look the same to the child as they do when the airplane is standing still. This is the so-called relativity principle. As we use it here we shall call it “Galilean relativity” to distinguish it from the more careful analysis made by Einstein, which we shall study later. We have just derived the law of conservation of momentum from Newton’s laws, and we could go on from here to find the special laws that describe impacts and collisions. But for the sake of variety, and also as an illustration of a kind of reasoning that can be used in physics in other circumstances where, for example, one might not know Newton’s laws and might take a different approach, we shall discuss the laws of impacts and collisions from a completely different point of view. 
We shall base our discussion on the principle of Galilean relativity, stated above, and shall end up with the law of conservation of momentum. We shall start by assuming that nature would look the same if we run along at a certain speed and watch it as it would if we were standing still. Before discussing collisions in which two bodies collide and stick together, or come together and bounce apart, we shall first consider two bodies that are held together by a spring or something else, and are then suddenly released and pushed by the spring or perhaps by a little explosion. Further, we shall consider motion in only one direction. First, let us suppose that the two objects are exactly the same, are nice symmetrical objects, and then we have a little explosion between them. After the explosion, one of the bodies will be moving, let us say toward the right, with a velocity $v$. Then it appears reasonable that the other body is moving toward the left with a velocity $v$, because if the objects are alike there is no reason for right or left to be preferred and so the bodies would do something that is symmetrical. This is an illustration of a kind of thinking that is very useful in many problems but would not be brought out if we just started with the formulas. The first result from our experiment is that equal objects will have equal speed, but now suppose that we have two objects made of different materials, say copper and aluminum, and we make the two masses equal. We shall now suppose that if we do the experiment with two masses that are equal, even though the objects are not identical, the velocities will be equal. Someone might object: “But you know, you could do it backwards, you did not have to suppose that. 
You could define equal masses to mean two masses that acquire equal velocities in this experiment.” We follow that suggestion and make a little explosion between the copper and a very large piece of aluminum, so heavy that the copper flies out and the aluminum hardly budges. That is too much aluminum, so we reduce the amount until there is just a very tiny piece, then when we make the explosion the aluminum goes flying away, and the copper hardly budges. That is not enough aluminum. Evidently there is some right amount in between; so we keep adjusting the amount until the velocities come out equal. Very well then—let us turn it around, and say that when the velocities are equal, the masses are equal. This appears to be just a definition, and it seems remarkable that we can transform physical laws into mere definitions. Nevertheless, there are some physical laws involved, and if we accept this definition of equal masses, we immediately find one of the laws, as follows. Suppose we know from the foregoing experiment that two pieces of matter, $A$ and $B$ (of copper and aluminum), have equal masses, and we compare a third body, say a piece of gold, with the copper in the same manner as above, making sure that its mass is equal to the mass of the copper. If we now make the experiment between the aluminum and the gold, there is nothing in logic that says these masses must be equal; however, the experiment shows that they actually are. So now, by experiment, we have found a new law. A statement of this law might be: If two masses are each equal to a third mass (as determined by equal velocities in this experiment), then they are equal to each other. (This statement does not follow at all from a similar statement used as a postulate regarding mathematical quantities.) From this example we can see how quickly we start to infer things if we are careless. 
It is not just a definition to say the masses are equal when the velocities are equal, because to say the masses are equal is to imply the mathematical laws of equality, which in turn makes a prediction about an experiment. As a second example, suppose that $A$ and $B$ are found to be equal by doing the experiment with one strength of explosion, which gives a certain velocity; if we then use a stronger explosion, will it be true or not true that the velocities now obtained are equal? Again, in logic there is nothing that can decide this question, but experiment shows that it is true. So, here is another law, which might be stated: If two bodies have equal masses, as measured by equal velocities at one velocity, they will have equal masses when measured at another velocity. From these examples we see that what appeared to be only a definition really involved some laws of physics. In the development that follows we shall assume it is true that equal masses have equal and opposite velocities when an explosion occurs between them. We shall make another assumption in the inverse case: If two identical objects, moving in opposite directions with equal velocities, collide and stick together by some kind of glue, then which way will they be moving after the collision? This is again a symmetrical situation, with no preference between right and left, so we assume that they stand still. We shall also suppose that any two objects of equal mass, even if the objects are made of different materials, which collide and stick together, when moving with the same velocity in opposite directions will come to rest after the collision. |
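The bookkeeping behind Eq. (10.2) can also be checked numerically. In this Python sketch (the particular masses and the sinusoidal mutual force are invented for illustration), the equal-and-opposite rule keeps $m_1v_1+m_2v_2$ fixed no matter what the force does to the individual velocities:

```python
import math

def mutual_push(m1, m2, v1, v2, force, eps=0.01, steps=1000):
    """Step two velocities forward under equal and opposite mutual forces."""
    for k in range(steps):
        f = force(k * eps)        # force of particle 2 on particle 1 at this moment
        v1 += eps * f / m1        # dp1/dt = +F
        v2 -= eps * f / m2        # dp2/dt = -F  (Newton's Third Law)
    return v1, v2

m1, m2 = 1.0, 3.0
v1, v2 = mutual_push(m1, m2, 2.0, -1.0, force=lambda t: math.sin(5 * t))
print(m1 * v1 + m2 * v2)   # compare the initial total, 1.0*2.0 + 3.0*(-1.0) = -1.0
```

Each step adds $\epsilon F$ to $p_1$ and subtracts the same $\epsilon F$ from $p_2$, so the sum cannot change — which is just Eq. (10.2) in discrete form.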
|
1 | 10 | Conservation of Momentum | 3 | Momentum conserved! | We can verify the above assumptions experimentally: first, that if two stationary objects of equal mass are separated by an explosion they will move apart with the same speed, and second, if two objects of equal mass, coming together with the same speed, collide and stick together they will stop. This we can do by means of a marvelous invention called an air trough,1 which gets rid of friction, the thing which continually bothered Galileo (Fig. 10–1). He could not do experiments by sliding things because they do not slide freely, but, by adding a magic touch, we can today get rid of friction. Our objects will slide without difficulty, on and on at a constant velocity, as advertised by Galileo. This is done by supporting the objects on air. Because air has very low friction, an object glides along with practically constant velocity when there is no applied force. First, we use two glide blocks which have been made carefully to have the same weight, or mass (their weight was measured really, but we know that this weight is proportional to the mass), and we place a small explosive cap in a closed cylinder between the two blocks (Fig. 10–2). We shall start the blocks from rest at the center point of the track and force them apart by exploding the cap with an electric spark. What should happen? If the speeds are equal when they fly apart, they should arrive at the ends of the trough at the same time. On reaching the ends they will both bounce back with practically opposite velocity, and will come together and stop at the center where they started. It is a good test; when it is actually done the result is just as we have described (Fig. 10–3). Now the next thing we would like to figure out is what happens in a less simple situation. Suppose we have two equal masses, one moving with velocity $v$ and the other standing still, and they collide and stick; what is going to happen? 
There is a mass $2m$ altogether when we are finished, drifting with an unknown velocity. What velocity? That is the problem. To find the answer, we make the assumption that if we ride along in a car, physics will look the same as if we are standing still. We start with the knowledge that two equal masses, moving in opposite directions with equal speeds $v$, will stop dead when they collide. Now suppose that while this happens, we are riding by in an automobile, at a velocity $-v$. Then what does it look like? Since we are riding along with one of the two masses which are coming together, that one appears to us to have zero velocity. The other mass, however, going the other way with velocity $v$, will appear to be coming toward us at a velocity $2v$ (Fig. 10–4). Finally, the combined masses after collision will seem to be passing by with velocity $v$. We therefore conclude that an object with velocity $2v$, hitting an equal one at rest, will end up with velocity $v$, or what is mathematically exactly the same, an object with velocity $v$ hitting and sticking to one at rest will produce an object moving with velocity $v/2$. Note that if we multiply the mass and the velocity beforehand and add them together, $mv+0$, we get the same answer as when we multiply the mass and the velocity of everything afterwards, $2m$ times $v/2$. So that tells us what happens when a mass of velocity $v$ hits one standing still. In exactly the same manner we can deduce what happens when equal objects having any two velocities hit each other. Suppose we have two equal bodies with velocities $v_1$ and $v_2$, respectively, which collide and stick together. What is their velocity $v$ after the collision? Again we ride by in an automobile, say at velocity $v_2$, so that one body appears to be at rest. The other then appears to have a velocity $v_1-v_2$, and we have the same case that we had before. When it is all finished they will be moving at $\tfrac{1}{2}(v_1-v_2)$ with respect to the car. 
What then is the actual speed on the ground? It is $v=\tfrac{1}{2}(v_1-v_2)+v_2$ or $\tfrac{1}{2}(v_1+v_2)$ (Fig. 10–5). Again we note that \begin{equation} \label{Eq:I:10:6} mv_1+mv_2=2m(v_1+v_2)/2. \end{equation} Thus, using this principle, we can analyze any kind of collision in which two bodies of equal mass hit each other and stick. In fact, although we have worked only in one dimension, we can find out a great deal about much more complicated collisions by imagining that we are riding by in a car in some oblique direction. The principle is the same, but the details get somewhat complicated. In order to test experimentally whether an object moving with velocity $v$, colliding with an equal one at rest, forms an object moving with velocity $v/2$, we may perform the following experiment with our air-trough apparatus. We place in the trough three equally massive objects, two of which are initially joined together with our explosive cylinder device, the third being very near to but slightly separated from these and provided with a sticky bumper so that it will stick to another object which hits it. Now, a moment after the explosion, we have two objects of mass $m$ moving with equal and opposite velocities $v$. A moment after that, one of these collides with the third object and makes an object of mass $2m$ moving, so we believe, with velocity $v/2$. How do we test whether it is really $v/2$? By arranging the initial positions of the masses on the trough so that the distances to the ends are not equal, but are in the ratio $2:1$. Thus our first mass, which continues to move with velocity $v$, should cover twice as much distance in a given time as the two which are stuck together (allowing for the small distance travelled by the second object before it collided with the third). The mass $m$ and the mass $2m$ should reach the ends at the same time, and when we try it, we find that they do (Fig. 10–6). 
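The frame-shifting argument can be turned into a small numerical sketch (Python here, purely illustrative). The only physical inputs are the symmetric fact that equal masses with equal and opposite velocities stop dead, and the principle that physics looks the same from a uniformly moving car:

```python
def stick_equal_masses(v1, v2):
    """Final velocity when two equal masses with velocities v1 and v2
    collide and stick, derived only from the symmetric case."""
    # Ride in a car moving at v2: one body then appears at rest and the
    # other at v1 - v2, the case already solved, ending at (v1 - v2)/2.
    v_in_car = (v1 - v2) / 2
    # Transform back to the ground by adding the car's velocity.
    return v_in_car + v2

# The result agrees with (v1 + v2)/2, and the sum of mass times velocity
# is unchanged: m*v1 + m*v2 = 2m * (v1 + v2)/2.
```

The special case `stick_equal_masses(v, 0.0)` reproduces the air-trough result `v/2`.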
The next problem that we want to work out is what happens if we have two different masses. Let us take a mass $m$ and a mass $2m$ and apply our explosive interaction. What will happen then? If, as a result of the explosion, $m$ moves with velocity $v$, with what velocity does $2m$ move? The experiment we have just done may be repeated with zero separation between the second and third masses, and when we try it we get the same result, namely, the reacting masses $m$ and $2m$ attain velocities $-v$ and $v/2$. Thus the direct reaction between $m$ and $2m$ gives the same result as the symmetrical reaction between $m$ and $m$, followed by a collision between $m$ and a third mass $m$ in which they stick together. Furthermore, we find that the masses $m$ and $2m$ returning from the ends of the trough, with their velocities (nearly) exactly reversed, stop dead if they stick together. Now the next question we may ask is this. What will happen if a mass $m$ with velocity $v$, say, hits and sticks to another mass $2m$ at rest? This is very easy to answer using our principle of Galilean relativity, for we simply watch the collision which we have just described from a car moving with velocity $-v/2$ (Fig. 10–7). From the car, the velocities are \begin{equation*} v_1'=v-v(\text{car})=v+v/2=3v/2 \end{equation*} and \begin{equation*} v_2'=-v/2-v(\text{car})=-v/2+v/2=0. \end{equation*} After the collision, the mass $3m$ appears to us to be moving with velocity $v/2$. Thus we have the answer, i.e., the ratio of velocities before and after collision is $3$ to $1$: if an object of mass $m$ collides with a stationary object of mass $2m$, then the whole thing moves off, stuck together, with a velocity $1/3$ as much. The general rule again is that the sum of the products of the masses and the velocities stays the same: $mv+0$ equals $3m$ times $v/3$, so we are gradually building up the theorem of the conservation of momentum, piece by piece. Now we have one against two. 
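The one-against-two deduction can be sketched the same way. The assumed inputs are the ones established above: in the zero-momentum situation, mass $m$ at speed $u$ and mass $2m$ at speed $u/2$ (oppositely directed) stick and stop dead, and we may view that collision from any uniformly moving car:

```python
def m_hits_2m_at_rest(v):
    """Final velocity of the stuck mass 3m after m, moving at v,
    hits 2m at rest; derived by viewing the known zero-momentum
    collision (m at u, 2m at -u/2, ending dead) from a moving car."""
    u = 2 * v / 3    # choose u so that, seen from the car, m moves at v
    car = -u / 2     # the car velocity that brings 2m to rest
    # From the car: m appears at u - car = 3u/2 = v, 2m appears at rest,
    # and the final object (at rest on the ground) appears at -car = u/2 = v/3.
    return -car
```

The 3:1 ratio of velocities before and after, and the momentum check $mv = 3m\cdot v/3$, both fall out.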
Using the same arguments, we can predict the result of one against three, two against three, etc. The case of two against three, starting from rest, is shown in Fig. 10–8. In every case we find that the mass of the first object times its velocity, plus the mass of the second object times its velocity, is equal to the total mass of the final object times its velocity. These are all examples, then, of the conservation of momentum. Starting from simple, symmetrical cases, we have demonstrated the law for more complex cases. We could, in fact, do it for any rational mass ratio, and since every ratio is exceedingly close to a rational ratio, we can handle every ratio as precisely as we wish. |
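All of these cases are instances of one bookkeeping rule, which can be written as a single function (an editorial sketch in Python, not from the text):

```python
def stick(m1, v1, m2, v2):
    """Velocity after two bodies collide and stick together: the total
    momentum, m1*v1 + m2*v2, divided by the total mass."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# The special cases built up in the text are recovered:
#   equal masses, one at rest  -> v/2
#   mass m hitting 2m at rest  -> v/3
# and any rational mass ratio, e.g. two against three, follows the same rule.
```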
|
1 | 10 | Conservation of Momentum | 4 | Momentum and energy | All the foregoing examples are simple cases where the bodies collide and stick together, or were initially stuck together and later separated by an explosion. However, there are situations in which the bodies do not cohere, as, for example, two bodies of equal mass which collide with equal speeds and then rebound. For a brief moment they are in contact and both are compressed. At the instant of maximum compression they both have zero velocity and energy is stored in the elastic bodies, as in a compressed spring. This energy is derived from the kinetic energy the bodies had before the collision, which becomes zero at the instant their velocity is zero. The loss of kinetic energy is only momentary, however. The compressed condition is analogous to the cap that releases energy in an explosion. The bodies are immediately decompressed in a kind of explosion, and fly apart again; but we already know that case—the bodies fly apart with equal speeds. However, this speed of rebound is less, in general, than the initial speed, because not all the energy is available for the explosion, depending on the material. If the material is putty no kinetic energy is recovered, but if it is something more rigid, some kinetic energy is usually regained. In the collision the rest of the kinetic energy is transformed into heat and vibrational energy—the bodies are hot and vibrating. The vibrational energy also is soon transformed into heat. It is possible to make the colliding bodies from highly elastic materials, such as steel, with carefully designed spring bumpers, so that the collision generates very little heat and vibration. In these circumstances the velocities of rebound are practically equal to the initial velocities; such a collision is called elastic. That the speeds before and after an elastic collision are equal is not a matter of conservation of momentum, but a matter of conservation of kinetic energy. 
That the velocities of the bodies rebounding after a symmetrical collision are equal to and opposite each other, however, is a matter of conservation of momentum. We might similarly analyze collisions between bodies of different masses, different initial velocities, and various degrees of elasticity, and determine the final velocities and the loss of kinetic energy, but we shall not go into the details of these processes. Elastic collisions are especially interesting for systems that have no internal “gears, wheels, or parts.” Then when there is a collision there is nowhere for the energy to be impounded, because the objects that move apart are in the same condition as when they collided. Therefore, between very elementary objects, the collisions are always elastic or very nearly elastic. For instance, the collisions between atoms or molecules in a gas are said to be perfectly elastic. Although this is an excellent approximation, even such collisions are not perfectly elastic; otherwise one could not understand how energy in the form of light or heat radiation could come out of a gas. Once in a while, in a gas collision, a low-energy infrared ray is emitted, but this occurrence is very rare and the energy emitted is very small. So, for most purposes, collisions of molecules in gases are considered to be perfectly elastic. As an interesting example, let us consider an elastic collision between two objects of equal mass. If they come together with the same speed, they would come apart at that same speed, by symmetry. But now look at this in another circumstance, in which one of them is moving with velocity $v$ and the other one is at rest. What happens? We have been through this before. 
We watch the symmetrical collision from a car moving along with one of the objects, and we find that if a stationary body is struck elastically by another body of exactly the same mass, the moving body stops, and the one that was standing still now moves away with the same speed that the other one had; the bodies simply exchange velocities. This behavior can easily be demonstrated with a suitable impact apparatus. More generally, if both bodies are moving, with different velocities, they simply exchange velocity at impact. Another example of an almost elastic interaction is magnetism. If we arrange a pair of U-shaped magnets in our glide blocks, so that they repel each other, when one drifts quietly up to the other, it pushes it away and stands perfectly still, and now the other goes along, frictionlessly. The principle of conservation of momentum is very useful, because it enables us to solve many problems without knowing the details. We did not know the details of the gas motions in the cap explosion, yet we could predict the velocities with which the bodies came apart, for example. Another interesting example is rocket propulsion. A rocket of large mass, $M$, ejects a small piece, of mass $m$, with a terrific velocity $V$ relative to the rocket. After this the rocket, if it were originally standing still, will be moving with a small velocity, $v$. Using the principle of conservation of momentum, we can calculate this velocity to be \begin{equation*} v=\frac{m}{M}\cdot V. \end{equation*} So long as material is being ejected, the rocket continues to pick up speed. Rocket propulsion is essentially the same as the recoil of a gun: there is no need for any air to push against. |
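The recoil formula extends step by step: each small ejection conserves momentum, so the rocket keeps picking up speed. A minimal sketch (the stepwise analysis here is an editorial illustration under the stated assumption about the exhaust speed, not taken from the text):

```python
def rocket_speed(M0, dm, n_steps, V):
    """Speed after n_steps ejections of mass dm, each thrown backward at
    speed V relative to the rocket, starting from rest.
    Momentum in each step: M*s = (M - dm)*s' + dm*(s' - V),
    so s' = s + dm*V/M."""
    speed, M = 0.0, M0
    for _ in range(n_steps):
        speed += dm * V / M
        M -= dm
    return speed

# A single small ejection reproduces v = (m/M) * V; further ejections
# keep increasing the speed as long as material is being thrown out.
```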
|
1 | 10 | Conservation of Momentum | 5 | Relativistic momentum | In modern times the law of conservation of momentum has undergone certain modifications. However, the law is still true today, the modifications being mainly in the definitions of things. In the theory of relativity it turns out that we do have conservation of momentum; the particles have mass and the momentum is still given by $mv$, the mass times the velocity, but the mass changes with the velocity, hence the momentum also changes. The mass varies with velocity according to the law \begin{equation} \label{Eq:I:10:7} m=\frac{m_0}{\sqrt{1-v^2/c^2}}, \end{equation} where $m_0$ is the mass of the body at rest and $c$ is the speed of light. It is easy to see from the formula that there is negligible difference between $m$ and $m_0$ unless $v$ is very large, and that for ordinary velocities the expression for momentum reduces to the old formula. The components of momentum for a single particle are written as \begin{equation} \label{Eq:I:10:8} p_x=\frac{m_0v_x}{\sqrt{1-v^2/c^2}},\quad p_y=\frac{m_0v_y}{\sqrt{1-v^2/c^2}},\quad p_z=\frac{m_0v_z}{\sqrt{1-v^2/c^2}}, \end{equation}
where $v^2=v_x^2+v_y^2+v_z^2$. If the $x$-components are summed over all the interacting particles, both before and after a collision, the sums are equal; that is, momentum is conserved in the $x$-direction. The same holds true in any direction. In Chapter 4 we saw that the law of conservation of energy is not valid unless we recognize that energy appears in different forms, electrical energy, mechanical energy, radiant energy, heat energy, and so on. In some of these cases, heat energy for example, the energy might be said to be “hidden.” This example might suggest the question, “Are there also hidden forms of momentum—perhaps heat momentum?” The answer is that it is very hard to hide momentum for the following reasons. The random motions of the atoms of a body furnish a measure of heat energy, if the squares of the velocities are summed. This sum will be a positive result, having no directional character. The heat is there, whether or not the body moves as a whole, and conservation of energy in the form of heat is not very obvious. On the other hand, if one sums the velocities, which have direction, and finds a result that is not zero, that means that there is a drift of the entire body in some particular direction, and such a gross momentum is readily observed. Thus there is no random internal lost momentum, because the body has net momentum only when it moves as a whole. Therefore momentum, as a mechanical quantity, is difficult to hide. Nevertheless, momentum can be hidden—in the electromagnetic field, for example. This case is another effect of relativity. One of the propositions of Newton was that interactions at a distance are instantaneous. 
where $v^2=v_x^2+v_y^2+v_z^2$. If the $x$-components are summed over all the interacting particles, both before and after a collision, the sums are equal; that is, momentum is conserved in the $x$-direction. The same holds true in any direction. In Chapter 4 we saw that the law of conservation of energy is not valid unless we recognize that energy appears in different forms, electrical energy, mechanical energy, radiant energy, heat energy, and so on. In some of these cases, heat energy for example, the energy might be said to be “hidden.” This example might suggest the question, “Are there also hidden forms of momentum—perhaps heat momentum?” The answer is that it is very hard to hide momentum for the following reasons. The random motions of the atoms of a body furnish a measure of heat energy, if the squares of the velocities are summed. This sum will be a positive result, having no directional character. The heat is there, whether or not the body moves as a whole, and conservation of energy in the form of heat is not very obvious. On the other hand, if one sums the velocities, which have direction, and finds a result that is not zero, that means that there is a drift of the entire body in some particular direction, and such a gross momentum is readily observed. Thus there is no random internal lost momentum, because the body has net momentum only when it moves as a whole. Therefore momentum, as a mechanical quantity, is difficult to hide. Nevertheless, momentum can be hidden—in the electromagnetic field, for example. This case is another effect of relativity. One of the propositions of Newton was that interactions at a distance are instantaneous. 
It turns out that such is not the case; in situations involving electrical forces, for instance, if an electrical charge at one location is suddenly moved, the effects on another charge, at another place, do not appear instantaneously—there is a little delay. In those circumstances, even if the forces are equal the momentum will not check out; there will be a short time during which there will be trouble, because for a while the first charge will feel a certain reaction force, say, and will pick up some momentum, but the second charge has felt nothing and has not yet changed its momentum. It takes time for the influence to cross the intervening distance, which it does at $186{,}000$ miles a second. In that tiny time the momentum of the particles is not conserved. Of course after the second charge has felt the effect of the first one and all is quieted down, the momentum equation will check out all right, but during that small interval momentum is not conserved. We represent this by saying that during this interval there is another kind of momentum besides that of the particle, $mv$, and that is momentum in the electromagnetic field. If we add the field momentum to the momentum of the particles, then momentum is conserved at every moment. The fact that the electromagnetic field can possess momentum and energy makes that field very real, and so, for better understanding, the original idea that there are just the forces between particles has to be modified to the idea that a particle makes a field, and a field acts on another particle, and the field itself has such familiar properties as energy content and momentum, just as particles can have. 
To take another example: an electromagnetic field has waves, which we call light; it turns out that light also carries momentum with it, so when light impinges on an object it carries in a certain amount of momentum per second; this is equivalent to a force, because if the illuminated object is picking up a certain amount of momentum per second, its momentum is changing and the situation is exactly the same as if there were a force on it. Light can exert pressure by bombarding an object; this pressure is very small, but with sufficiently delicate apparatus it is measurable. Now in quantum mechanics it turns out that momentum is a different thing—it is no longer $mv$. It is hard to define exactly what is meant by the velocity of a particle, but momentum still exists. In quantum mechanics the difference is that when the particles are represented as particles, the momentum is still $mv$, but when the particles are represented as waves, the momentum is measured by the number of waves per centimeter: the greater this number of waves, the greater the momentum. In spite of the differences, the law of conservation of momentum holds also in quantum mechanics. Even though the law $F=ma$ is false, and all the derivations of Newton were wrong for the conservation of momentum, in quantum mechanics, nevertheless, in the end, that particular law maintains itself! |
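The size of the relativistic correction in Eq. (10.7) is easy to evaluate numerically. A brief illustrative sketch (the sample speeds are arbitrary choices):

```python
import math

C = 2.998e8  # speed of light, meters per second

def relativistic_momentum(m0, v):
    """Momentum m0*v / sqrt(1 - v^2/c^2), per Eqs. (10.7)-(10.8)."""
    return m0 * v / math.sqrt(1.0 - (v / C) ** 2)

# For a 1-kg body at 300 m/s (a fast airplane) the correction to m0*v
# is about one part in 10^12 -- utterly negligible at ordinary speeds.
p_slow = relativistic_momentum(1.0, 300.0)

# At 0.9c, however, the momentum is more than twice the Newtonian value.
p_fast = relativistic_momentum(1.0, 0.9 * C)
```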
|
1 | 11 | Vectors | 1 | Symmetry in physics | In this chapter we introduce a subject that is technically known in physics as symmetry in physical law. The word “symmetry” is used here with a special meaning, and therefore needs to be defined. When is a thing symmetrical—how can we define it? When we have a picture that is symmetrical, one side is somehow the same as the other side. Professor Hermann Weyl has given this definition of symmetry: a thing is symmetrical if one can subject it to a certain operation and it appears exactly the same after the operation. For instance, if we look at a silhouette of a vase that is left-and-right symmetrical, then turn it $180^\circ$ around the vertical axis, it looks the same. We shall adopt the definition of symmetry in Weyl’s more general form, and in that form we shall discuss symmetry of physical laws. Suppose we build a complex machine in a certain place, with a lot of complicated interactions, and balls bouncing around with forces between them, and so on. Now suppose we build exactly the same kind of equipment at some other place, matching part by part, with the same dimensions and the same orientation, everything the same only displaced laterally by some distance. Then, if we start the two machines in the same initial circumstances, in exact correspondence, we ask: will one machine behave exactly the same as the other? Will it follow all the motions in exact parallelism? Of course the answer may well be no, because if we choose the wrong place for our machine it might be inside a wall and interferences from the wall would make the machine not work. All of our ideas in physics require a certain amount of common sense in their application; they are not purely mathematical or abstract ideas. We have to understand what we mean when we say that the phenomena are the same when we move the apparatus to a new position. 
We mean that we move everything that we believe is relevant; if the phenomenon is not the same, we suggest that something relevant has not been moved, and we proceed to look for it. If we never find it, then we claim that the laws of physics do not have this symmetry. On the other hand, we may find it—we expect to find it—if the laws of physics do have this symmetry; looking around, we may discover, for instance, that the wall is pushing on the apparatus. The basic question is, if we define things well enough, if all the essential forces are included inside the apparatus, if all the relevant parts are moved from one place to another, will the laws be the same? Will the machinery work the same way? It is clear that what we want to do is to move all the equipment and essential influences, but not everything in the world—planets, stars, and all—for if we do that, we have the same phenomenon again for the trivial reason that we are right back where we started. No, we cannot move everything. But it turns out in practice that with a certain amount of intelligence about what to move, the machinery will work. In other words, if we do not go inside a wall, if we know the origin of the outside forces, and arrange that those are moved too, then the machinery will work the same in one location as in another. |
|
1 | 11 | Vectors | 2 | Translations | We shall limit our analysis to just mechanics, for which we now have sufficient knowledge. In previous chapters we have seen that the laws of mechanics can be summarized by a set of three equations for each particle: \begin{equation} \label{Eq:I:11:1} m(d^2x/dt^2)=F_x,\quad m(d^2y/dt^2)=F_y,\quad m(d^2z/dt^2)=F_z. \end{equation}
Now this means that there exists a way to measure $x$, $y$, and $z$ on three perpendicular axes, and the forces along those directions, such that these laws are true. These must be measured from some origin, but where do we put the origin? All that Newton would tell us at first is that there is some place that we can measure from, perhaps the center of the universe, such that these laws are correct. But we can show immediately that we can never find the center, because if we use some other origin it would make no difference. In other words, suppose that there are two people—Joe, who has an origin in one place, and Moe, who has a parallel system whose origin is somewhere else (Fig. 11–1). Now when Joe measures the location of the point in space, he finds it at $x$, $y$, and $z$ (we shall usually leave $z$ out because it is too confusing to draw in a picture). Moe, on the other hand, when measuring the same point, will obtain a different $x$ (in order to distinguish it, we will call it $x'$), and in principle a different $y$, although in our example they are numerically equal. So we have \begin{equation} \label{Eq:I:11:2} x'=x-a,\quad y'=y,\quad z'=z. \end{equation}
Now in order to complete our analysis we must know what Moe would obtain for the forces. The force is supposed to act along some line, and by the force in the $x$-direction we mean the part of the total which is in the $x$-direction, which is the magnitude of the force times the cosine of its angle with the $x$-axis. Now we see that Moe would use exactly the same projection as Joe would use, so we have a set of equations \begin{equation} \label{Eq:I:11:3} F_{x'}=F_x,\quad F_{y'}=F_y,\quad F_{z'}=F_z. \end{equation}
These would be the relationships between quantities as seen by Joe and Moe. The question is, if Joe knows Newton’s laws, and if Moe tries to write down Newton’s laws, will they also be correct for him? Does it make any difference from which origin we measure the points? In other words, assuming that equations (11.1) are true, and the Eqs. (11.2) and (11.3) give the relationship of the measurements, is it or is it not true that \begin{equation} \begin{alignedat}{4} &(\text{a})\quad&&m(d^2x'&&/dt^2)=F_{x'}&&,\\[.5ex] &(\text{b})\quad&&m(d^2y'&&/dt^2)=F_{y'}&&,\\[.5ex] &(\text{c})\quad&&m(d^2z'&&/dt^2)=F_{z'}&&\;? \end{alignedat} \label{Eq:I:11:4} \end{equation} In order to test these equations we shall differentiate the formula for $x'$ twice. First of all \begin{equation*} \ddt{x'}{t}=\ddt{}{t}(x-a)=\ddt{x}{t}-\ddt{a}{t}. \end{equation*} Now we shall assume that Moe’s origin is fixed (not moving) relative to Joe’s; therefore $a$ is a constant and $da/dt=0$, so we find that \begin{equation*} dx'/dt=dx/dt \end{equation*} and therefore \begin{equation*} d^2x'/dt^2=d^2x/dt^2; \end{equation*} therefore we know that Eq. (11.4a) becomes \begin{equation*} m(d^2x/dt^2)=F_{x'}. \end{equation*} (We also suppose that the masses measured by Joe and Moe are equal.) Thus the acceleration times the mass is the same as the other fellow’s. We have also found the formula for $F_{x'}$, for, substituting from Eq. (11.1), we find that \begin{equation*} F_{x'}=F_x. \end{equation*} Therefore the laws as seen by Moe appear the same; he can write Newton’s laws too, with different coordinates, and they will still be right. That means that there is no unique way to define the origin of the world, because the laws will appear the same, from whatever position they are observed. 
This is also true: if there is a piece of equipment in one place with a certain kind of machinery in it, the same equipment in another place will behave in the same way. Why? Because one machine, when analyzed by Moe, has exactly the same equations as the other one, analyzed by Joe. Since the equations are the same, the phenomena appear the same. So the proof that an apparatus in a new position behaves the same as it did in the old position is the same as the proof that the equations when displaced in space reproduce themselves. Therefore we say that the laws of physics are symmetrical for translational displacements, symmetrical in the sense that the laws do not change when we make a translation of our coordinates. Of course it is quite obvious intuitively that this is true, but it is interesting and entertaining to discuss the mathematics of it. |
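The heart of the proof, that $d^2x'/dt^2 = d^2x/dt^2$ whenever $x' = x - a$ with $a$ constant, can be checked by finite differences. A sketch (the particular trajectory is a made-up example):

```python
def second_derivative(f, t, h=1e-3):
    """Central-difference estimate of the second derivative of f at t."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

def x_joe(t):
    """Some trajectory as Joe measures it (hypothetical numbers)."""
    return 2.0 + 3.0 * t + 0.5 * 9.8 * t**2

A = 7.0  # constant displacement of Moe's origin relative to Joe's

def x_moe(t):
    """The same point in Moe's coordinates: x' = x - a."""
    return x_joe(t) - A

# The two accelerations agree, so m * (d^2x'/dt^2) = F_x' reads exactly
# like Joe's equation; the constant a drops out on differentiation.
```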
|
1 | 11 | Vectors | 3 | Rotations | The above is the first of a series of ever more complicated propositions concerning the symmetry of a physical law. The next proposition is that it should make no difference in which direction we choose the axes. In other words, if we build a piece of equipment in some place and watch it operate, and nearby we build the same kind of apparatus but put it up on an angle, will it operate in the same way? Obviously it will not if it is a grandfather clock, for example! If a pendulum clock stands upright, it works fine, but if it is tilted the pendulum falls against the side of the case and nothing happens. The theorem is then false in the case of the pendulum clock, unless we include the earth, which is pulling on the pendulum. Therefore we can make a prediction about pendulum clocks if we believe in the symmetry of physical law for rotation: something else is involved in the operation of a pendulum clock besides the machinery of the clock, something outside it that we should look for. We may also predict that pendulum clocks will not work the same way when located in different places relative to this mysterious source of asymmetry, perhaps the earth. Indeed, we know that a pendulum clock up in an artificial satellite, for example, would not tick either, because there is no effective force, and on Mars it would go at a different rate. Pendulum clocks do involve something more than just the machinery inside, they involve something on the outside. Once we recognize this factor, we see that we must turn the earth along with the apparatus. Of course we do not have to worry about that, it is easy to do; one simply waits a moment or two and the earth turns; then the pendulum clock ticks again in the new position the same as it did before. While we are rotating in space our angles are always changing, absolutely; this change does not seem to bother us very much, for in the new position we seem to be in the same condition as in the old. 
This has a certain tendency to confuse one, because it is true that in the new turned position the laws are the same as in the unturned position, but it is not true that as we turn a thing it follows the same laws as it does when we are not turning it. If we perform sufficiently delicate experiments, we can tell that the earth is rotating, but not that it had rotated. In other words, we cannot locate its angular position, but we can tell that it is changing. Now we may discuss the effects of angular orientation upon physical laws. Let us find out whether the same game with Joe and Moe works again. This time, to avoid needless complication, we shall suppose that Joe and Moe use the same origin (we have already shown that the axes can be moved by translation to another place). Assume that Moe’s axes have rotated relative to Joe’s by an angle $\theta$. The two coordinate systems are shown in Fig. 11–2, which is restricted to two dimensions. Consider any point $P$ having coordinates $(x,y)$ in Joe’s system and $(x',y')$ in Moe’s system. We shall begin, as in the previous case, by expressing the coordinates $x'$ and $y'$ in terms of $x$, $y$, and $\theta$. To do so, we first drop perpendiculars from $P$ to all four axes and draw $AB$ perpendicular to $PQ$. Inspection of the figure shows that $x'$ can be written as the sum of two lengths along the $x'$-axis, and $y'$ as the difference of two lengths along $AB$. All these lengths are expressed in terms of $x$, $y$, and $\theta$ in equations (11.5), to which we have added an equation for the third dimension. \begin{equation} \begin{alignedat}{4} &x'&&=x&&\cos\theta+y&&\sin\theta,\\ &y'&&=y&&\cos\theta-x&&\sin\theta,\\ &z'&&=z&&. \end{alignedat} \label{Eq:I:11:5} \end{equation} The next step is to analyze the relationship of forces as seen by the two observers, following the same general method as before. 
Let us assume that a force $\FLPF$, which has already been analyzed as having components $F_x$ and $F_y$ (as seen by Joe), is acting on a particle of mass $m$, located at point $P$ in Fig. 11–2. For simplicity, let us move both sets of axes so that the origin is at $P$, as shown in Fig. 11–3. Moe sees the components of $\FLPF$ along his axes as $F_{x'}$ and $F_{y'}$. $F_x$ has components along both the $x'$- and $y'$-axes, and $F_y$ likewise has components along both these axes. To express $F_{x'}$ in terms of $F_x$ and $F_y$, we sum these components along the $x'$-axis, and in a like manner we can express $F_{y'}$ in terms of $F_x$ and $F_y$. The results are \begin{equation} \begin{alignedat}{4} &F_{x'}&&=F_x&&\cos\theta+F_y&&\sin\theta,\\ &F_{y'}&&=F_y&&\cos\theta-F_x&&\sin\theta,\\ &F_{z'}&&=F_z&&. \end{alignedat} \label{Eq:I:11:6} \end{equation} It is interesting to note an accident of sorts, which is of extreme importance: the formulas (11.5) and (11.6), for coordinates of $P$ and components of $\FLPF$, respectively, are of identical form. As before, Newton’s laws are assumed to be true in Joe’s system, and are expressed by equations (11.1). The question, again, is whether Moe can apply Newton’s laws—will the results be correct for his system of rotated axes? In other words, if we assume that Eqs. (11.5) and (11.6) give the relationship of the measurements, is it true or not true that \begin{equation} \begin{alignedat}{2} &m(d^2x'&&/dt^2)=F_{x'},\\ &m(d^2y'&&/dt^2)=F_{y'},\\ &m(d^2z'&&/dt^2)=F_{z'}? \end{alignedat} \label{Eq:I:11:7} \end{equation} To test these equations, we calculate the left and right sides independently, and compare the results. To calculate the left sides, we multiply equations (11.5) by $m$, and differentiate twice with respect to time, assuming the angle $\theta$ to be constant. 
This gives \begin{alignat}{4} &m(d^2x'&&/dt^2)=m(d^2x&&/dt^2)\cos\theta+m(d^2y&&/dt^2)\sin\theta,\notag\\ &m(d^2y'&&/dt^2)=m(d^2y&&/dt^2)\cos\theta-m(d^2x&&/dt^2)\sin\theta,\notag\\ \label{Eq:I:11:8} &m(d^2z'&&/dt^2)=m(d^2z&&/dt^2). \end{alignat}
We calculate the right sides of equations (11.7) by substituting equations (11.1) into equations (11.6). This gives \begin{alignat}{4} &F_{x'}&&=m(d^2x&&/dt^2)\cos\theta+m(d^2y&&/dt^2)\sin\theta,\notag\\ &F_{y'}&&=m(d^2y&&/dt^2)\cos\theta-m(d^2x&&/dt^2)\sin\theta,\notag\\ \label{Eq:I:11:9} &F_{z'}&&=m(d^2z&&/dt^2). \end{alignat} Behold! The right sides of Eqs. (11.8) and (11.9) are identical, so we conclude that if Newton’s laws are correct on one set of axes, they are also valid on any other set of axes. This result, which has now been established for both translation and rotation of axes, has certain consequences: first, no one can claim his particular axes are unique, but of course they can be more convenient for certain particular problems. For example, it is handy to have gravity along one axis, but this is not physically necessary. Second, it means that any piece of equipment which is completely self-contained, with all the force-generating equipment completely inside the apparatus, would work the same when turned at an angle. |
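The whole calculation can also be verified numerically: rotate the acceleration components by Eq. (11.5) and the force components by Eq. (11.6), which is the same rule, and Newton's law holds in Moe's frame as well. A sketch with arbitrary sample numbers:

```python
import math

def rotate(x, y, theta):
    """Components along axes rotated by theta, per Eqs. (11.5)/(11.6)."""
    return (x * math.cos(theta) + y * math.sin(theta),
            y * math.cos(theta) - x * math.sin(theta))

m = 2.0
ax, ay = 1.5, -0.7        # acceleration in Joe's frame (made-up values)
Fx, Fy = m * ax, m * ay   # Newton's law assumed true for Joe

theta = 0.6               # angle of Moe's axes relative to Joe's
ax_p, ay_p = rotate(ax, ay, theta)
Fx_p, Fy_p = rotate(Fx, Fy, theta)
# m times the rotated acceleration equals the rotated force,
# component by component: Moe can use F = ma too.
```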
|
1 | 11 | Vectors | 4 | Vectors | Not only Newton’s laws, but also the other laws of physics, so far as we know today, have the two properties which we call invariance (or symmetry) under translation of axes and rotation of axes. These properties are so important that a mathematical technique has been developed to take advantage of them in writing and using physical laws. The foregoing analysis involved considerable tedious mathematical work. To reduce the details to a minimum in the analysis of such questions, a very powerful mathematical machinery has been devised. This system, called vector analysis, supplies the title of this chapter; strictly speaking, however, this is a chapter on the symmetry of physical laws. By the methods of the preceding analysis we were able to do everything required for obtaining the results that we sought, but in practice we should like to do things more easily and rapidly, so we employ the vector technique. We begin by noting some characteristics of two kinds of quantities that are important in physics. (Actually there are more than two, but let us start out with two.) One of them, like the number of potatoes in a sack, we call an ordinary quantity, or an undirected quantity, or a scalar. Temperature is an example of such a quantity. Other quantities that are important in physics do have direction, for instance velocity: we have to keep track of which way a body is going, not just its speed. Momentum and force also have direction, as does displacement: when someone steps from one place to another in space, we can keep track of how far he went, but if we wish also to know where he went, we have to specify a direction. All quantities that have a direction, like a step in space, are called vectors. A vector is three numbers. 
In order to represent a step in space, say from the origin to some particular point $P$ whose location is $(x,y,z)$, we really need three numbers, but we are going to invent a single mathematical symbol, $\FLPr$, which is unlike any other mathematical symbols we have so far used.1 It is not a single number, it represents three numbers: $x$, $y$, and $z$. It means three numbers, but not really only those three numbers, because if we were to use a different coordinate system, the three numbers would be changed to $x'$, $y'$, and $z'$. However, we want to keep our mathematics simple and so we are going to use the same mark to represent the three numbers $(x,y,z)$ and the three numbers $(x',y',z')$. That is, we use the same mark to represent the first set of three numbers for one coordinate system, but the second set of three numbers if we are using the other coordinate system. This has the advantage that when we change the coordinate system, we do not have to change the letters of our equations. If we write an equation in terms of $x,y,z$, and then use another system, we have to change to $x',y',z'$, but we shall just write $\FLPr$, with the convention that it represents $(x,y,z)$ if we use one set of axes, or $(x',y',z')$ if we use another set of axes, and so on. The three numbers which describe the quantity in a given coordinate system are called the components of the vector in the direction of the coordinate axes of that system. That is, we use the same symbol for the three letters that correspond to the same object, as seen from different axes. The very fact that we can say “the same object” implies a physical intuition about the reality of a step in space, that is independent of the components in terms of which we measure it. So the symbol $\FLPr$ will represent the same thing no matter how we turn the axes. 
Now suppose there is another directed physical quantity, any other quantity, which also has three numbers associated with it, like force, and these three numbers change to three other numbers by a certain mathematical rule, if we change the axes. It must be the same rule that changes $(x,y,z)$ into $(x',y',z')$. In other words, any physical quantity associated with three numbers which transform as do the components of a step in space is a vector. An equation like \begin{equation*} \FLPF=\FLPr \end{equation*} would thus be true in any coordinate system if it were true in one. This equation, of course, stands for the three equations \begin{equation*} F_x=x,\quad F_y=y,\quad F_z=z, \end{equation*} or, alternatively, for \begin{equation*} F_{x'}=x',\quad F_{y'}=y',\quad F_{z'}=z'. \end{equation*} The fact that a physical relationship can be expressed as a vector equation assures us the relationship is unchanged by a mere rotation of the coordinate system. That is the reason why vectors are so useful in physics. Now let us examine some of the properties of vectors. As examples of vectors we may mention velocity, momentum, force, and acceleration. For many purposes it is convenient to represent a vector quantity by an arrow that indicates the direction in which it is acting. Why can we represent force, say, by an arrow? Because it has the same mathematical transformation properties as a “step in space.” We thus represent it in a diagram as if it were a step, using a scale such that one unit of force, or one newton, corresponds to a certain convenient length. Once we have done this, all forces can be represented as lengths, because an equation like \begin{equation*} \FLPF=k\FLPr, \end{equation*} where $k$ is some constant, is a perfectly legitimate equation. Thus we can always represent forces by lines, which is very convenient, because once we have drawn the line we no longer need the axes. 
Of course, we can quickly calculate the three components as they change upon turning the axes, because that is just a geometric problem. |
|
1 | 11 | Vectors | 5 | Vector algebra | Now we must describe the laws, or rules, for combining vectors in various ways. The first such combination is the addition of two vectors: suppose that $\FLPa$ is a vector which in some particular coordinate system has the three components $(a_x,a_y,a_z)$, and that $\FLPb$ is another vector which has the three components $(b_x,b_y,b_z)$. Now let us invent three new numbers $(a_x+b_x,a_y+b_y,a_z+b_z)$. Do these form a vector? “Well,” we might say, “they are three numbers, and every three numbers form a vector.” No, not every three numbers form a vector! In order for it to be a vector, not only must there be three numbers, but these must be associated with a coordinate system in such a way that if we turn the coordinate system, the three numbers “revolve” on each other, get “mixed up” in each other, by the precise laws we have already described. So the question is, if we now rotate the coordinate system so that $(a_x,a_y,a_z)$ become $(a_{x'},a_{y'},a_{z'})$ and $(b_x,b_y,b_z)$ become $(b_{x'},b_{y'},b_{z'})$, what do $(a_x+b_x,a_y+b_y,a_z+b_z)$ become? Do they become $(a_{x'}+b_{x'}, a_{y'}+b_{y'}, a_{z'}+b_{z'})$ or not? The answer is, of course, yes, because the prototype transformations of Eq. (11.5) constitute what we call a linear transformation. If we apply those transformations to $a_x$ and $b_x$ to get $a_{x'} + b_{x'}$, we find that the transformed $a_x + b_x$ is indeed the same as $a_{x'} + b_{x'}$. When $\FLPa$ and $\FLPb$ are “added together” in this sense, they will form a vector which we may call $\FLPc$. We would write this as \begin{equation*} \FLPc = \FLPa + \FLPb. \end{equation*} Now $\FLPc$ has the interesting property \begin{equation*} \FLPc = \FLPb + \FLPa, \end{equation*} as we can immediately see from its components. Thus also, \begin{equation*} \FLPa + (\FLPb + \FLPc) = (\FLPa + \FLPb) + \FLPc. \end{equation*} We can add vectors in any order. What is the geometric significance of $\FLPa + \FLPb$? 
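The linearity argument can be checked directly. In this sketch (the component values and angle are arbitrary), rotating the componentwise sum of two triples gives the same result as summing the rotated triples:

```python
import math

def rotate(v, theta):
    """Transform components (x, y, z) into primed axes turned by theta
    about the z-axis, following the pattern of Eq. (11.5)."""
    x, y, z = v
    return (x * math.cos(theta) + y * math.sin(theta),
            y * math.cos(theta) - x * math.sin(theta),
            z)

a = (1.0, 2.0, 3.0)      # sample components of vector a
b = (-0.5, 4.0, 1.0)     # sample components of vector b
theta = 0.8

sum_then_rotate = rotate(tuple(x + y for x, y in zip(a, b)), theta)
rotate_then_sum = tuple(x + y for x, y in zip(rotate(a, theta), rotate(b, theta)))

# Because the transformation is linear, the two orders agree component by
# component, so (a_x + b_x, a_y + b_y, a_z + b_z) is itself a vector.
assert all(abs(p - q) < 1e-12 for p, q in zip(sum_then_rotate, rotate_then_sum))
```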
Suppose that $\FLPa$ and $\FLPb$ were represented by lines on a piece of paper, what would $\FLPc$ look like? This is shown in Fig. 11–4. We see that we can add the components of $\FLPb$ to those of $\FLPa$ most conveniently if we place the rectangle representing the components of $\FLPb$ next to that representing the components of $\FLPa$ in the manner indicated. Since $\FLPb$ just “fits” into its rectangle, as does $\FLPa$ into its rectangle, this is the same as putting the “tail” of $\FLPb$ on the “head” of $\FLPa$, the arrow from the “tail” of $\FLPa$ to the “head” of $\FLPb$ being the vector $\FLPc$. Of course, if we added $\FLPa$ to $\FLPb$ the other way around, we would put the “tail” of $\FLPa$ on the “head” of $\FLPb$, and by the geometrical properties of parallelograms we would get the same result for $\FLPc$. Note that vectors can be added in this way without reference to any coordinate axes. Suppose we multiply a vector by a number $\alpha$, what does this mean? We define it to mean a new vector whose components are $\alpha a_x$, $\alpha a_y$, and $\alpha a_z$. We leave it as a problem for the student to prove that it is a vector. Now let us consider vector subtraction. We may define subtraction in the same way as addition, but instead of adding, we subtract the components. Or we might define subtraction by defining a negative vector, $-\FLPb = -1 \FLPb$, and then we would add the components. It comes to the same thing. The result is shown in Fig. 11–5. This figure shows $\FLPd =$ $\FLPa - \FLPb =$ $\FLPa + (-\FLPb)$; we also note that the difference $\FLPa - \FLPb$ can be found very easily from $\FLPa$ and $\FLPb$ by using the equivalent relation $\FLPa = \FLPb + \FLPd$. Thus the difference is even easier to find than the sum: we just draw the vector from $\FLPb$ to $\FLPa$, to get $\FLPa - \FLPb$! Next we discuss velocity. Why is velocity a vector? If position is given by the three coordinates $(x,y,z)$, what is the velocity? 
The velocity is given by $dx/dt$, $dy/dt$, and $dz/dt$. Is that a vector, or not? We can find out by differentiating the expressions in Eq. (11.5) to find out whether $dx'/dt$ transforms in the right way. We see that the components $dx/dt$ and $dy/dt$ do transform according to the same law as $x$ and $y$, and therefore the time derivative is a vector. So the velocity is a vector. We can write the velocity in an interesting way as \begin{equation*} \FLPv=d\FLPr/dt. \end{equation*} What the velocity is, and why it is a vector, can also be understood more pictorially: How far does a particle move in a short time $\Delta t$? Answer: $\Delta\FLPr$, so if a particle is “here” at one instant and “there” at another instant, then the vector difference of the positions $\Delta\FLPr = \FLPr_2 - \FLPr_1$, which is in the direction of motion shown in Fig. 11–6, divided by the time interval $\Delta t = t_2 - t_1$, is the “average velocity” vector. In other words, by vector velocity we mean the limit, as $\Delta t$ goes to $0$, of the difference between the radius vectors at the time $t + \Delta t$ and the time $t$, divided by $\Delta t$: \begin{equation} \label{Eq:I:11:10} \FLPv=\lim_{\Delta t\to0}(\Delta\FLPr/\Delta t)=d\FLPr/dt. \end{equation} Thus velocity is a vector because it is the difference of two vectors. It is also the right definition of velocity because its components are $dx/dt$, $dy/dt$, and $dz/dt$. In fact, we see from this argument that if we differentiate any vector with respect to time we produce a new vector. So we have several ways of producing new vectors: (1) multiply by a constant, (2) differentiate with respect to time, (3) add or subtract two vectors. |
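The limit in Eq. (11.10) can be illustrated numerically. In this sketch the particle's path (a hypothetical helix) and the sample time are arbitrary choices; the point is only that the average velocity $\Delta\FLPr/\Delta t$ approaches the componentwise derivatives as $\Delta t$ shrinks:

```python
import math

def r(t):
    """Hypothetical position of a particle moving on a helix."""
    return (math.cos(t), math.sin(t), 0.5 * t)

def v_exact(t):
    """Componentwise derivatives dx/dt, dy/dt, dz/dt of r(t)."""
    return (-math.sin(t), math.cos(t), 0.5)

t = 1.3
errors = []
for dt in (0.1, 0.01, 0.001):
    # average velocity: (r(t + dt) - r(t)) / dt, component by component
    avg = tuple((b - a) / dt for a, b in zip(r(t), r(t + dt)))
    errors.append(max(abs(u - v) for u, v in zip(avg, v_exact(t))))

# As dt shrinks, the average velocity approaches dr/dt.
assert errors[0] > errors[1] > errors[2]
```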
|
1 | 11 | Vectors | 6 | Newton’s laws in vector notation | In order to write Newton’s laws in vector form, we have to go just one step further, and define the acceleration vector. This is the time derivative of the velocity vector, and it is easy to demonstrate that its components are the second derivatives of $x$, $y$, and $z$ with respect to $t$: \begin{equation} \label{Eq:I:11:11} \FLPa=\ddt{\FLPv}{t} = \biggl(\ddt{}{t}\biggr)\biggl(\ddt{\FLPr}{t}\biggr) = \frac{d^2\FLPr}{dt^2}, \end{equation}
\begin{equation} \begin{alignedat}{2} a_x &= \ddt{v_x}{t} &= \frac{d^2x}{dt^2},\\[1ex] a_y &= \ddt{v_y}{t} &= \frac{d^2y}{dt^2},\\[1ex] a_z &= \ddt{v_z}{t} &= \frac{d^2z}{dt^2}. \end{alignedat} \label{Eq:I:11:12} \end{equation} With this definition, then, Newton’s laws can be written in this way: \begin{equation} \label{Eq:I:11:13} m\FLPa = \FLPF \end{equation} or \begin{equation} \label{Eq:I:11:14} m(d^2\FLPr/dt^2) = \FLPF. \end{equation} Now the problem of proving the invariance of Newton’s laws under rotation of coordinates is this: prove that $\FLPa$ is a vector; this we have just done. Prove that $\FLPF$ is a vector; we suppose it is. So if force is a vector, then, since we know acceleration is a vector, Eq. (11.13) will look the same in any coordinate system. Writing it in a form which does not explicitly contain $x$’s, $y$’s, and $z$’s has the advantage that from now on we need not write three laws every time we write Newton’s equations or other laws of physics. We write what looks like one law, but really, of course, it is the three laws for any particular set of axes, because any vector equation involves the statement that each of the components is equal. The fact that the acceleration is the rate of change of the vector velocity helps us to calculate the acceleration in some rather complicated circumstances. Suppose, for instance, that a particle is moving on some complicated curve (Fig. 11–7) and that, at a given instant $t_1$, it had a certain velocity $\FLPv_1$, but that when we go to another instant $t_2$ a little later, it has a different velocity $\FLPv_2$. What is the acceleration? Answer: Acceleration is the difference in the velocity divided by the small time interval, so we need the difference of the two velocities. How do we get the difference of the velocities? To subtract two vectors, we put the vector across the ends of $\FLPv_2$ and $\FLPv_1$; that is, we draw $\Delta\FLPv$ as the difference of the two vectors, right? No! 
That only works when the tails of the vectors are in the same place! It has no meaning if we move the vector somewhere else and then draw a line across, so watch out! We have to draw a new diagram to subtract the vectors. In Fig. 11–8, $\FLPv_1$ and $\FLPv_2$ are both drawn parallel and equal to their counterparts in Fig. 11–7, and now we can discuss the acceleration. Of course the acceleration is simply $\Delta\FLPv/\Delta t$. It is interesting to note that we can compose the velocity difference out of two parts; we can think of acceleration as having two components, $\Delta\FLPv_\parallel$, in the direction tangent to the path and $\Delta\FLPv_\perp$ at right angles to the path, as indicated in Fig. 11–8. The acceleration tangent to the path is, of course, just the change in the length of the vector, i.e., the change in the speed $v$: \begin{equation} \label{Eq:I:11:15} a_\parallel=dv/dt. \end{equation} The other component of acceleration, at right angles to the curve, is easy to calculate, using Figs. 11–7 and 11–8. In the short time $\Delta t$ let the change in angle between $\FLPv_1$ and $\FLPv_2$ be the small angle $\Delta\theta$. If the magnitude of the velocity is called $v$, then of course \begin{equation*} \Delta v_\perp=v\,\Delta\theta \end{equation*} and the acceleration $a$ will be \begin{equation*} a_\perp=v\,(\Delta\theta/\Delta t). \end{equation*} Now we need to know $\Delta\theta/\Delta t$, which can be found this way: If, at the given moment, the curve is approximated as a circle of a certain radius $R$, then in a time $\Delta t$ the distance $s$ is, of course, $v\,\Delta t$, where $v$ is the speed. \begin{equation} \Delta\theta=(v\,\Delta t)/R,\quad \text{or}\quad \Delta\theta/\Delta t=v/R.\notag \end{equation} Therefore, we find \begin{equation} \label{Eq:I:11:16} a_\perp=v^2/R, \end{equation} as we have seen before. |
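For uniform circular motion the speed is constant, so $a_\parallel = 0$ and the whole acceleration should be the perpendicular component $v^2/R$ of Eq. (11.16). A numerical sketch (radius, angular rate, and time step are arbitrary sample values):

```python
import math

R, omega = 2.0, 1.5        # sample radius and angular rate
v = omega * R              # constant speed along the circle

def vel(t):
    """Velocity components for motion around a circle of radius R."""
    return (-R * omega * math.sin(omega * t), R * omega * math.cos(omega * t))

t, dt = 0.4, 1e-6
v1, v2 = vel(t), vel(t + dt)

# |delta v| / delta t approximates the magnitude of the acceleration
a_num = math.hypot(v2[0] - v1[0], v2[1] - v1[1]) / dt

# Eq. (11.16): a_perp = v^2 / R
assert abs(a_num - v**2 / R) < 1e-4
```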
|
1 | 11 | Vectors | 7 | Scalar product of vectors | Now let us examine a little further the properties of vectors. It is easy to see that the length of a step in space would be the same in any coordinate system. That is, if a particular step $\FLPr$ is represented by $x,y,z$, in one coordinate system, and by $x',y',z'$ in another coordinate system, surely the distance $r=\abs{\FLPr}$ would be the same in both. Now \begin{equation*} r=\sqrt{x^2+y^2+z^2} \end{equation*} and also \begin{equation*} r'=\sqrt{x'^2+y'^2+z'^2}. \end{equation*} So what we wish to verify is that these two quantities are equal. It is much more convenient not to bother to take the square root, so let us talk about the square of the distance; that is, let us find out whether \begin{equation} \label{Eq:I:11:17} x^2+y^2+z^2=x'^2+y'^2+z'^2. \end{equation} It had better be—and if we substitute Eq. (11.5) we do indeed find that it is. So we see that there are other kinds of equations which are true for any two coordinate systems. Something new is involved. We can produce a new quantity, a function of $x$, $y$, and $z$, called a scalar function, a quantity which has no direction but which is the same in both systems. Out of a vector we can make a scalar. We have to find a general rule for that. It is clear what the rule is for the case just considered: add the squares of the components. Let us now define a new thing, which we call $\FLPa\cdot\FLPa$. This is not a vector, but a scalar; it is a number that is the same in all coordinate systems, and it is defined to be the sum of the squares of the three components of the vector: \begin{equation} \label{Eq:I:11:18} \FLPa\cdot\FLPa=a_x^2+a_y^2+a_z^2. \end{equation} Now you say, “But with what axes?” It does not depend on the axes, the answer is the same in every set of axes. 
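Eq. (11.17) is easy to confirm numerically for the rotation (11.5); in this sketch the components and angle are arbitrary sample values:

```python
import math

x, y, z = 1.0, 2.0, 3.0    # arbitrary sample components
theta = 0.7                # arbitrary rotation angle about the z-axis

# Primed components, following Eq. (11.5)
xp = x * math.cos(theta) + y * math.sin(theta)
yp = y * math.cos(theta) - x * math.sin(theta)
zp = z

# Eq. (11.17): the sum of the squares is the same in both systems,
# since cos^2 + sin^2 = 1 and the cross terms cancel.
a_dot_a = x*x + y*y + z*z          # the invariant of Eq. (11.18)
assert abs(a_dot_a - (xp*xp + yp*yp + zp*zp)) < 1e-12
```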
So we have a new kind of quantity, a new invariant or scalar produced by one vector “squared.” If we now define the following quantity for any two vectors $\FLPa$ and $\FLPb$: \begin{equation} \label{Eq:I:11:19} \FLPa\cdot\FLPb=a_xb_x+a_yb_y+a_zb_z, \end{equation} we find that this quantity, calculated in the primed and unprimed systems, also stays the same. To prove it we note that it is true of $\FLPa\cdot\FLPa$, $\FLPb\cdot\FLPb$, and $\FLPc\cdot\FLPc$, where $\FLPc = \FLPa + \FLPb$. Therefore the sum of the squares $(a_x + b_x)^2 + (a_y + b_y)^2 + (a_z + b_z)^2$ will be invariant: \begin{align} (a_x &+ b_x)^2 + (a_y + b_y)^2 +\;(a_z + b_z)^2 = \notag\\[1ex] \label{Eq:I:11:20} &(a_{x'} + b_{x'})^2 + (a_{y'} + b_{y'})^2 + (a_{z'} + b_{z'})^2. \end{align} If both sides of this equation are expanded, there will be cross products of just the type appearing in Eq. (11.19), as well as the sums of squares of the components of $\FLPa$ and $\FLPb$. The invariance of terms of the form of Eq. (11.18) then leaves the cross product terms (11.19) invariant also. The quantity $\FLPa\cdot\FLPb$ is called the scalar product of two vectors, $\FLPa$ and $\FLPb$, and it has many interesting and useful properties. For instance, it is easily proved that \begin{equation} \label{Eq:I:11:21} \FLPa\cdot(\FLPb+\FLPc)=\FLPa\cdot\FLPb+\FLPa\cdot\FLPc. \end{equation} Also, there is a simple geometrical way to calculate $\FLPa\cdot\FLPb$, without having to calculate the components of $\FLPa$ and $\FLPb$: $\FLPa\cdot\FLPb$ is the product of the length of $\FLPa$ and the length of $\FLPb$ times the cosine of the angle between them. Why? Suppose that we choose a special coordinate system in which the $x$-axis lies along $\FLPa$; in those circumstances, the only component of $\FLPa$ that will be there is $a_x$, which is of course the whole length of $\FLPa$. Thus Eq. 
(11.19) reduces to $\FLPa\cdot\FLPb = a_xb_x$ for this case, and this is the length of $\FLPa$ times the component of $\FLPb$ in the direction of $\FLPa$, that is, $b\cos\theta$: \begin{equation*} \FLPa\cdot\FLPb=ab\cos\theta. \end{equation*} Therefore, in that special coordinate system, we have proved that $\FLPa\cdot\FLPb$ is the length of $\FLPa$ times the length of $\FLPb$ times $\cos\theta$. But if it is true in one coordinate system, it is true in all, because $\FLPa\cdot\FLPb$ is independent of the coordinate system; that is our argument. What good is the dot product? Are there any cases in physics where we need it? Yes, we need it all the time. For instance, in Chapter 4 the kinetic energy was called $\tfrac{1}{2}mv^2$, but if the object is moving in space it should be the velocity squared in the $x$-direction, the $y$-direction, and the $z$-direction, and so the formula for kinetic energy according to vector analysis is \begin{equation} \label{Eq:I:11:22} \text{K.E.}=\tfrac{1}{2}m(\FLPv\cdot\FLPv)=\tfrac{1}{2}m (v_x^2+v_y^2+v_z^2). \end{equation} Energy does not have direction. Momentum has direction; it is a vector, and it is the mass times the velocity vector. Another example of a dot product is the work done by a force when something is pushed from one place to the other. We have not yet defined work, but it is equivalent to the energy change, the weights lifted, when a force $\FLPF$ acts through a distance $\FLPs$: \begin{equation} \label{Eq:I:11:23} \text{Work}=\FLPF\cdot\FLPs. \end{equation} It is sometimes very convenient to talk about the component of a vector in a certain direction (say the vertical direction because that is the direction of gravity). For such purposes, it is useful to invent what we call a unit vector in the direction that we want to study. By a unit vector we mean one whose dot product with itself is equal to unity. Let us call this unit vector $\FLPi$; then $\FLPi\cdot\FLPi = 1$. 
Then, if we want the component of some vector in the direction of $\FLPi$, we see that the dot product $\FLPa\cdot\FLPi$ will be $a\cos\theta$, i.e., the component of $\FLPa$ in the direction of $\FLPi$. This is a nice way to get the component; in fact, it permits us to get all the components and to write a rather amusing formula. Suppose that in a given system of coordinates, $x$, $y$, and $z$, we invent three vectors: $\FLPi$, a unit vector in the direction $x$; $\FLPj$, a unit vector in the direction $y$; and $\FLPk$, a unit vector in the direction $z$. Note first that $\FLPi\cdot\FLPi = 1$. What is $\FLPi\cdot\FLPj$? When two vectors are at right angles, their dot product is zero. Thus \begin{alignat}{6} &\FLPi\cdot\FLPi&&=1\notag\\[1ex] &\FLPi\cdot\FLPj&&=0&\quad &\FLPj\cdot\FLPj&&=1\notag\\[1ex] \label{Eq:I:11:24} &\FLPi\cdot\FLPk&&=0&\quad &\FLPj\cdot\FLPk&&=0&\quad &\FLPk\cdot\FLPk&&=1 \end{alignat} Now with these definitions, any vector whatsoever can be written this way: \begin{equation} \label{Eq:I:11:25} \FLPa=a_x\FLPi+a_y\FLPj+a_z\FLPk. \end{equation} By this means we can go from the components of a vector to the vector itself. This discussion of vectors is by no means complete. However, rather than try to go more deeply into the subject now, we shall first learn to use in physical situations some of the ideas so far discussed. Then, when we have properly mastered this basic material, we shall find it easier to penetrate more deeply into the subject without getting too confused. We shall later find that it is useful to define another kind of product of two vectors, called the vector product, and written as $\FLPa\times\FLPb$. However, we shall undertake a discussion of such matters in a later chapter. |
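The properties just listed can be pulled together in one sketch (the sample vectors and angle are arbitrary): the scalar product of Eq. (11.19) is unchanged by a rotation of axes, the unit vectors satisfy Eq. (11.24), and Eq. (11.25) rebuilds a vector from its components:

```python
import math

def dot(u, v):
    # Eq. (11.19): a.b = a_x b_x + a_y b_y + a_z b_z
    return sum(ui * vi for ui, vi in zip(u, v))

def rotate(v, theta):
    # Primed components under a rotation by theta about the z-axis
    x, y, z = v
    return (x * math.cos(theta) + y * math.sin(theta),
            y * math.cos(theta) - x * math.sin(theta),
            z)

a = (3.0, -1.0, 2.0)
b = (1.0, 4.0, 0.5)

# The scalar product is invariant: the same number in the primed system.
assert abs(dot(a, b) - dot(rotate(a, 1.1), rotate(b, 1.1))) < 1e-12

# Unit vectors i, j, k along the axes satisfy Eq. (11.24) ...
i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert dot(i, i) == 1 and dot(i, j) == 0 and dot(j, k) == 0

# ... and Eq. (11.25): a = a_x i + a_y j + a_z k
rebuilt = tuple(a[0]*ic + a[1]*jc + a[2]*kc for ic, jc, kc in zip(i, j, k))
assert rebuilt == a
```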
|
1 | 12 | Characteristics of Force | 1 | What is a force? | Although it is interesting and worthwhile to study the physical laws simply because they help us to understand and to use nature, one ought to stop every once in a while and think, “What do they really mean?” The meaning of any statement is a subject that has interested and troubled philosophers from time immemorial, and the meaning of physical laws is even more interesting, because it is generally believed that these laws represent some kind of real knowledge. The meaning of knowledge is a deep problem in philosophy, and it is always important to ask, “What does it mean?” Let us ask, “What is the meaning of the physical laws of Newton, which we write as $F=ma$? What is the meaning of force, mass, and acceleration?” Well, we can intuitively sense the meaning of mass, and we can define acceleration if we know the meaning of position and time. We shall not discuss those meanings, but shall concentrate on the new concept of force. The answer is equally simple: “If a body is accelerating, then there is a force on it.” That is what Newton’s laws say, so the most precise and beautiful definition of force imaginable might simply be to say that force is the mass of an object times the acceleration. Suppose we have a law which says that the conservation of momentum is valid if the sum of all the external forces is zero; then the question arises, “What does it mean, that the sum of all the external forces is zero?” A pleasant way to define that statement would be: “When the total momentum is a constant, then the sum of the external forces is zero.” There must be something wrong with that, because it is just not saying anything new. If we have discovered a fundamental law, which asserts that the force is equal to the mass times the acceleration, and then define the force to be the mass times the acceleration, we have found out nothing. 
We could also define force to mean that a moving object with no force acting on it continues to move with constant velocity in a straight line. If we then observe an object not moving in a straight line with a constant velocity, we might say that there is a force on it. Now such things certainly cannot be the content of physics, because they are definitions going in a circle. The Newtonian statement above, however, seems to be a most precise definition of force, and one that appeals to the mathematician; nevertheless, it is completely useless, because no prediction whatsoever can be made from a definition. One might sit in an armchair all day long and define words at will, but to find out what happens when two balls push against each other, or when a weight is hung on a spring, is another matter altogether, because the way the bodies behave is something completely outside any choice of definitions. For example, if we were to choose to say that an object left to itself keeps its position and does not move, then when we see something drifting, we could say that must be due to a “gorce”—a gorce is the rate of change of position. Now we have a wonderful new law, everything stands still except when a gorce is acting. You see, that would be analogous to the above definition of force, and it would contain no information. The real content of Newton’s laws is this: that the force is supposed to have some independent properties, in addition to the law $F=ma$; but the specific independent properties that the force has were not completely described by Newton or by anybody else, and therefore the physical law $F=ma$ is an incomplete law. It implies that if we study the mass times the acceleration and call the product the force, i.e., if we study the characteristics of force as a program of interest, then we shall find that forces have some simplicity; the law is a good program for analyzing nature, it is a suggestion that the forces will be simple. 
Now the first example of such forces was the complete law of gravitation, which was given by Newton, and in stating the law he answered the question, “What is the force?” If there were nothing but gravitation, then the combination of this law and the force law (second law of motion) would be a complete theory, but there is much more than gravitation, and we want to use Newton’s laws in many different situations. Therefore in order to proceed we have to tell something about the properties of force. For example, in dealing with force the tacit assumption is always made that the force is equal to zero unless some physical body is present, that if we find a force that is not equal to zero we also find something in the neighborhood that is a source of the force. This assumption is entirely different from the case of the “gorce” that we introduced above. One of the most important characteristics of force is that it has a material origin, and this is not just a definition. Newton also gave one rule about the force: that the forces between interacting bodies are equal and opposite—action equals reaction; that rule, it turns out, is not exactly true. In fact, the law $F=ma$ is not exactly true; if it were a definition we should have to say that it is always exactly true; but it is not. The student may object, “I do not like this imprecision, I should like to have everything defined exactly; in fact, it says in some books that any science is an exact subject, in which everything is defined.” If you insist upon a precise definition of force, you will never get it! First, because Newton’s Second Law is not exact, and second, because in order to understand physical laws you must understand that they are all some kind of approximation. Any simple idea is approximate; as an illustration, consider an object, … what is an object? 
Philosophers are always saying, “Well, just take a chair for example.” The moment they say that, you know that they do not know what they are talking about any more. What is a chair? Well, a chair is a certain thing over there … certain?, how certain? The atoms are evaporating from it from time to time—not many atoms, but a few—dirt falls on it and gets dissolved in the paint; so to define a chair precisely, to say exactly which atoms are chair, and which atoms are air, or which atoms are dirt, or which atoms are paint that belongs to the chair is impossible. So the mass of a chair can be defined only approximately. In the same way, to define the mass of a single object is impossible, because there are not any single, left-alone objects in the world—every object is a mixture of a lot of things, so we can deal with it only as a series of approximations and idealizations. The trick is the idealizations. To an excellent approximation of perhaps one part in $10^{10}$, the number of atoms in the chair does not change in a minute, and if we are not too precise we may idealize the chair as a definite thing; in the same way we shall learn about the characteristics of force, in an ideal fashion, if we are not too precise. One may be dissatisfied with the approximate view of nature that physics tries to obtain (the attempt is always to increase the accuracy of the approximation), and may prefer a mathematical definition; but mathematical definitions can never work in the real world. A mathematical definition will be good for mathematics, in which all the logic can be followed out completely, but the physical world is complex, as we have indicated in a number of examples, such as those of the ocean waves and a glass of wine. When we try to isolate pieces of it, to talk about one mass, the wine and the glass, how can we know which is which, when one dissolves in the other? 
The forces on a single thing already involve approximation, and if we have a system of discourse about the real world, then that system, at least for the present day, must involve approximations of some kind. This system is quite unlike the case of mathematics, in which everything can be defined, and then we do not know what we are talking about. In fact, the glory of mathematics is that we do not have to say what we are talking about. The glory is that the laws, the arguments, and the logic are independent of what “it” is. If we have any other set of objects that obey the same system of axioms as Euclid’s geometry, then if we make new definitions and follow them out with correct logic, all the consequences will be correct, and it makes no difference what the subject was. In nature, however, when we draw a line or establish a line by using a light beam and a theodolite, as we do in surveying, are we measuring a line in the sense of Euclid? No, we are making an approximation; the cross hair has some width, but a geometrical line has no width, and so, whether Euclidean geometry can be used for surveying or not is a physical question, not a mathematical question. However, from an experimental standpoint, not a mathematical standpoint, we need to know whether the laws of Euclid apply to the kind of geometry that we use in measuring land; so we make a hypothesis that it does, and it works pretty well; but it is not precise, because our surveying lines are not really geometrical lines. Whether or not those lines of Euclid, which are really abstract, apply to the lines of experience is a question for experience; it is not a question that can be answered by sheer reason. In the same way, we cannot just call $F=ma$ a definition, deduce everything purely mathematically, and make mechanics a mathematical theory, when mechanics is a description of nature. 
By establishing suitable postulates it is always possible to make a system of mathematics, just as Euclid did, but we cannot make a mathematics of the world, because sooner or later we have to find out whether the axioms are valid for the objects of nature. Thus we immediately get involved with these complicated and “dirty” objects of nature, but with approximations ever increasing in accuracy. |
|
1 | 12 | Characteristics of Force | 2 | Friction | The foregoing considerations show that a true understanding of Newton’s laws requires a discussion of forces, and it is the purpose of this chapter to introduce such a discussion, as a kind of completion of Newton’s laws. We have already studied the definitions of acceleration and related ideas, but now we have to study the properties of force, and this chapter, unlike the previous chapters, will not be very precise, because forces are quite complicated. To begin with a particular force, let us consider the drag on an airplane flying through the air. What is the law for that force? (Surely there is a law for every force, we must have a law!) One can hardly think that the law for that force will be simple. Try to imagine what makes a drag on an airplane flying through the air—the air rushing over the wings, the swirling in the back, the changes going on around the fuselage, and many other complications, and you see that there is not going to be a simple law. On the other hand, it is a remarkable fact that the drag force on an airplane is approximately a constant times the square of the velocity, or $F\approx cv^2$. Now what is the status of such a law, is it analogous to $F=ma$? Not at all, because in the first place this law is an empirical thing that is obtained roughly by tests in a wind tunnel. You say, “Well $F=ma$ might be empirical too.” That is not the reason that there is a difference. The difference is not that it is empirical, but that, as we understand nature, this law is the result of an enormous complexity of events and is not, fundamentally, a simple thing. If we continue to study it more and more, measuring more and more accurately, the law will continue to become more complicated, not less. 
In other words, as we study this law of the drag on an airplane more and more closely, we find out that it is “falser” and “falser,” and the more deeply we study it, and the more accurately we measure, the more complicated the truth becomes; so in that sense we consider it not to result from a simple, fundamental process, which agrees with our original surmise. For example, if the velocity is extremely low, so low that an ordinary airplane is not flying, as when the airplane is dragged slowly through the air, then the law changes, and the drag friction depends more nearly linearly on the velocity. To take another example, the frictional drag on a ball or a bubble or anything that is moving slowly through a viscous liquid like honey, is proportional to the velocity, but for motion so fast that the fluid swirls around (honey does not but water and air do) then the drag becomes more nearly proportional to the square of the velocity ($F=cv^2$), and if the velocity continues to increase, then even this law begins to fail. People who say, “Well the coefficient changes slightly,” are dodging the issue. Second, there are other great complications: can this force on the airplane be divided or analyzed as a force on the wings, a force on the front, and so on? Indeed, this can be done, if we are concerned about the torques here and there, but then we have to get special laws for the force on the wings, and so on. It is an amazing fact that the force on a wing depends upon the other wing: in other words, if we take the airplane apart and put just one wing in the air, then the force is not the same as if the rest of the plane were there. The reason, of course, is that some of the wind that hits the front goes around to the wings and changes the force on the wings. 
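The change of regime described here — drag nearly linear in $v$ at very low speeds, nearly quadratic at flying speeds — can be sketched numerically. This is purely an illustration: the coefficients below are invented, not properties of any real airplane or fluid, which is exactly the point of the passage.

```python
# Toy sketch of the two drag regimes. The coefficients b and c are
# hypothetical, chosen only to illustrate the crossover.

def linear_drag(v, b=2.0):
    # Low-speed (viscous) regime: drag nearly proportional to v.
    return b * v

def quadratic_drag(v, c=0.5):
    # High-speed regime: drag nearly proportional to v squared.
    return c * v * v

# At low speed the linear term dominates; at high speed the quadratic
# term takes over -- the "law" changes with the regime.
print(linear_drag(0.1) > quadratic_drag(0.1))      # True
print(quadratic_drag(100.0) > linear_drag(100.0))  # True
```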
It seems a miracle that there is such a simple, rough, empirical law that can be used in the design of airplanes, but this law is not in the same class as the basic laws of physics, and further study of it will only make it more and more complicated. A study of how the coefficient $c$ depends on the shape of the front of the airplane is, to put it mildly, frustrating. There just is no simple law for determining the coefficient in terms of the shape of the airplane. In contrast, the law of gravitation is simple, and further study only indicates its greater simplicity. We have just discussed two cases of friction, resulting from fast movement in air and slow movement in honey. There is another kind of friction, called dry friction or sliding friction, which occurs when one solid body slides on another. In this case a force is needed to maintain motion. This is called a frictional force, and its origin, also, is a very complicated matter. Both surfaces of contact are irregular, on an atomic level. There are many points of contact where the atoms seem to cling together, and then, as the sliding body is pulled along, the atoms snap apart and vibration ensues; something like that has to happen. Formerly the mechanism of this friction was thought to be very simple, that the surfaces were merely full of irregularities and the friction originated in lifting the slider over the bumps; but this cannot be, for there is no loss of energy in that process, whereas power is in fact consumed. The mechanism of power loss is that as the slider snaps over the bumps, the bumps deform and then generate waves and atomic motions and, after a while, heat, in the two bodies. Now it is very remarkable that again, empirically, this friction can be described approximately by a simple law. This law is that the force needed to overcome friction and to drag one object over another depends upon the normal force (i.e., perpendicular to the surface) between the two surfaces that are in contact. 
Actually, to a fairly good approximation, the frictional force is proportional to this normal force, and has a more or less constant coefficient; that is, \begin{equation} \label{Eq:I:12:1} F=\mu N, \end{equation} where $\mu$ is called the coefficient of friction (Fig. 12–1). Although this coefficient is not exactly constant, the formula is a good empirical rule for judging approximately the amount of force that will be needed in certain practical or engineering circumstances. If the normal force or the speed of motion gets too big, the law fails because of the excessive heat generated. It is important to realize that each of these empirical laws has its limitations, beyond which it does not really work. That the formula $F=\mu N$ is approximately correct can be demonstrated by a simple experiment. We set up a plane, inclined at a small angle $\theta$, and place a block of weight $W$ on the plane. We then tilt the plane at a steeper angle, until the block just begins to slide from its own weight. The component of the weight downward along the plane is $W\sin\theta$, and this must equal the frictional force $F$ when the block is sliding uniformly. The component of the weight normal to the plane is $W\cos\theta$, and this is the normal force $N$. With these values, the formula becomes $W\sin\theta=\mu W\cos\theta$, from which we get $\mu=$ $\sin\theta/\cos\theta=$ $\tan\theta$. If this law were exactly true, an object would start to slide at some definite inclination. If the same block is loaded by putting extra weight on it, then, although $W$ is increased, all the forces in the formula are increased in the same proportion, and $W$ cancels out. If $\mu$ stays constant, the loaded block will slide again at the same slope. When the angle $\theta$ is determined by trial with the original weight, it is found that with the greater weight the block will slide at about the same angle. 
This will be true even when one weight is many times as great as the other, and so we conclude that the coefficient of friction is independent of the weight. In performing this experiment it is noticeable that when the plane is tilted at about the correct angle $\theta$, the block does not slide steadily but in a halting fashion. At one place it may stop, at another it may move with acceleration. This behavior indicates that the coefficient of friction is only roughly a constant, and varies from place to place along the plane. The same erratic behavior is observed whether the block is loaded or not. Such variations are caused by different degrees of smoothness or hardness of the plane, and perhaps dirt, oxides, or other foreign matter. The tables that list purported values of $\mu$ for “steel on steel,” “copper on copper,” and the like, are all false, because they ignore the factors mentioned above, which really determine $\mu$. The friction is never due to “copper on copper,” etc., but to the impurities clinging to the copper. In experiments of the type described above, the friction is nearly independent of the velocity. Many people believe that the friction to be overcome to get something started (static friction) exceeds the force required to keep it sliding (sliding friction), but with dry metals it is very hard to show any difference. The opinion probably arises from experiences where small bits of oil or lubricant are present, or where blocks, for example, are supported by springs or other flexible supports so that they appear to bind. It is quite difficult to do accurate quantitative experiments in friction, and the laws of friction are still not analyzed very well, in spite of the enormous engineering value of an accurate analysis. Although the law $F=\mu N$ is fairly accurate once the surfaces are standardized, the reason for this form of the law is not really understood. 
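The inclined-plane experiment reduces to a few lines of arithmetic; the cancellation of $W$ is the whole content of the observation that the loaded and unloaded block slide at about the same angle.

```python
import math

def mu_from_slipping_angle(theta_deg):
    # At the angle where sliding just begins, W sin(theta) = mu * W cos(theta);
    # the weight W cancels, leaving mu = tan(theta).
    return math.tan(math.radians(theta_deg))

def slipping_angle_deg(mu):
    # Inverse relation: the tilt at which a block with coefficient mu slides.
    return math.degrees(math.atan(mu))

# The weight never appears in either formula, so adding load does not
# change the angle at which the block starts to slide.
print(round(mu_from_slipping_angle(30.0), 3))  # 0.577
```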
To show that the coefficient $\mu$ is nearly independent of velocity requires some delicate experimentation, because the apparent friction is much reduced if the lower surface vibrates very fast. When the experiment is done at very high speed, care must be taken that the objects do not vibrate relative to one another, since apparent decreases of the friction at high speed are often due to vibrations. At any rate, this friction law is another of those semiempirical laws that are not thoroughly understood, and in view of all the work that has been done it is surprising that more understanding of this phenomenon has not come about. At the present time, in fact, it is impossible even to estimate the coefficient of friction between two substances. It was pointed out above that attempts to measure $\mu$ by sliding pure substances such as copper on copper will lead to spurious results, because the surfaces in contact are not pure copper, but are mixtures of oxides and other impurities. If we try to get absolutely pure copper, if we clean and polish the surfaces, outgas the materials in a vacuum, and take every conceivable precaution, we still do not get $\mu$. For if we tilt the apparatus even to a vertical position, the slider will not fall off—the two pieces of copper stick together! The coefficient $\mu$, which is ordinarily less than unity for reasonably hard surfaces, becomes several times unity! The reason for this unexpected behavior is that when the atoms in contact are all of the same kind, there is no way for the atoms to “know” that they are in different pieces of copper. When there are other atoms, in the oxides and greases and more complicated thin surface layers of contaminants in between, the atoms “know” when they are not on the same part. When we consider that it is forces between atoms that hold the copper together as a solid, it should become clear that it is impossible to get the right coefficient of friction for pure metals. 
The same phenomenon can be observed in a simple home-made experiment with a flat glass plate and a glass tumbler. If the tumbler is placed on the plate and pulled along with a loop of string, it slides fairly well and one can feel the coefficient of friction; it is a little irregular, but it is a coefficient. If we now wet the glass plate and the bottom of the tumbler and pull again, we find that it binds, and if we look closely we shall find scratches, because the water is able to lift the grease and the other contaminants off the surface, and then we really have a glass-to-glass contact; this contact is so good that it holds tight and resists separation so much that the glass is torn apart; that is, it makes scratches. |
|
1 | 12 | Characteristics of Force | 3 | Molecular forces | We shall next discuss the characteristics of molecular forces. These are forces between the atoms, and are the ultimate origin of friction. Molecular forces have never been satisfactorily explained on a basis of classical physics; it takes quantum mechanics to understand them fully. Empirically, however, the force between atoms is illustrated schematically in Fig. 12–2, where the force $F$ between two atoms is plotted as a function of the distance $r$ between them. There are different cases: in the water molecule, for example, the negative charges sit more on the oxygen, and the mean positions of the negative charges and of the positive charges are not at the same point; consequently, another molecule nearby feels a relatively large force, which is called a dipole-dipole force. However, for many systems the charges are very much better balanced, in particular for oxygen gas, which is perfectly symmetrical. In this case, although the minus charges and the plus charges are dispersed over the molecule, the distribution is such that the center of the minus charges and the center of the plus charges coincide. A molecule where the centers do not coincide is called a polar molecule, and charge times the separation between centers is called the dipole moment. A nonpolar molecule is one in which the centers of the charges coincide. For all nonpolar molecules, in which all the electrical forces are neutralized, it nevertheless turns out that the force at very large distances is an attraction and varies inversely as the seventh power of the distance, or $F=k/r^7$, where $k$ is a constant that depends on the molecules. Why this is we shall learn only when we learn quantum mechanics. When there are dipoles the forces are greater. When atoms or molecules get too close they repel with a very large repulsion; that is what keeps us from falling through the floor! 
These molecular forces can be demonstrated in a fairly direct way: one of these is the friction experiment with a sliding glass tumbler; another is to take two very carefully ground and lapped surfaces which are very accurately flat, so that the surfaces can be brought very close together. An example of such surfaces is the Johansson blocks that are used in machine shops as standards for making accurate length measurements. If one such block is slid over another very carefully and the upper one is lifted, the other one will adhere and also be lifted by the molecular forces, exemplifying the direct attraction of the atoms on one block for the atoms on the other block. Nevertheless these molecular forces of attraction are still not fundamental in the sense that gravitation is fundamental; they are due to the vastly complex interactions of all the electrons and nuclei in one molecule with all the electrons and nuclei in another. Any simple-looking formula we get represents a summation of complications, so we still have not got the fundamental phenomena. Since the molecular forces attract at large distances and repel at short distances, as shown in Fig. 12–2, we can make up solids in which all the atoms are held together by their attractions and held apart by the repulsion that sets in when they are too close together. At a certain distance $d$ (where the graph in Fig. 12–2 crosses the axis) the forces are zero, which means that they are all balanced, so that the molecules stay that distance apart from one another. If the molecules are pushed closer together than the distance $d$ they all show a repulsion, represented by the portion of the graph above the $r$-axis. To push the molecules only slightly closer together requires a great force, because the molecular repulsion rapidly becomes very great at distances less than $d$. If the molecules are pulled slightly apart there is a slight attraction, which increases as the separation increases.
If they are pulled sufficiently hard, they will separate permanently—the bond is broken. If the molecules are pushed only a very small distance closer, or pulled only a very small distance farther than $d$, the corresponding distance along the curve of Fig. 12–2 is also very small, and can then be approximated by a straight line. Therefore, in many circumstances, if the displacement is not too great the force is proportional to the displacement. This principle is known as Hooke’s law, or the law of elasticity, which says that the force in a body which tries to restore the body to its original condition when it is distorted is proportional to the distortion. This law, of course, holds true only if the distortion is relatively small; when it gets too large the body will be torn apart or crushed, depending on the kind of distortion. The amount of force for which Hooke’s law is valid depends upon the material; for instance, for dough or putty the force is very small, but for steel it is relatively large. Hooke’s law can be nicely demonstrated with a long coil spring, made of steel and suspended vertically. A suitable weight hung on the lower end of the spring produces a tiny twist throughout the length of the wire, which results in a small vertical deflection in each turn and adds up to a large displacement if there are many turns. If the total elongation produced, say, by a $100$-gram weight, is measured, it is found that additional weights of $100$ grams will each produce an additional elongation that is very nearly equal to the stretch that was measured for the first $100$ grams. This constant ratio of force to displacement begins to change when the spring is overloaded, i.e., Hooke’s law no longer holds. |
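The shape of the curve in Fig. 12–2, and the Hooke's-law straightening near $d$, can be imitated with a toy force law. Note the hedge: the text specifies only the $1/r^7$ attraction at large distances; the $1/r^{13}$ repulsive term and the constants below are hypothetical, chosen just to give the right qualitative shape.

```python
A, B = 1.0, 1.0  # hypothetical strengths of the repulsion and attraction

def force(r):
    # Positive = repulsion, negative = attraction (toy model of Fig. 12-2).
    return A / r**13 - B / r**7

# The force crosses zero at the equilibrium spacing d = (A/B)**(1/6).
d = (A / B) ** (1.0 / 6.0)

# Near d the curve is nearly straight: F(d + x) ~ -k*x with k = 6*B/d**8,
# i.e. Hooke's law -- a restoring force proportional to the displacement.
k = 6.0 * B / d**8
x = 1e-4
print(abs(force(d + x) + k * x) < 0.01 * k * x)  # True: nearly linear
```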
|
1 | 12 | Characteristics of Force | 4 | Fundamental forces. Fields | We shall now discuss the only remaining forces that are fundamental. We call them fundamental in the sense that their laws are fundamentally simple. We shall first discuss electrical force. Objects carry electrical charges which consist simply of electrons or protons. If any two bodies are electrically charged, there is an electrical force between them, and if the magnitudes of the charges are $q_1$ and $q_2$, respectively, the force varies inversely as the square of the distance between the charges, or $F=(\text{const}) q_1q_2/r^2$. For unlike charges, this law is like the law of gravitation, but for like charges the force is repulsive and the sign (direction) is reversed. The charges $q_1$ and $q_2$ can be intrinsically either positive or negative, and in any specific application of the formula the direction of the force will come out right if the $q$’s are given the proper plus or minus sign; the force is directed along the line between the two charges. The constant in the formula depends, of course, upon what units are used for the force, the charge, and the distance. In current practice the charge is measured in coulombs, the distance in meters, and the force in newtons. Then, in order to get the force to come out properly in newtons, the constant (which for historical reasons is written $1/4\pi\epsO$) takes the numerical value \begin{equation*} \epsO=8.854\times10^{-12}\text{ coul}^2/\text{newton}\cdot\text{m}^2 \end{equation*} or \begin{equation*} 1/4\pi\epsO=8.99\times10^9\text{ N}\cdot\text{m}^2/\text{coul}^2. \end{equation*} Thus the force law for static charges is \begin{equation} \label{Eq:I:12:2} \FLPF=q_1q_2\FLPr/4\pi\epsO r^3. \end{equation} In nature, the most important charge of all is the charge on a single electron, which is $1.60\times10^{-19}$ coulomb. 
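The constants quoted above can be checked against one another in a few lines (a sketch in mks units, using the rounded values from the text).

```python
import math

EPS0 = 8.854e-12             # coul^2 / (newton * m^2), as quoted in the text
K = 1 / (4 * math.pi * EPS0)  # the constant 1/(4 pi eps0)
Q_EL = 1.60e-19              # charge on one electron, in coulombs

def coulomb_force(q1, q2, r):
    # Magnitude of the force between two static charges, eq. (12.2).
    return K * q1 * q2 / r**2

# 1/(4 pi eps0) should come out near the quoted 8.99e9 N m^2/coul^2 ...
print(K)  # ~8.99e9

# ... and the combination q_el^2/(4 pi eps0), which the text abbreviates
# as e^2, should come out near (1.52e-14)^2 in mks units.
e_squared = K * Q_EL**2
print(e_squared)  # ~2.3e-28
```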
In working with electrical forces between fundamental particles rather than with large charges, many people prefer the combination $(q_{\text{el}})^2/4\pi\epsO$, in which $q_{\text{el}}$ is defined as the charge on an electron. This combination occurs frequently, and to simplify calculations it has been defined by the symbol $e^2$; its numerical value in the mks system of units turns out to be $(1.52\times10^{-14})^2$. The advantage of using the constant in this form is that the force between two electrons in newtons can then be written simply as $e^2/r^2$, with $r$ in meters, without all the individual constants. Electrical forces are much more complicated than this simple formula indicates, since the formula gives the force between two objects only when the objects are standing still. We shall consider the more general case shortly. In the analysis of forces of the more fundamental kinds (not such forces as friction, but the electrical force or the gravitational force), an interesting and very important concept has been developed. Since at first sight the forces are very much more complicated than is indicated by the inverse-square laws and these laws hold true only when the interacting bodies are standing still, an improved method is needed to deal with the very complex forces that ensue when the bodies start to move in a complicated way. Experience has shown that an approach known as the concept of a “field” is of great utility for the analysis of forces of this type. To illustrate the idea for, say, electrical force, suppose we have two electrical charges, $q_1$ and $q_2$, located at points $P$ and $R$ respectively. Then the force between the charges is given by \begin{equation} \label{Eq:I:12:3} \FLPF=q_1q_2\FLPr/4\pi\epsO r^3. \end{equation} To analyze this force by means of the field concept, we say that the charge $q_1$ at $P$ produces a “condition” at $R$, such that when the charge $q_2$ is placed at $R$ it “feels” the force. 
This is one way, strange perhaps, of describing it; we say that the force $\FLPF$ on $q_2$ at $R$ can be written in two parts. It is $q_2$ multiplied by a quantity $\FLPE$ that would be there whether $q_2$ were there or not (provided we keep all the other charges in their right places). $\FLPE$ is the “condition” produced by $q_1$, we say, and $\FLPF$ is the response of $q_2$ to $\FLPE$. $\FLPE$ is called an electric field, and it is a vector. The formula for the electric field $\FLPE$ that is produced at $R$ by a charge $q_1$ at $P$ is the charge $q_1$ times the constant $1/4\pi\epsO$ divided by $r^2$ ($r$ is the distance from $P$ to $R$), and it is acting in the direction of the radius vector (the radius vector $\FLPr$ divided by its own length). The expression for $\FLPE$ is thus \begin{equation} \label{Eq:I:12:4} \FLPE=q_1\FLPr/4\pi\epsO r^3. \end{equation} We then write \begin{equation} \label{Eq:I:12:5} \FLPF=q_2\,\FLPE, \end{equation} which expresses the force, the field, and the charge in the field. What is the point of all this? The point is to divide the analysis into two parts. One part says that something produces a field. The other part says that something is acted on by the field. By allowing us to look at the two parts independently, this separation of the analysis simplifies the calculation of a problem in many situations. If many charges are present, we first work out the total electric field produced at $R$ by all the charges, and then, knowing the charge that is placed at $R$, we find the force on it. In the case of gravitation, we can do exactly the same thing. In this case, where the force $\FLPF=-Gm_1m_2\FLPr/r^3$, we can make an analogous analysis, as follows: the force on a body in a gravitational field is the mass of that body times the field $\FLPC$. The force on $m_2$ is the mass $m_2$ times the field $\FLPC$ produced by $m_1$; that is, $\FLPF=m_2\FLPC$. 
Then the field $\FLPC$ produced by a body of mass $m_1$ is $\FLPC=-Gm_1\FLPr/r^3$ and it is directed radially, as in the electrical case. In spite of how it might at first seem, this separation of one part from another is not a triviality. It would be trivial, just another way of writing the same thing, if the laws of force were simple, but the laws of force are so complicated that it turns out that the fields have a reality that is almost independent of the objects which create them. One can do something like shake a charge and produce an effect, a field, at a distance; if one then stops moving the charge, the field keeps track of all the past, because the interaction between two particles is not instantaneous. It is desirable to have some way to remember what happened previously. If the force upon some charge depends upon where another charge was yesterday, which it does, then we need machinery to keep track of what went on yesterday, and that is the character of a field. So when the forces get more complicated, the field becomes more and more real, and this technique becomes less and less of an artificial separation. In analyzing forces by the use of fields, we need two kinds of laws pertaining to fields. The first is the response to a field, and that gives the equations of motion. For example, the law of response of a mass to a gravitational field is that the force is equal to the mass times the gravitational field; or, if there is also a charge on the body, the response of the charge to the electric field equals the charge times the electric field. The second part of the analysis of nature in these situations is to formulate the laws which determine the strength of the field and how it is produced. These laws are sometimes called the field equations. We shall learn more about them in due time, but shall write down a few things about them now. 
First, the most remarkable fact of all, which is true exactly and which can be easily understood, is that the total electric field produced by a number of sources is the vector sum of the electric fields produced by the first source, the second source, and so on. In other words, if we have numerous charges making a field, and if all by itself one of them would make the field $\FLPE_1$, another would make the field $\FLPE_2$, and so on, then we merely add the vectors to get the total field. This principle can be expressed as \begin{equation} \label{Eq:I:12:6} \FLPE=\FLPE_1+\FLPE_2+\FLPE_3+\dotsb \end{equation} or, in view of the definition given above, \begin{equation} \label{Eq:I:12:7} \FLPE=\sum_i\frac{q_i\FLPr_i}{4\pi\epsO r_i^3}. \end{equation} Can the same methods be applied to gravitation? The force between two masses $m_1$ and $m_2$ was expressed by Newton as $\FLPF=-Gm_1m_2\FLPr/r^3$. But according to the field concept, we may say that $m_1$ creates a field $\FLPC$ in all the surrounding space, such that the force on $m_2$ is given by \begin{equation} \label{Eq:I:12:8} \FLPF=m_2\FLPC. \end{equation} By complete analogy with the electrical case, \begin{equation} \label{Eq:I:12:9} \FLPC_i=-Gm_i\FLPr_i/r_i^3 \end{equation} and the gravitational field produced by several masses is \begin{equation} \label{Eq:I:12:10} \FLPC=\FLPC_1+\FLPC_2+\FLPC_3+\dotsb \end{equation} In Chapter 9, in working out a case of planetary motion, we used this principle in essence. We simply added all the force vectors to get the resultant force on a planet. If we divide out the mass of the planet in question, we get Eq. (12.10). Equations (12.6) and (12.10) express what is known as the principle of superposition of fields. This principle states that the total field due to all the sources is the sum of the fields due to each source. 
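Equations (12.4), (12.5), and (12.7) fit together as a two-step recipe: sum the field vectors from all the sources, then multiply by the charge placed in the field. A minimal sketch (the charge values and positions are invented):

```python
import math

EPS0 = 8.854e-12  # coul^2 / (newton * m^2)

def field_of_charge(q, source, point):
    # Eq. (12.4): E = q r / (4 pi eps0 r^3), with r pointing from source to point.
    r = [p - s for p, s in zip(point, source)]
    r_len = math.sqrt(sum(c * c for c in r))
    factor = q / (4 * math.pi * EPS0 * r_len**3)
    return [factor * c for c in r]

def total_field(charges, point):
    # Eq. (12.7): superposition -- add the field vectors component by component.
    E = [0.0, 0.0, 0.0]
    for q, pos in charges:
        for i, c in enumerate(field_of_charge(q, pos, point)):
            E[i] += c
    return E

def force_on(q, E):
    # Eq. (12.5): the response of a charge placed in the field.
    return [q * c for c in E]

# Two equal charges straddling the origin: by symmetry their fields
# cancel there, so a test charge at the origin feels no force.
charges = [(1e-9, (-1.0, 0.0, 0.0)), (1e-9, (1.0, 0.0, 0.0))]
print(total_field(charges, (0.0, 0.0, 0.0)))  # ~[0, 0, 0]
```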
So far as we know today, for electricity this is an absolutely guaranteed law, which is true even when the force law is complicated because of the motions of the charges. There are apparent violations, but more careful analysis has always shown these to be due to the overlooking of certain moving charges. However, although the principle of superposition applies exactly for electrical forces, it is not exact for gravity if the field is too strong, and Newton’s equation (12.10) is only approximate, according to Einstein’s gravitational theory. Closely related to electrical force is another kind, called magnetic force, and this too is analyzed in terms of a field. Some of the qualitative relations between electrical and magnetic forces can be illustrated by an experiment with an electron-ray tube (Fig. 12–3). At one end of such a tube is a source that emits a stream of electrons. Within the tube are arrangements for accelerating the electrons to a high speed and sending some of them in a narrow beam to a fluorescent screen at the other end of the tube. A spot of light glows in the center of the screen where the electrons strike, and this enables us to trace the electron path. On the way to the screen the electron beam passes through a narrow space between a pair of parallel metal plates, which are arranged, say, horizontally. A voltage can be applied across the plates, so that either plate can be made negative at will. When such a voltage is present, there is an electric field between the plates. The first part of the experiment is to apply a negative voltage to the lower plate, which means that extra electrons have been placed on the lower plate. Since like charges repel, the light spot on the screen instantly shifts upward. (We could also say this in another way—that the electrons “felt” the field, and responded by deflecting upward.) We next reverse the voltage, making the upper plate negative. 
The light spot on the screen now jumps below the center, showing that the electrons in the beam were repelled by those in the plate above them. (Or we could say again that the electrons had “responded” to the field, which is now in the reverse direction.) The second part of the experiment is to disconnect the voltage from the plates and test the effect of a magnetic field on the electron beam. This is done by means of a horseshoe magnet, whose poles are far enough apart to more or less straddle the tube. Suppose we hold the magnet below the tube in the same orientation as the letter U, with its poles up and part of the tube in between. We note that the light spot is deflected, say, upward, as the magnet approaches the tube from below. So it appears that the magnet repels the electron beam. However, it is not that simple, for if we invert the magnet without reversing the poles side-for-side, and now approach the tube from above, the spot still moves upward, so the electron beam is not repelled; instead, it appears to be attracted this time. Now we start again, restoring the magnet to its original U orientation and holding it below the tube, as before. Yes, the spot is still deflected upward; but now turn the magnet $180$ degrees around a vertical axis, so that it is still in the U position but the poles are reversed side-for-side. Behold, the spot now jumps downward, and stays down, even if we invert the magnet and approach from above, as before. To understand this peculiar behavior, we have to have a new combination of forces. We explain it thus: Across the magnet from one pole to the other there is a magnetic field. This field has a direction which is always away from one particular pole (which we could mark) and toward the other. Inverting the magnet did not change the direction of the field, but reversing the poles side-for-side did reverse its direction. 
For example, if the electron velocity were horizontal in the $x$-direction and the magnetic field were also horizontal but in the $y$-direction, the magnetic force on the moving electrons would be in the $z$-direction, i.e., up or down, depending on whether the field was in the positive or negative $y$-direction. Although we shall not at the present time give the correct law of force between charges moving in an arbitrary manner, one relative to the other, because it is too complicated, we shall give one aspect of it: the complete law of the forces if the fields are known. The force on a charged object depends upon its motion; if, when the object is standing still at a given place, there is some force, this is taken to be proportional to the charge, the coefficient being what we call the electric field. When the object moves the force may be different, and the correction, the new “piece” of force, turns out to be dependent exactly linearly on the velocity, but at right angles to $\FLPv$ and to another vector quantity which we call the magnetic induction $\FLPB$. If the components of the electric field $\FLPE$ and the magnetic induction $\FLPB$ are, respectively, $(E_x,E_y,E_z)$ and ($B_x,B_y,B_z)$, and if the velocity $\FLPv$ has the components ($v_x,v_y,v_z)$, then the total electric and magnetic force on a moving charge $q$ has the components \begin{equation} \begin{alignedat}{7} &F_x&&=q(E_x&&+v_y&&B_z&&-v_z&&B_y&&),\\[.5ex] &F_y&&=q(E_y&&+v_z&&B_x&&-v_x&&B_z&&),\\[.5ex] &F_z&&=q(E_z&&+v_x&&B_y&&-v_y&&B_x&&). \end{alignedat} \label{Eq:I:12:11} \end{equation} If, for instance, the only component of the magnetic field were $B_y$ and the only component of the velocity were $v_x$, then the only term left in the magnetic force would be a force in the $z$-direction, at right angles to both $\FLPB$ and $\FLPv$. |
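Equation (12.11) is compact enough to transcribe directly; the closing example (velocity along $x$, magnetic field along $y$, no electric field) then falls out as a force purely along $z$.

```python
def lorentz_force(q, E, v, B):
    # Eq. (12.11): F = q(E + v x B), written out component by component.
    Ex, Ey, Ez = E
    vx, vy, vz = v
    Bx, By, Bz = B
    return (q * (Ex + vy * Bz - vz * By),
            q * (Ey + vz * Bx - vx * Bz),
            q * (Ez + vx * By - vy * Bx))

# Velocity along x, field B along y, no electric field: only the
# q*vx*By term survives -- a force along z, at right angles to v and B.
print(lorentz_force(1.0, (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 3.0, 0.0)))
# (0.0, 0.0, 6.0)
```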
|
1 | 12 | Characteristics of Force | 5 | Pseudo forces | The next kind of force we shall discuss might be called a pseudo force. In Chapter 11 we discussed the relationship between two people, Joe and Moe, who use different coordinate systems. Let us suppose that the positions of a particle as measured by Joe are $x$ and by Moe are $x'$; then the laws are as follows: \begin{equation*} x=x'+s,\quad y=y',\quad z=z', \end{equation*} where $s$ is the displacement of Moe’s system relative to Joe’s. If we suppose that the laws of motion are correct for Joe, how do they look for Moe? We find first, that \begin{equation*} dx/dt=dx'/dt+ds/dt. \end{equation*} Previously, we considered the case where $s$ was constant, and we found that $s$ made no difference in the laws of motion, since $ds/dt = 0$; ultimately, therefore, the laws of physics were the same in both systems. But another case we can take is that $s = ut$, where $u$ is a uniform velocity in a straight line. Then $s$ is not constant, and $ds/dt$ is not zero, but is $u$, a constant. However, the acceleration $d^2x/dt^2$ is still the same as $d^2x'/dt^2$, because $du/dt = 0$. This proves the law that we used in Chapter 10, namely, that if we move in a straight line with uniform velocity the laws of physics will look the same to us as when we are standing still. That is the Galilean transformation. But we wish to discuss the interesting case where $s$ is still more complicated, say $s = at^2/2$. Then $ds/dt = at$ and $d^2s/dt^2 = a$, a uniform acceleration; or in a still more complicated case, the acceleration might be a function of time. This means that although the laws of motion from the point of view of Joe would look like \begin{equation*} m\,\frac{d^2x}{dt^2}=F_x, \end{equation*} the laws of motion as looked upon by Moe would appear as \begin{equation*} m\,\frac{d^2x'}{dt^2}=F_{x'}=F_x-ma. 
\end{equation*} That is, since Moe’s coordinate system is accelerating with respect to Joe’s, the extra term $ma$ comes in, and Moe will have to correct his forces by that amount in order to get Newton’s laws to work. In other words, here is an apparent, mysterious new force of unknown origin which arises, of course, because Moe has the wrong coordinate system. This is an example of a pseudo force; other examples occur in coordinate systems that are rotating. Another example of pseudo force is what is often called “centrifugal force.” An observer in a rotating coordinate system, e.g., in a rotating box, will find mysterious forces, not accounted for by any known origin of force, throwing things outward toward the walls. These forces are due merely to the fact that the observer does not have Newton’s coordinate system, which is the simplest coordinate system. Pseudo force can be illustrated by an interesting experiment in which we push a jar of water along a table, with acceleration. Gravity, of course, acts downward on the water, but because of the horizontal acceleration there is also a pseudo force acting horizontally and in a direction opposite to the acceleration. The resultant of gravity and pseudo force makes an angle with the vertical, and during the acceleration the surface of the water will be perpendicular to the resultant force, i.e., inclined at an angle with the table, with the water standing higher in the rearward side of the jar. When the push on the jar stops and the jar decelerates because of friction, the pseudo force is reversed, and the water stands higher in the forward side of the jar (Fig. 12–4). One very important feature of pseudo forces is that they are always proportional to the masses; the same is true of gravity. The possibility exists, therefore, that gravity itself is a pseudo force. Is it not possible that perhaps gravitation is due simply to the fact that we do not have the right coordinate system? 
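The extra $-ma$ term can also be exhibited numerically. The sketch below (with arbitrary values for the frame's acceleration and the particle's initial conditions) follows a force-free particle in Joe's inertial frame, transforms to Moe's coordinates with $s = at^2/2$, and differentiates twice; Moe finds an acceleration $-a$, which he would attribute to a pseudo force $-ma$, although Joe sees no force at all:

```python
# Sketch: a force-free particle seen from Moe's accelerating frame.
# In Joe's (inertial) frame, x(t) = x0 + v0*t with no force acting. Moe's
# frame is displaced by s(t) = a*t^2/2, so Moe measures x'(t) = x(t) - s(t).
# Differentiating x' twice numerically gives -a: a pseudo acceleration.
# The values of a, v0, x0 are arbitrary illustrative choices.

a = 2.0            # acceleration of Moe's frame
v0, x0 = 3.0, 1.0  # initial conditions in Joe's frame
dt = 1e-3

def x(t):  return x0 + v0 * t        # Joe's coordinate: free motion
def s(t):  return 0.5 * a * t * t    # displacement of Moe's frame
def xp(t): return x(t) - s(t)        # Moe's coordinate

t = 0.7
accel_moe = (xp(t + dt) - 2 * xp(t) + xp(t - dt)) / dt**2  # central difference
print(accel_moe)   # close to -2.0 = -a, though Joe sees zero force
```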
After all, we can always get a force proportional to the mass if we imagine that a body is accelerating. For instance, a man shut up in a box that is standing still on the earth finds himself held to the floor of the box with a certain force that is proportional to his mass. But if there were no earth at all and the box were standing still, the man inside would float in space. On the other hand, if there were no earth at all and something were pulling the box along with an acceleration $g$, then the man in the box, analyzing physics, would find a pseudo force which would pull him to the floor, just as gravity does. Einstein put forward the famous hypothesis that accelerations give an imitation of gravitation, that the forces of acceleration (the pseudo forces) cannot be distinguished from those of gravity; it is not possible to tell how much of a given force is gravity and how much is pseudo force. It might seem all right to consider gravity to be a pseudo force, to say that we are all held down because we are accelerating upward, but how about the people in Madagascar, on the other side of the earth—are they accelerating too? Einstein found that gravity could be considered a pseudo force only at one point at a time, and was led by his considerations to suggest that the geometry of the world is more complicated than ordinary Euclidean geometry. The present discussion is only qualitative, and does not pretend to convey anything more than the general idea. To give a rough idea of how gravitation could be the result of pseudo forces, we present an illustration which is purely geometrical and does not represent the real situation. Suppose that we all lived in two dimensions, and knew nothing of a third. We think we are on a plane, but suppose we are really on the surface of a sphere. And suppose that we shoot an object along the ground, with no forces on it. Where will it go? 
It will appear to go in a straight line, but it has to remain on the surface of a sphere, where the shortest distance between two points is along a great circle; so it goes along a great circle. If we shoot another object similarly, but in another direction, it goes along another great circle. Because we think we are on a plane, we expect that these two bodies will continue to diverge linearly with time, but careful observation will show that if they go far enough they move closer together again, as though they were attracting each other. But they are not attracting each other—there is just something “weird” about this geometry. This particular illustration does not describe correctly the way in which Einstein’s geometry is “weird,” but it illustrates that if we distort the geometry sufficiently it is possible that all gravitation is related in some way to pseudo forces; that is the general idea of the Einsteinian theory of gravitation.
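This convergence of great circles is easy to exhibit with a little spherical trigonometry. A sketch on a unit sphere, assuming two particles launched due north from the equator along meridians ten degrees apart (every meridian is a great circle, hence a geodesic):

```python
import math

# Two "free" particles on a unit sphere, launched northward from the equator
# along neighboring meridians. On a plane they would keep a fixed separation;
# on the sphere the great-circle distance between them steadily shrinks.

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Arc length between two points on a unit sphere (angles in radians)."""
    c = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against roundoff

dlon = math.radians(10.0)   # the meridians are 10 degrees apart
for lat_deg in (0, 30, 60, 85):
    lat = math.radians(lat_deg)
    print(lat_deg, round(great_circle_distance(lat, 0.0, lat, dlon), 4))
# The separation decreases steadily, as though the particles attracted
# each other, and vanishes entirely at the pole.
```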
12–6 Nuclear forces

We conclude this chapter with a brief discussion of the only other known forces, which are called nuclear forces. These forces are within the nuclei of atoms, and although they are much discussed, no one has ever calculated the force between two nuclei, and indeed at present there is no known law for nuclear forces. These forces have a very tiny range which is just about the same as the size of the nucleus, perhaps $10^{-13}$ centimeter. With particles so small and at such a tiny distance, only the quantum-mechanical laws are valid, not the Newtonian laws. In nuclear analysis we no longer think in terms of forces, and in fact we can replace the force concept with a concept of the energy of interaction of two particles, a subject that will be discussed later. Any formula that can be written for nuclear forces is a rather crude approximation which omits many complications; one might be somewhat as follows: forces within a nucleus do not vary inversely as the square of the distance, but die off exponentially over a certain distance $r$, as expressed by $F=(1/r^2)\exp(-r/r_0)$, where the distance $r_0$ is of the order of $10^{-13}$ centimeter. In other words, the forces disappear as soon as the particles are any great distance apart, although they are very strong within the $10^{-13}$ centimeter range. So far as they are understood today, the laws of nuclear force are very complex; we do not understand them in any simple way, and the whole problem of analyzing the fundamental machinery behind nuclear forces is unsolved. Attempts at a solution have led to the discovery of numerous strange particles, the $\pi$-mesons, for example, but the origin of these forces remains obscure.
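To get a rough feeling for how sharply such an exponential cutoff acts, one can compare the quoted form $F=(1/r^2)e^{-r/r_0}$ with a bare inverse-square force at the same separation; the ratio is just $e^{-r/r_0}$. A quick sketch, measuring distance in units of $r_0$:

```python
import math

# Ratio of the exponentially cut-off force F = (1/r^2) exp(-r/r0) to a bare
# inverse-square force at the same r, working in units of r0 (~1e-13 cm).

def ratio(r_over_r0):
    """Nuclear-type force divided by a 1/r^2 force at the same distance."""
    return math.exp(-r_over_r0)

for x in (1, 2, 5, 10):
    print(x, ratio(x))
# By r = 10 r0 the force is down by a factor e^-10, about 5e-5: it has
# effectively disappeared once the particles are more than a few r0 apart.
```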
13 Work and Potential Energy (A)

13–1 Energy of a falling body

In Chapter 4 we discussed the conservation of energy. In that discussion, we did not use Newton’s laws, but it is, of course, of great interest to see how it comes about that energy is in fact conserved in accordance with these laws. For clarity we shall start with the simplest possible example, and then develop harder and harder examples. The simplest example of the conservation of energy is a vertically falling object, one that moves only in a vertical direction. An object which changes its height under the influence of gravity alone has a kinetic energy $T$ (or K.E.) due to its motion during the fall, and a potential energy $mgh$, abbreviated $U$ (or P.E.), whose sum is constant: \begin{equation} \underset{\text{K.E.}}{\tfrac{1}{2}mv^2}+ \underset{\text{P.E.}}{\vphantom{\tfrac{1}{2}}mgh}=\text{const},\notag \end{equation} or \begin{equation} \label{Eq:I:13:1} T+U=\text{const}. \end{equation} Now we would like to show that this statement is true. What do we mean, show it is true? From Newton’s Second Law we can easily tell how the object moves, and it is easy to find out how the velocity varies with time, namely, that it increases proportionally with the time, and that the height varies as the square of the time. So if we measure the height from a zero point where the object is stationary, it is no miracle that the height turns out to be equal to the square of the velocity times a number of constants. However, let us look at it a little more closely. Let us find out directly from Newton’s Second Law how the kinetic energy should change, by taking the derivative of the kinetic energy with respect to time and then using Newton’s laws. When we differentiate $\tfrac{1}{2}mv^2$ with respect to time, we obtain \begin{equation} \label{Eq:I:13:2} \ddt{T}{t}=\ddt{}{t}\,(\tfrac{1}{2}mv^2)= \tfrac{1}{2}m2v\,\ddt{v}{t}=mv\,\ddt{v}{t}, \end{equation} since $m$ is assumed constant.
But from Newton’s Second Law, $m(dv/dt)=F$, so that \begin{equation} \label{Eq:I:13:3} dT/dt = Fv. \end{equation} In general, it will come out to be $\FLPF\cdot\FLPv$, but in our one-dimensional case let us leave it as the force times the velocity. Now in our simple example the force is constant, equal to $-mg$, a vertical force (the minus sign means that it acts downward), and the velocity, of course, is the rate of change of the vertical position, or height $h$, with time. Thus the rate of change of the kinetic energy is $-mg(dh/dt)$, which quantity, miracle of miracles, is minus the rate of change of something else! It is minus the time rate of change of $mgh$! Therefore, as time goes on, the changes in kinetic energy and in the quantity $mgh$ are equal and opposite, so that the sum of the two quantities remains constant. Q.E.D. We have shown, from Newton’s second law of motion, that energy is conserved for constant forces when we add the potential energy $mgh$ to the kinetic energy $\tfrac{1}{2}mv^2$. Now let us look into this further and see whether it can be generalized, and thus advance our understanding. Does it work only for a freely falling body, or is it more general? We expect from our discussion of the conservation of energy that it would work for an object moving from one point to another in some kind of frictionless curve, under the influence of gravity (Fig. 13–1). If the object reaches a certain height $h$ from the original height $H$, then the same formula should again be right, even though the velocity is now in some direction other than the vertical. We would like to understand why the law is still correct. Let us follow the same analysis, finding the time rate of change of the kinetic energy. This will again be $mv(dv/dt)$, but $m(dv/dt)$ is the rate of change of the magnitude of the momentum, i.e., the force in the direction of motion—the tangential force $F_t$. Thus \begin{equation*} \ddt{T}{t}=mv\,\ddt{v}{t}=F_tv. 
\end{equation*} Now the speed is the rate of change of distance along the curve, $ds/dt$, and the tangential force $F_t$ is not $-mg$ but is weaker by the ratio of the vertical distance $dh$ to the distance $ds$ along the path. In other words, \begin{equation*} F_t=-mg\sin\theta=-mg\,\ddt{h}{s}, \end{equation*} so that \begin{equation*} F_t\,\ddt{s}{t}=-mg\biggl(\ddt{h}{s}\biggr) \biggl(\ddt{s}{t}\biggr)=-mg\,\ddt{h}{t}, \end{equation*} since the $ds$’s cancel. Thus we get $-mg(dh/dt)$, which is equal to the rate of change of $-mgh$, as before. In order to understand exactly how the conservation of energy works in general in mechanics, we shall now discuss a number of concepts which will help us to analyze it. First, we discuss the rate of change of kinetic energy in general in three dimensions. The kinetic energy in three dimensions is \begin{equation*} T=\tfrac{1}{2}m(v_x^2+v_y^2+v_z^2). \end{equation*} When we differentiate this with respect to time, we get three terrifying terms: \begin{equation} \label{Eq:I:13:4} \ddt{T}{t}=m\biggl( v_x\,\ddt{v_x}{t}+ v_y\,\ddt{v_y}{t}+ v_z\,\ddt{v_z}{t} \biggr). \end{equation} But $m(dv_x/dt)$ is the force $F_x$ acting on the object in the $x$-direction. Thus the right side of Eq. (13.4) is $F_xv_x + F_yv_y + F_zv_z$. We recall our vector analysis and recognize this as $\FLPF\cdot\FLPv$; therefore \begin{equation} \label{Eq:I:13:5} dT/dt=\FLPF\cdot\FLPv. \end{equation} This result can be derived more quickly as follows: if $\FLPa$ and $\FLPb$ are two vectors, both of which may depend upon the time, the derivative of $\FLPa\cdot\FLPb$ is, in general, \begin{equation} \label{Eq:I:13:6} d(\FLPa\cdot\FLPb)/dt=\FLPa\cdot(d\FLPb/dt)+(d\FLPa/dt)\cdot\FLPb. \end{equation} We then use this in the form $\FLPa =$ $\FLPb =$ $\FLPv$: \begin{equation} \label{Eq:I:13:7} \ddt{(\tfrac{1}{2}mv^2)}{t}=\ddt{(\tfrac{1}{2}m\FLPv\cdot\FLPv)}{t}= m\,\ddt{\FLPv}{t}\cdot\FLPv=\FLPF\cdot\FLPv=\FLPF\cdot\ddt{\FLPs}{t}. \end{equation}
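The theorem $dT/dt=\FLPF\cdot\FLPv$ is easy to verify numerically for a concrete motion. A sketch for a projectile moving in three dimensions under a uniform downward force $(0,0,-mg)$, with arbitrary values for the mass and initial velocity:

```python
# Numerical check of dT/dt = F . v (Eq. 13.5) for a projectile in a uniform
# gravitational force (0, 0, -mg). The values of m, g, and the initial
# velocity are arbitrary illustrative numbers.

m, g = 2.0, 9.8
v0 = (3.0, 4.0, 5.0)
F = (0.0, 0.0, -m * g)

def v(t):
    """Velocity at time t: only the z-component changes."""
    return (v0[0], v0[1], v0[2] - g * t)

def T(t):
    """Kinetic energy (1/2) m (vx^2 + vy^2 + vz^2) at time t."""
    vx, vy, vz = v(t)
    return 0.5 * m * (vx**2 + vy**2 + vz**2)

t, dt = 1.3, 1e-6
dT_dt = (T(t + dt) - T(t - dt)) / (2 * dt)        # numerical derivative of T
power = sum(Fi * vi for Fi, vi in zip(F, v(t)))   # F . v
print(dT_dt, power)   # the two agree
```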
Because the concepts of kinetic energy, and energy in general, are so important, various names have been given to the important terms in equations such as these. $\tfrac{1}{2}mv^2$ is, as we know, called kinetic energy. $\FLPF\cdot\FLPv$ is called power: the force acting on an object times the velocity of the object (vector “dot” product) is the power being delivered to the object by that force. We thus have a marvelous theorem: the rate of change of kinetic energy of an object is equal to the power expended by the forces acting on it. However, to study the conservation of energy, we want to analyze this still more closely. Let us evaluate the change in kinetic energy in a very short time $dt$. If we multiply both sides of Eq. (13.7) by $dt$, we find that the differential change in the kinetic energy is the force “dot” the differential distance moved: \begin{equation} \label{Eq:I:13:8} dT=\FLPF\cdot d\FLPs. \end{equation} If we now integrate, we get \begin{equation} \label{Eq:I:13:9} \Delta T=\int_1^2\FLPF\cdot d\FLPs. \end{equation} What does this mean? It means that if an object is moving in any way under the influence of a force, moving in some kind of curved path, then the change in K.E. when it goes from one point to another along the curve is equal to the integral of the component of the force along the curve times the differential displacement $ds$, the integral being carried out from one point to the other. This integral also has a name; it is called the work done by the force on the object. We see immediately that power equals work done per second. We also see that it is only a component of force in the direction of motion that contributes to the work done. In our simple example the forces were only vertical, and had only a single component, say $F_z$, equal to $-mg$. 
No matter how the object moves in those circumstances, falling in a parabola for example, $\FLPF\cdot\FLPs$, which can be written as $F_x\,dx + F_y\,dy + F_z\,dz$, has nothing left of it but $F_z\,dz = -mg\,dz$, because the other components of force are zero. Therefore, in our simple case, \begin{equation} \label{Eq:I:13:10} \int_1^2\FLPF\cdot d\FLPs=\int_{z_1}^{z_2}-mg\,dz=-mg(z_2-z_1), \end{equation} so again we find that it is only the vertical height from which the object falls that counts toward the potential energy. A word about units. Since forces are measured in newtons, and we multiply by a distance in order to obtain work, work is measured in newton${}\cdot{}$meters (N${}\cdot{}$m), but people do not like to say newton-meters, they prefer to say joules (J). A newton-meter is called a joule; work is measured in joules. Power, then, is joules per second, and that is also called a watt (W). If we multiply watts by time, the result is the work done. The work done by the electrical company in our houses, technically, is equal to the watts times the time. That is where we get things like kilowatt hours, $1000$ watts times $3600$ seconds, or $3.6\times10^6$ joules. Now we take another example of the law of conservation of energy. Consider an object which initially has kinetic energy and is moving very fast, and which slides against the floor with friction. It stops. At the start the kinetic energy is not zero, but at the end it is zero; there is work done by the forces, because whenever there is friction there is always a component of force in a direction opposite to that of the motion, and so energy is steadily lost. But now let us take a mass on the end of a pivot swinging in a vertical plane in a gravitational field with no friction. What happens here is different, because when the mass is going up the force is downward, and when it is coming down, the force is also downward. Thus $\FLPF\cdot d\FLPs$ has one sign going up and another sign coming down. 
At each corresponding point of the downward and upward paths the values of $\FLPF\cdot d\FLPs$ are exactly equal in size but of opposite sign, so the net result of the integral will be zero for this case. Thus the kinetic energy with which the mass comes back to the bottom is the same as it had when it left; that is the principle of the conservation of energy. (Note that when there are friction forces the conservation of energy seems at first sight to be invalid. We have to find another form of energy. It turns out, in fact, that heat is generated in an object when it rubs another with friction, but at the moment we supposedly do not know that.)
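The cancellation for the pendulum can be made concrete by doing the sum explicitly. A sketch (with arbitrary mass, length, and swing angle) that accumulates $\FLPF\cdot d\FLPs$ in small steps along the circular arc, on the way up and then back down:

```python
import math

# The pendulum argument as an explicit sum: carry the mass up a circular arc
# and back down, accumulating F . ds for gravity at each small step. The
# contributions going up cancel those coming down, so the round trip does
# (numerically) zero net work. Values of m, g, L, theta_max are arbitrary.

m, g, L = 1.0, 9.8, 2.0
N = 100_000
theta_max = 1.0   # swing angle in radians

def work(theta_a, theta_b):
    """Work done by gravity along the arc from theta_a to theta_b."""
    W = 0.0
    dtheta = (theta_b - theta_a) / N
    for k in range(N):
        theta = theta_a + (k + 0.5) * dtheta   # midpoint of each step
        # step length is L*dtheta; gravity's component along the direction
        # of increasing theta is -m*g*sin(theta)
        W += -m * g * math.sin(theta) * L * dtheta
    return W

W_up = work(0.0, theta_max)
W_down = work(theta_max, 0.0)
print(W_up, W_down, W_up + W_down)   # equal and opposite; sum is ~0
```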
13–2 Work done by gravity

The next problem to be discussed is much more difficult than the above; it has to do with the case when the forces are not constant, or simply vertical, as they were in the cases we have worked out. We want to consider a planet, for example, moving around the sun, or a satellite in the space around the earth. We shall first consider the motion of an object which starts at some point $1$ and falls, say, directly toward the sun or toward the earth (Fig. 13–2). Will there be a law of conservation of energy in these circumstances? The only difference is that in this case, the force is changing as we go along, it is not just a constant. As we know, the force is $-GM/r^2$ times the mass $m$, where $m$ is the mass that moves. Now certainly when a body falls toward the earth, the kinetic energy increases as the distance fallen increases, just as it does when we do not worry about the variation of force with height. The question is whether it is possible to find another formula for potential energy different from $mgh$, a different function of distance away from the earth, so that conservation of energy will still be true. This one-dimensional case is easy to treat because we know that the change in the kinetic energy is equal to the integral, from one end of the motion to the other, of $-GMm/r^2$ times the displacement $dr$: \begin{equation} \label{Eq:I:13:11} T_2-T_1=-\int_1^2GMm\,\frac{dr}{r^2}. \end{equation} There are no cosines needed for this case because the force and the displacement are in the same direction. It is easy to integrate $dr/r^2$; the result is $-1/r$, so Eq. (13.11) becomes \begin{equation} \label{Eq:I:13:12} T_2-T_1=+GMm\biggl(\frac{1}{r_2}-\frac{1}{r_1}\biggr). \end{equation} Thus we have a different formula for potential energy.
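The integration leading to Eq. (13.12) can be checked by brute force: chop the fall from $r_1$ to $r_2$ into small steps and add up $-GMm\,dr/r^2$. A sketch with arbitrary values for $GMm$ and the radii:

```python
# Numerical check of Eq. (13.12): sum -GMm/r^2 dr in small steps as the body
# falls inward from r1 to r2, and compare with GMm(1/r2 - 1/r1). The value
# of GMm and the two radii are arbitrary illustrative numbers.

GMm = 5.0
r1, r2 = 10.0, 2.0
N = 100_000

total = 0.0
dr = (r2 - r1) / N            # negative: the body moves inward
for k in range(N):
    r = r1 + (k + 0.5) * dr   # midpoint of each little step
    total += -GMm / r**2 * dr

exact = GMm * (1.0 / r2 - 1.0 / r1)
print(total, exact)           # the sum reproduces the formula
```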
Equation (13.12) tells us that the quantity $(\tfrac{1}{2}mv^2 - GMm/r)$ calculated at point $1$, at point $2$, or at any other place, has a constant value. We now have the formula for the potential energy in a gravitational field for vertical motion. Now we have an interesting problem. Can we make perpetual motion in a gravitational field? The gravitational field varies; in different places it is in different directions and has different strengths. Could we do something like this, using a fixed, frictionless track: start at some point and lift an object out to some other point, then move it around an arc to a third point, then lower it a certain distance, then move it in at a certain slope and pull it out some other way, so that when we bring it back to the starting point, a certain amount of work has been done by the gravitational force, and the kinetic energy of the object is increased? Can we design the curve so that it comes back moving a little bit faster than it did before, so that it goes around and around and around, and gives us perpetual motion? Since perpetual motion is impossible, we ought to find out that this is also impossible. We ought to discover the following proposition: since there is no friction the object should come back with neither higher nor lower velocity—it should be able to keep going around and around any closed path. Stated in another way, the total work done in going around a complete cycle should be zero for gravity forces, because if it is not zero we can get energy out by going around. (If the work turns out to be less than zero, so that we get less speed when we go around one way, then we merely go around the other way, because the forces, of course, depend only upon the position, not upon the direction; if one way is plus, the other way would be minus, so unless it is zero we will get perpetual motion by going around either way.) Is the work really zero? Let us try to demonstrate that it is. 
First we shall explain more or less why it is zero, and then we shall examine it a little better mathematically. Suppose that we use a simple path such as that shown in Fig. 13–3, in which a small mass is carried from point $1$ to point $2$, and then is made to go around a circle to $3$, back to $4$, then to $5$, $6$, $7$, and $8$, and finally back to $1$. All of the lines are either purely radial or circular, with $M$ as the center. How much work is done in carrying $m$ around this path? Between points $1$ and $2$, it is $GMm$ times the difference of $1/r$ between these two points: \begin{equation*} W_{12}=\int_1^2\FLPF\cdot d\FLPs=\int_1^2-GMm\,\frac{dr}{r^2}= GMm\biggl(\frac{1}{r_2}-\frac{1}{r_1}\biggr). \end{equation*} From $2$ to $3$ the force is exactly at right angles to the curve, so that $W_{23}\equiv0$. The work from $3$ to $4$ is \begin{equation*} W_{34}=\int_3^4\FLPF\cdot d\FLPs= GMm\biggl(\frac{1}{r_4}-\frac{1}{r_3}\biggr). \end{equation*} In the same fashion, we find that $W_{45} = 0$, $W_{56} =GMm(1/r_6 - 1/r_5)$, $W_{67} = 0$, $W_{78} =GMm(1/r_8 - 1/r_7)$, and $W_{81} = 0$. Thus \begin{equation*} W=GMm\biggl( \frac{1}{r_2}-\frac{1}{r_1}+ \frac{1}{r_4}-\frac{1}{r_3}+ \frac{1}{r_6}-\frac{1}{r_5}+ \frac{1}{r_8}-\frac{1}{r_7} \biggr). \end{equation*} But we note that $r_2 = r_3$, $r_4 = r_5$, $r_6 = r_7$, and $r_8 = r_1$. Therefore $W=0$. Of course we may wonder whether this is too trivial a curve. What if we use a real curve? Let us try it on a real curve. First of all, we might like to assert that a real curve could always be imitated sufficiently well by a series of sawtooth jiggles like those of Fig. 13–4, and that therefore, etc., Q.E.D., but without a little analysis, it is not obvious at first that the work done going around even a small triangle is zero. Let us magnify one of the triangles, as shown in Fig. 13–4. Is the work done in going from $a$ to $b$ and $b$ to $c$ on a triangle the same as the work done in going directly from $a$ to $c$?
Suppose that the force is acting in a certain direction; let us take the triangle such that the side $bc$ is in this direction, just as an example. We also suppose that the triangle is so small that the force is essentially constant over the entire triangle. What is the work done in going from $a$ to $c$? It is \begin{equation*} W_{ac}=\int_a^c\FLPF\cdot d\FLPs=Fs\cos\theta, \end{equation*} since the force is constant. Now let us calculate the work done in going around the other two sides of the triangle. On the vertical side $ab$ the force is perpendicular to $d\FLPs$, so that here the work is zero. On the horizontal side $bc$, \begin{equation*} W_{bc}=\int_b^c\FLPF\cdot d\FLPs=Fx. \end{equation*} Thus we see that the work done in going along the sides of a small triangle is the same as that done going on a slant, because $s\cos\theta$ is equal to $x$. We have proved previously that the answer is zero for any path composed of a series of notches like those of Fig. 13–3, and also that we do the same work if we cut across the corners instead of going along the notches (so long as the notches are fine enough, and we can always make them very fine); therefore, the work done in going around any path in a gravitational field is zero. This is a very remarkable result. It tells us something we did not previously know about planetary motion. It tells us that when a planet moves around the sun (without any other objects around, no other forces) it moves in such a manner that the square of the speed at any point minus some constant divided by the radius at that point is always the same at every point on the orbit. For example, the closer the planet is to the sun, the faster it is going, but by how much?
By the following amount: if instead of letting the planet go around the sun, we were to change the direction (but not the magnitude) of its velocity and make it move radially, and then we let it fall from some special radius to the radius of interest, the new speed would be the same as the speed it had in the actual orbit, because this is just another example of a complicated path. So long as we come back to the same distance, the kinetic energy will be the same. So, whether the motion is the real, undisturbed one, or is changed in direction by channels, by frictionless constraints, the kinetic energy with which the planet arrives at a point will be the same. Thus, when we make a numerical analysis of the motion of the planet in its orbit, as we did earlier, we can check whether or not we are making appreciable errors by calculating this constant quantity, the energy, at every step, and it should not change. For the orbit of Table 9–2 the energy does change; it changes by some $1.5$ percent from the beginning to the end. Why? Either because for the numerical method we use finite intervals, or else because we made a slight mistake somewhere in arithmetic. Let us consider the energy in another case: the problem of a mass on a spring. When we displace the mass from its balanced position, the restoring force is proportional to the displacement. In those circumstances, can we work out a law for conservation of energy? Yes, because the work done by such a force is \begin{equation} \label{Eq:I:13:13} W=\int_0^xF\,dx=\int_0^x-kx\,dx=-\tfrac{1}{2}kx^2. \end{equation} Therefore, for a mass on a spring we have that the kinetic energy of the oscillating mass plus $\tfrac{1}{2}kx^2$ is a constant. Let us see how this works. We pull the mass down; it is standing still and so its speed is zero. But $x$ is not zero, $x$ is at its maximum, so there is some energy, the potential energy, of course.
Now we release the mass and things begin to happen (the details not to be discussed), but at any instant the kinetic plus potential energy must be a constant. For example, after the mass is on its way past the original equilibrium point, the position $x$ equals zero, but that is when it has its biggest $v^2$, and as it gets more $x^2$ it gets less $v^2$, and so on. So the balance of $x^2$ and $v^2$ is maintained as the mass goes up and down. Thus we have another rule now, that the potential energy for a spring is $\tfrac{1}{2}kx^2$, if the force is $-kx$.
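The constancy of $T+U$ for the spring can be checked directly, assuming the familiar sinusoidal solution $x=A\cos\omega t$ with $\omega=\sqrt{k/m}$; the values of $m$, $k$, and the amplitude below are arbitrary:

```python
import math

# Energy bookkeeping for the mass on a spring: with F = -kx the motion is
# x(t) = A cos(wt), w = sqrt(k/m), and T + U = (1/2)mv^2 + (1/2)kx^2 should
# come out the same at every instant. The values of m, k, A are arbitrary.

m, k, A = 0.5, 8.0, 0.3
w = math.sqrt(k / m)

def energy(t):
    """Kinetic plus potential energy of the oscillator at time t."""
    x = A * math.cos(w * t)
    v = -A * w * math.sin(w * t)
    return 0.5 * m * v**2 + 0.5 * k * x**2

for t in (0.0, 0.4, 1.1, 2.7):
    print(t, energy(t))   # always (1/2) k A^2, the energy put in at the start
```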
13–3 Summation of energy

Now we go on to the more general consideration of what happens when there are large numbers of objects. Suppose we have the complicated problem of many objects, which we label $i = 1$, $2$, $3$, …, all exerting gravitational pulls on each other. What happens then? We shall prove that if we add the kinetic energies of all the particles, and add to this the sum, over all pairs of particles, of their mutual gravitational potential energy, $-Gm_im_j/r_{ij}$, the total is a constant: \begin{equation} \label{Eq:I:13:14} \sum_i\tfrac{1}{2}m_iv_i^2\;+\!\!\sum_{(\text{pairs $ij$})} \!\!-\frac{Gm_im_j}{r_{ij}}=\text{const}. \end{equation} How do we prove it? We differentiate each side with respect to time and get zero. When we differentiate $\tfrac{1}{2}m_iv_i^2$, we find derivatives of the velocity that are the forces, just as in Eq. (13.5). We replace these forces by the law of force that we know from Newton’s law of gravity and then we notice that what is left is minus the time derivative of \begin{equation*} \sum_{\text{pairs}} -\frac{Gm_im_j}{r_{ij}}. \end{equation*} The time derivative of the kinetic energy is \begin{equation} \begin{aligned} \ddt{}{t}\sum_i\tfrac{1}{2}m_iv_i^2 &=\sum_im_i\,\ddt{\FLPv_i}{t}\cdot\FLPv_i\\[.5ex] &=\sum_i\FLPF_i\cdot\FLPv_i\\[-.5ex] &=\sum_i\Biggl( \sum_j-\frac{Gm_im_j\FLPr_{ij}}{r_{ij}^3} \Biggr)\cdot\FLPv_i. \end{aligned} \label{Eq:I:13:15} \end{equation} The time derivative of the potential energy is \begin{equation*} \ddt{}{t}\sum_{\text{pairs}} -\frac{Gm_im_j}{r_{ij}}= \sum_{\text{pairs}} \Biggl(+\frac{Gm_im_j}{r_{ij}^2}\Biggr) \biggl(\ddt{r_{ij}}{t}\biggr).
\end{equation*} But \begin{equation*} r_{ij}=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2}, \end{equation*} so that \begin{align*} \ddt{r_{ij}}{t}&= \begin{alignedat}[t]{5} \frac{1}{2r_{ij}}\biggl[ &2(x_i&&-x_j&&)\biggl(\ddt{x_i}{t}&&-\ddt{x_j}{t}&&\biggr)\\ {}+{}&2(y_i&&-y_j&&)\biggl(\ddt{y_i}{t}&&-\ddt{y_j}{t}&&\biggr)\\ {}+{}&2(z_i&&-z_j&&)\biggl(\ddt{z_i}{t}&&-\ddt{z_j}{t}&&\biggr) \biggr] \end{alignedat}\\[1ex] &=\FLPr_{ij}\cdot\frac{\FLPv_i-\FLPv_j}{r_{ij}}\\[.5ex] &=\FLPr_{ij}\cdot\frac{\FLPv_i}{r_{ij}}+ \FLPr_{ji}\cdot\frac{\FLPv_j}{r_{ji}}, \end{align*} since $\FLPr_{ij} = -\FLPr_{ji}$, while $r_{ij} = r_{ji}$. Thus \begin{equation} \label{Eq:I:13:16} \ddt{}{t}\sum_{\text{pairs}}-\frac{Gm_im_j}{r_{ij}}= \sum_{\text{pairs}}\biggl[ \frac{Gm_im_j\FLPr_{ij}}{r_{ij}^3}\cdot\FLPv_i+ \frac{Gm_jm_i\FLPr_{ji}}{r_{ji}^3}\cdot\FLPv_j\biggr]. \end{equation}
Now we must note carefully what $\sum\limits_i\{\sum\limits_j\}$ and $\sum\limits_{\text{pairs}}$ mean. In Eq. (13.15), $\sum\limits_i\{\sum\limits_j\}$ means that $i$ takes on all values $i=1$, $2$, $3$, … in turn, and for each value of $i$, the index $j$ takes on all values except $i$. Thus if $i = 3$, $j$ takes on the values $1$, $2$, $4$, … In Eq. (13.16), on the other hand, $\sum\limits_{\text{pairs}}$ means that given values of $i$ and $j$ occur only once. Thus the particle pair $1$ and $3$ contributes only one term to the sum. To keep track of this, we might agree to let $i$ range over all values $1$, $2$, $3$, …, and for each $i$ let $j$ range only over values greater than $i$. Thus if $i = 3$, $j$ could only have values $4$, $5$, $6$, … But we notice that for each $i,j$ value there are two contributions to the sum, one involving $\FLPv_i$, and the other $\FLPv_j$, and that these terms have the same appearance as those of Eq. (13.15), where all values of $i$ and $j$ (except $i = j$) are included in the sum. Therefore, by matching the terms one by one, we see that Eqs. (13.16) and (13.15) are precisely the same, but of opposite sign, so that the time derivative of the kinetic plus potential energy is indeed zero. Thus we see that, for many objects, the kinetic energy is the sum of the contributions from each individual object, and that the potential energy is also simple, it being also just a sum of contributions, the energies between all the pairs. We can understand why it should be the energy of every pair this way: Suppose that we want to find the total amount of work that must be done to bring the objects to certain distances from each other.
We may do this in several steps, bringing them in from infinity where there is no force, one by one. First we bring in number one, which requires no work, since no other objects are yet present to exert force on it. Next we bring in number two, which does take some work, namely $W_{12}=-Gm_1m_2/r_{12}$. Now, and this is an important point, suppose we bring in the next object to position three. At any moment the force on number $3$ can be written as the sum of two forces—the force exerted by number $1$ and that exerted by number $2$. Therefore the work done is the sum of the works done by each, because if $\FLPF_3$ can be resolved into the sum of two forces, \begin{equation*} \FLPF_3=\FLPF_{13}+\FLPF_{23}, \end{equation*} then the work is \begin{equation*} \int\FLPF_3\cdot d\FLPs= \int\FLPF_{13}\cdot d\FLPs+\int\FLPF_{23}\cdot d\FLPs= W_{13}+W_{23}. \end{equation*} That is, the work done is the sum of the work done against the first force and the second force, as if each acted independently. Proceeding in this way, we see that the total work required to assemble the given configuration of objects is precisely the value given in Eq. (13.14) as the potential energy. It is because gravity obeys the principle of superposition of forces that we can write the potential energy as a sum over each pair of particles.
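The last step of this argument, that the work of carrying in object $3$ against the superposed forces equals the sum of the pair terms, can be checked numerically: fix $m_1$ and $m_2$, drag $m_3$ in from far away along a straight line, and integrate $\FLPF_3\cdot d\FLPs$. All the numbers below (with $G=1$) are arbitrary test values:

```python
import math

# The superposition step in the assembly argument: with m1 and m2 already in
# place, carry m3 in from very far away along a straight line, adding up
# F3 . ds numerically. The work done by gravity comes out equal to minus the
# two pair terms -G*m3*mj/r_3j of Eq. (13.14), up to the finite "infinity"
# and the finite step size. G, the masses, and positions are test values.

G = 1.0
m3 = 5.0
placed = [(2.0, (0.0, 0.0, 0.0)),     # (mass, position) of m1
          (3.0, (1.0, 0.0, 0.0))]    # (mass, position) of m2
final = (0.0, 2.0, 0.0)              # where m3 ends up
start = (0.0, 10_000.0, 0.0)         # stands in for "infinity"

def force_on_m3(p):
    """Superposition: total gravitational force on m3 at point p."""
    F = [0.0, 0.0, 0.0]
    for mj, q in placed:
        d = [q[i] - p[i] for i in range(3)]
        r = math.sqrt(sum(c * c for c in d))
        for i in range(3):
            F[i] += G * m3 * mj * d[i] / r**3   # attraction toward q
    return F

N = 200_000
step = [(final[i] - start[i]) / N for i in range(3)]
W = 0.0                               # work done by gravity along the path
for k in range(N):
    p = [start[i] + (k + 0.5) * step[i] for i in range(3)]
    F = force_on_m3(p)
    W += sum(F[i] * step[i] for i in range(3))

U_pairs = sum(-G * m3 * mj / math.dist(final, q) for mj, q in placed)
print(W, U_pairs)   # W is close to -U_pairs: the pair terms of Eq. (13.14)
```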
|
1 | 13 | Work and Potential Energy (A) | 4 | Gravitational field of large objects | Now we shall calculate the fields which are met in a few physical circumstances involving distributions of mass. We have not so far considered distributions of mass, only particles, so it is interesting to calculate the forces when they are produced by more than just one particle. First we shall find the gravitational force on a mass that is produced by a plane sheet of material, infinite in extent. The force on a unit mass at a given point $P$, produced by this sheet of material (Fig. 13–5), will of course be directed toward the sheet. Let the distance of the point from the sheet be $a$, and let the amount of mass per unit area of this huge sheet be $\mu$. We shall suppose $\mu$ to be constant; it is a uniform sheet of material. Now, what small field $d\FLPC$ is produced by the mass $dm$ lying between $\rho$ and $\rho + d\rho$ from the point $O$ of the sheet nearest point $P$? Answer: $d\FLPC = -G(dm\,\FLPr/r^3)$. But this field is directed along $\FLPr$, and we know that only the $x$-component of it will remain when we add all the little vector $d\FLPC$’s to produce $\FLPC$. The $x$-component of $d\FLPC$ is \begin{equation*} dC_x=-G\,\frac{dm\,r_x}{r^3}=-G\,\frac{dm\,a}{r^3}. \end{equation*} Now all masses $dm$ which are at the same distance $r$ from $P$ will yield the same $dC_x$, so we may at once write for $dm$ the total mass in the ring between $\rho$ and $\rho+d\rho$, namely $dm = \mu2\pi\rho\,d\rho$ ($2\pi\rho\,d\rho$ is the area of a ring of radius $\rho$ and width $d\rho$, if $d\rho\ll\rho$). Thus \begin{equation*} dC_x=-G\mu2\pi\rho\,\frac{d\rho\,a}{r^3}. \end{equation*} Then, since $r^2 = \rho^2 + a^2$, $\rho\,d\rho = r\,dr$. Therefore, \begin{equation} \label{Eq:I:13:17} C_x=-2\pi G\mu a\int_a^\infty\frac{dr}{r^2}= -2\pi G\mu a\Bigl(\frac{1}{a}-\frac{1}{\infty}\Bigr)=-2\pi G\mu. \end{equation}
Thus the force is independent of distance $a$! Why? Have we made a mistake? One might think that the farther away we go, the weaker the force would be. But no! If we are close, most of the matter is pulling at an unfavorable angle; if we are far away, more of the matter is situated more favorably to exert a pull toward the plane. At any distance, the matter which is most effective lies in a certain cone. When we are farther away the force is smaller by the inverse square, but in the same cone, in the same angle, there is much more matter, larger by just the square of the distance! This analysis can be made rigorous by just noticing that the differential contribution in any given cone is in fact independent of the distance, because of the reciprocal variation of the strength of the force from a given mass, and the amount of mass included in the cone, with changing distance. The force is not really constant of course, because when we go on the other side of the sheet it is reversed in sign. We have also, in effect, solved an electrical problem: if we have an electrically charged plate, with an amount $\sigma$ of charge per unit area, then the electric field at a point outside the sheet is equal to $\sigma/2\epsO$, and is in the outward direction if the sheet is positively charged, and inward if the sheet is negatively charged. To prove this, we merely note that $-G$, for gravity, plays the same role as $1/4\pi\epsO$ for electricity. Now suppose that we have two plates, with a positive charge $+\sigma$ on one and a negative charge $-\sigma$ on another at a distance $D$ from the first. What is the field? Outside the two plates it is zero. Why? Because one attracts and the other repels, the force being independent of distance, so that the two balance out! 
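Returning to the single sheet for a moment, the independence of $a$ can also be checked by brute force: sum the ring fields numerically out to a large but finite radius and compare with $2\pi G\mu$. A sketch with an illustrative surface density and log-spaced rings (both choices are arbitrary):

```python
import math

G = 6.674e-11
mu = 10.0             # kg per m^2, an illustrative surface density

def sheet_field(a, rho_max=1e7, n=3000):
    """Sum the ring fields dC_x = G*mu*2*pi*rho*drho*a/r**3 on a log grid."""
    rho = 1e-4 * a
    ratio = (rho_max / rho) ** (1.0 / n)
    total = 0.0
    for _ in range(n):
        rho_next = rho * ratio
        rho_mid = math.sqrt(rho * rho_next)   # midpoint of the ring, log scale
        r = math.hypot(rho_mid, a)
        total += G * mu * 2 * math.pi * rho_mid * (rho_next - rho) * a / r**3
        rho = rho_next
    return total

exact = 2 * math.pi * G * mu
print(sheet_field(1.0) / exact)      # close to 1
print(sheet_field(100.0) / exact)    # close to 1: same pull, near or far
```

The finite cutoff `rho_max` is why the ratios fall just short of $1$; pushing it outward makes both approach the exact $2\pi G\mu$, whatever the value of $a$.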
Also, the field between the two plates is clearly twice as great as that from one plate, namely $E=\sigma/\epsO$, and is directed from the positive plate to the negative one. Now we come to a most interesting and important problem, whose solution we have been assuming all the time, namely, that the force produced by the earth at a point on the surface or outside it is the same as if all the mass of the earth were located at its center. The validity of this assumption is not obvious, because when we are close, some of the mass is very close to us, and some is farther away, and so on. When we add the effects all together, it seems a miracle that the net force is exactly the same as we would get if we put all the mass in the middle! We now demonstrate the correctness of this miracle. In order to do so, however, we shall consider a thin uniform hollow shell instead of the whole earth. Let the total mass of the shell be $m$, and let us calculate the potential energy of a particle of mass $m'$ a distance $R$ away from the center of the sphere (Fig. 13–6) and show that the potential energy is the same as it would be if the mass $m$ were a point at the center. (The potential energy is easier to work with than is the field because we do not have to worry about angles, we merely add the potential energies of all the pieces of mass.) If we call $x$ the distance of a certain plane section from the center, then all the mass that is in a slice $dx$ is at the same distance $r$ from $P$, and the potential energy due to this ring is $-Gm'\,dm/r$. How much mass is in the small slice $dx$? An amount \begin{equation*} dm=2\pi y\mu\,ds=\frac{2\pi y\mu\,dx}{\sin\theta}= \frac{2\pi y\mu\,dx\,a}{y}=2\pi a\mu\,dx, \end{equation*} where $\mu=m/4\pi a^2$ is the surface density of mass on the spherical shell. (It is a general rule that the area of a zone of a sphere is proportional to its axial width.) 
Therefore the potential energy due to $dm$ is \begin{equation*} dW=-\frac{Gm'\,dm}{r}=-\frac{Gm'2\pi a\mu\,dx}{r}. \end{equation*} But we see that \begin{equation*} \begin{aligned} r^2=y^2+(R-x)^2&=y^2+x^2+R^2-2Rx\\[.5ex] &=a^2+R^2-2Rx. \end{aligned} \end{equation*} Thus \begin{equation*} 2r\,dr=-2R\,dx \end{equation*} or \begin{equation*} \frac{dx}{r}=-\frac{dr}{R}. \end{equation*} Therefore, \begin{equation*} dW=\frac{Gm'2\pi a\mu\,dr}{R}, \end{equation*} and so \begin{align} W&=\frac{Gm'2\pi a\mu}{R}\int_{R+a}^{R-a}dr\notag\\[1ex] &=-\frac{Gm'2\pi a\mu}{R}\,2a= -\frac{Gm'(4\pi a^2\mu)}{R}\notag\\[2ex] \label{Eq:I:13:18} &=-\frac{Gm'm}{R}. \end{align} Thus, for a thin spherical shell, the potential energy of a mass $m'$, external to the shell, is the same as though the mass of the shell were concentrated at its center. The earth can be imagined as a series of spherical shells, each one of which contributes an energy which depends only on its mass and the distance from its center to the particle; adding them all together we get the total mass, and therefore the earth acts as though all the material were at the center! But notice what happens if our point is on the inside of the shell. Making the same calculation, but with $P$ on the inside, we still get the difference of the two $r$’s, but now in the form $a - R - (a + R) = -2R$, or minus twice the distance from the center. In other words, $W$ comes out to be $W=-Gm'm/a$, which is independent of $R$ and independent of position, i.e., the same energy no matter where we are inside. Therefore no force; no work is done when we move about inside. If the potential energy is the same no matter where an object is placed inside the sphere, there can be no force on it. So there is no force inside, there is only a force outside, and the force outside is the same as though the mass were all at the center. |
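The slice integration above can be carried out numerically, which is a pleasant sanity check on the "miracle": the same code, handed a point outside or inside the shell, reproduces $-Gm'm/R$ and the constant $-Gm'm/a$ respectively. The shell radius and the masses below are illustrative:

```python
import math

G = 6.674e-11
a = 1.0                        # shell radius, m (illustrative)
m_shell, m_prime = 10.0, 2.0   # kg (illustrative)
mu = m_shell / (4 * math.pi * a**2)   # surface density of the shell

def shell_potential(R, n=20000):
    """Sum dW = -G*m'*dm/r over slices; dm = 2*pi*a*mu*dx by the zone-area rule."""
    W = 0.0
    dx = 2 * a / n
    for i in range(n):
        x = -a + (i + 0.5) * dx                # position of the slice
        r = math.sqrt(a*a + R*R - 2*R*x)       # distance of the slice from P
        W -= G * m_prime * (2 * math.pi * a * mu * dx) / r
    return W

print(shell_potential(5.0) / (-G * m_prime * m_shell / 5.0))  # about 1 (outside)
print(shell_potential(0.3) / (-G * m_prime * m_shell / a))    # about 1 (inside)
```

The second line is the statement that inside the shell the potential energy is the same everywhere, so there is no force there.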
|
1 | 14 | Work and Potential Energy (conclusion) | 1 | Work | In the preceding chapter we have presented a great many new ideas and results that play a central role in physics. These ideas are so important that it seems worthwhile to devote a whole chapter to a closer examination of them. In the present chapter we shall not repeat the “proofs” or the specific tricks by which the results were obtained, but shall concentrate instead upon a discussion of the ideas themselves. In learning any subject of a technical nature where mathematics plays a role, one is confronted with the task of understanding and storing away in the memory a huge body of facts and ideas, held together by certain relationships which can be “proved” or “shown” to exist between them. It is easy to confuse the proof itself with the relationship which it establishes. Clearly, the important thing to learn and to remember is the relationship, not the proof. In any particular circumstance we can either say “it can be shown that” such and such is true, or we can show it. In almost all cases, the particular proof that is used is concocted, first of all, in such form that it can be written quickly and easily on the chalkboard or on paper, and so that it will be as smooth-looking as possible. Consequently, the proof may look deceptively simple, when in fact, the author might have worked for hours trying different ways of calculating the same thing until he has found the neatest way, so as to be able to show that it can be shown in the shortest amount of time! The thing to be remembered, when seeing a proof, is not the proof itself, but rather that it can be shown that such and such is true. Of course, if the proof involves some mathematical procedures or “tricks” that one has not seen before, attention should be given not to the trick exactly, but to the mathematical idea involved. 
It is certain that in all the demonstrations that are made in a course such as this, not one has been remembered from the time when the author studied freshman physics. Quite the contrary: he merely remembers that such and such is true, and to explain how it can be shown he invents a demonstration at the moment it is needed. Anyone who has really learned a subject should be able to follow a similar procedure, but it is no use remembering the proofs. That is why, in this chapter, we shall avoid the proofs of the various statements made previously, and merely summarize the results. The first idea that has to be digested is work done by a force. The physical word “work” is not the word in the ordinary sense of “Workers of the world unite!,” but is a different idea. Physical work is expressed as $\int\FLPF\cdot d\FLPs$, called “the line integral of $F$ dot $ds$,” which means that if the force, for instance, is in one direction and the object on which the force is working is displaced in a certain direction, then only the component of force in the direction of the displacement does any work. If, for instance, the force were constant and the displacement were a finite distance $\Delta\FLPs$, then the work done in moving the object through that distance is only the component of force along $\Delta\FLPs$ times $\Delta s$. The rule is “force times distance,” but we really mean only the component of force in the direction of the displacement times $\Delta s$ or, equivalently, the component of displacement in the direction of force times $F$. It is evident that no work whatsoever is done by a force which is at right angles to the displacement. 
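The formula $\int\FLPF\cdot d\FLPs$ can be implemented exactly as described: chop the path into many small $\Delta\FLPs$'s, take the component of force along each, and add. In the sketch below a constant gravity force acts on an object carried up an invented helical path; the horizontal circling contributes nothing, and only the vertical rise counts (mass, path, and step count are all arbitrary choices):

```python
import math

def work(force, path, n=10000):
    """Line integral: sum F(midpoint) . delta_s over small chords of the path."""
    W = 0.0
    for i in range(n):
        p0, p1 = path(i / n), path((i + 1) / n)
        F = force([(c0 + c1) / 2 for c0, c1 in zip(p0, p1)])
        # F_x*dx + F_y*dy + F_z*dz for this little step
        W += sum(Fk * (c1 - c0) for Fk, c0, c1 in zip(F, p0, p1))
    return W

m, g = 2.0, 9.8                            # kg, m/s^2 (illustrative)
gravity = lambda p: (0.0, 0.0, -m * g)     # constant downward force

# a helical climb from z = 0 to z = 5 m
helix = lambda t: (math.cos(6 * t), math.sin(6 * t), 5.0 * t)

W = work(gravity, helix)
print(W)    # close to -m*g*5 = -98 J: only F_z dz ever contributes
```

The force at right angles to the displacement (here, the horizontal components of each little chord) drops out of every term of the sum, which is the statement just made in the text.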
Now if the vector displacement $\Delta\FLPs$ is resolved into components, in other words, if the actual displacement is $\Delta\FLPs$ and we want to consider it effectively as a component of displacement $\Delta x$ in the $x$-direction, $\Delta y$ in the $y$-direction, and $\Delta z$ in the $z$-direction, then the work done in carrying an object from one place to another can be calculated in three parts, by calculating the work done along $x$, along $y$, and along $z$. The work done in going along $x$ involves only that component of force, namely $F_x$, and so on, so the work is $F_x\,\Delta x + F_y\,\Delta y + F_z\,\Delta z$. When the force is not constant, and we have a complicated curved motion, then we must resolve the path into a lot of little $\Delta\FLPs$’s, add the work done in carrying the object along each $\Delta\FLPs$, and take the limit as $\Delta\FLPs$ goes to zero. This is the meaning of the “line integral.” Everything we have just said is contained in the formula $W=\int\FLPF\cdot d\FLPs$. It is all very well to say that it is a marvelous formula, but it is another thing to understand what it means, or what some of the consequences are. The word “work” in physics has a meaning so different from that of the word as it is used in ordinary circumstances that it must be observed carefully that there are some peculiar circumstances in which it appears not to be the same. For example, according to the physical definition of work, if one holds a hundred-pound weight off the ground for a while, he is doing no work. Nevertheless, everyone knows that he begins to sweat, shake, and breathe harder, as if he were running up a flight of stairs. Yet running upstairs is considered as doing work (in running downstairs, one gets work out of the world, according to physics), but in simply holding an object in a fixed position, no work is done. Clearly, the physical definition of work differs from the physiological definition, for reasons we shall briefly explore. 
It is a fact that when one holds a weight he has to do “physiological” work. Why should he sweat? Why should he need to consume food to hold the weight up? Why is the machinery inside him operating at full throttle, just to hold the weight up? Actually, the weight could be held up with no effort by just placing it on a table; then the table, quietly and calmly, without any supply of energy, is able to maintain the same weight at the same height! The physiological situation is something like the following. There are two kinds of muscles in the human body and in other animals: one kind, called striated or skeletal muscle, is the type of muscle we have in our arms, for example, which is under voluntary control; the other kind, called smooth muscle, is like the muscle in the intestines or, in the clam, the greater adductor muscle that closes the shell. The smooth muscles work very slowly, but they can hold a “set”; that is to say, if the clam tries to close its shell in a certain position, it will hold that position, even if there is a very great force trying to change it. It will hold a position under load for hours and hours without getting tired because it is very much like a table holding up a weight, it “sets” into a certain position, and the molecules just lock there temporarily with no work being done, no effort being generated by the clam. The fact that we have to generate effort to hold up a weight is simply due to the design of striated muscle. What happens is that when a nerve impulse reaches a muscle fiber, the fiber gives a little twitch and then relaxes, so that when we hold something up, enormous volleys of nerve impulses are coming in to the muscle, large numbers of twitches are maintaining the weight, while the other fibers relax. We can see this, of course: when we hold a heavy weight and get tired, we begin to shake. The reason is that the volleys are coming irregularly, and the muscle is tired and not reacting fast enough. 
Why such an inefficient scheme? We do not know exactly why, but evolution has not been able to develop fast smooth muscle. Smooth muscle would be much more effective for holding up weights because you could just stand there and it would lock in; there would be no work involved and no energy would be required. However, it has the disadvantage that it is very slow-operating. Returning now to physics, we may ask why we want to calculate the work done. The answer is that it is interesting and useful to do so, since the work done on a particle by the resultant of all the forces acting on it is exactly equal to the change in kinetic energy of that particle. That is, if an object is being pushed, it picks up speed, and \begin{equation*} \Delta(v^2)=\frac{2}{m}\,\FLPF\cdot\Delta\FLPs. \end{equation*} |
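The relation $\Delta(v^2)=(2/m)\,\FLPF\cdot\Delta\FLPs$ can be watched happening in a tiny simulation: push a particle from rest with a constant force and compare $v^2$ with $2(F/m)$ times the distance covered. The mass, force, and distance below are arbitrary:

```python
m, F, dt = 3.0, 6.0, 1e-5      # kg, N, s (illustrative values)
x, v = 0.0, 0.0
while x < 2.0:                 # push the object through about 2 m
    v += (F / m) * dt          # dv = (F/m) dt
    x += v * dt
print(v * v)                   # close to 8
print(2 * (F / m) * x)         # close to 8: Delta(v^2) = (2/m) F . Delta s
```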
|
1 | 14 | Work and Potential Energy (conclusion) | 2 | Constrained motion | Another interesting feature of forces and work is this: suppose that we have a sloping or a curved track, and a particle that must move along the track, but without friction. Or we may have a pendulum with a string and a weight; the string constrains the weight to move in a circle about the pivot point. The pivot point may be changed by having the string hit a peg, so that the path of the weight is along two circles of different radii. These are examples of what we call fixed, frictionless constraints. In motion with a fixed frictionless constraint, no work is done by the constraint because the forces of constraint are always at right angles to the motion. By the “forces of constraint” we mean those forces which are applied to the object directly by the constraint itself—the contact force with the track, or the tension in the string. The forces involved in the motion of a particle on a slope moving under the influence of gravity are quite complicated, since there is a constraint force, a gravitational force, and so on. However, if we base our calculation of the motion on conservation of energy and the gravitational force alone, we get the right result. This seems rather strange, because it is not strictly the right way to do it—we should use the resultant force. Nevertheless, the work done by the gravitational force alone will turn out to be the change in the kinetic energy, because the work done by the constraint part of the force is zero (Fig. 14–1). The important feature here is that if a force can be analyzed as the sum of two or more “pieces” then the work done by the resultant force in going along a certain curve is the sum of the works done by the various “component” forces into which the force is analyzed. 
Thus if we analyze the force as being the vector sum of several effects, gravitational plus constraint forces, etc., or the $x$-component of all forces and the $y$-component of all forces, or any other way that we wish to split it up, then the work done by the net force is equal to the sum of the works done by all the parts into which we have divided the force in making the analysis. |
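A pendulum makes this concrete: the string tension is always at right angles to the bob's velocity, so it does no work, and an energy account kept with gravity alone comes out balanced. A small simulation sketch (the length, amplitude, and step size are invented):

```python
import math

g, L = 9.8, 2.0                # m/s^2, string length in m (illustrative)
theta, omega = 1.0, 0.0        # released from rest at 1 radian
dt = 1e-5

def energy_per_mass(theta, omega):
    v = L * omega                      # speed of the bob
    z = -L * math.cos(theta)           # height relative to the pivot
    return 0.5 * v * v + g * z         # kinetic plus gravitational potential

E0 = energy_per_mass(theta, omega)
for _ in range(200000):                # about two seconds of swinging
    omega += -(g / L) * math.sin(theta) * dt   # only gravity's tangential pull
    theta += omega * dt
print(abs(energy_per_mass(theta, omega) - E0))   # stays tiny
```

The update uses only the gravitational torque; the tension never appears, yet the energy bookkeeping closes, which is the point of Fig. 14-1.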
|
1 | 14 | Work and Potential Energy (conclusion) | 3 | Conservative forces | In nature there are certain forces, that of gravity, for example, which have a very remarkable property which we call “conservative” (no political ideas involved, it is again one of those “crazy words”). If we calculate how much work is done by a force in moving an object from one point to another along some curved path, in general the work depends upon the curve, but in special cases it does not. If it does not depend upon the curve, we say that the force is a conservative force. In other words, if the integral of the force times the distance in going from position $1$ to position $2$ in Fig. 14–2 is calculated along curve $A$ and then along $B$, we get the same number of joules, and if this is true for this pair of points on every curve, and if the same proposition works no matter which pair of points we use, then we say the force is conservative. In such circumstances, the work integral going from $1$ to $2$ can be evaluated in a simple manner, and we can give a formula for the result. Ordinarily it is not this easy, because we also have to specify the curve, but when we have a case where the work does not depend on the curve, then, of course, the work depends only upon the positions of $1$ and $2$. To demonstrate this idea, consider the following. We take a “standard” point $P$, at an arbitrary location (Fig. 14–2). Then, the work line-integral from $1$ to $2$, which we want to calculate, can be evaluated as the work done in going from $1$ to $P$ plus the work done in going from $P$ to $2$, because the forces are conservative and the work does not depend upon the curve. Now, the work done in going from position $P$ to a particular position in space is a function of that position in space. Of course it really depends on $P$ also, but we hold the arbitrary point $P$ fixed permanently for the analysis. 
If that is done, then the work done in going from point $P$ to point $2$ is some function of the final position of $2$. It depends upon where $2$ is; if we go to some other point we get a different answer. We shall call this function of position $-U(x,y,z)$, and when we wish to refer to some particular point $2$ whose coordinates are $(x_2,y_2,z_2)$, we shall write $U(2)$, as an abbreviation for $U(x_2,y_2,z_2)$. The work done in going from point $1$ to point $P$ can be written also by going the other way along the integral, reversing all the $d\FLPs$’s. That is, the work done in going from $1$ to $P$ is minus the work done in going from the point $P$ to $1$: \begin{equation*} \int_1^P\FLPF\cdot d\FLPs=\int_P^1\FLPF\cdot(-d\FLPs)= -\int_P^1\FLPF\cdot d\FLPs. \end{equation*} Thus the work done in going from $P$ to $1$ is $-U(1)$, and from $P$ to $2$ the work is $-U(2)$. Therefore the integral from $1$ to $2$ is equal to $-U(2)$ plus [$-U(1)$ backwards], or $+U(1) - U(2)$: \begin{gather} U(1)=-\int_P^1\FLPF\cdot d\FLPs,\quad U(2)=-\int_P^2\FLPF\cdot d\FLPs,\notag\\[1.5ex] \label{Eq:I:14:1} \int_1^2\FLPF\cdot d\FLPs=U(1)-U(2). \end{gather} The quantity $U(2) - U(1)$ is called the change in the potential energy, and we call $U$ the potential energy. We shall say that when the object is located at position $2$, it has potential energy $U(2)$ and at position $1$ it has potential energy $U(1)$. If it is located at position $P$, it has zero potential energy. If we had used any other point, say $Q$, instead of $P$, it would turn out (and we shall leave it to you to demonstrate) that the potential energy is changed only by the addition of a constant. Since the conservation of energy depends only upon changes, it does not matter if we add a constant to the potential energy. Thus the point $P$ is arbitrary. 
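Path independence is the testable content of "conservative": the line integral of an inverse-square force between two fixed endpoints should come out the same along a straight run and along a deliberately roundabout spiral, and both should equal $U(1)-U(2)$ with $U=-GMm/r$. A numerical sketch, with Earth-like numbers chosen only for illustration:

```python
import math

G, M, m = 6.674e-11, 5.97e24, 1.0      # SI units, an Earth-like central mass

def force(p):
    """Inverse-square attraction toward the origin."""
    r = math.sqrt(sum(c * c for c in p))
    return [-G * M * m * c / r**3 for c in p]

def work(path, n=20000):
    """Discretized line integral of F . ds along path(t), 0 <= t <= 1."""
    W = 0.0
    for i in range(n):
        p0, p1 = path(i / n), path((i + 1) / n)
        F = force([(c0 + c1) / 2 for c0, c1 in zip(p0, p1)])
        W += sum(Fk * (c1 - c0) for Fk, c0, c1 in zip(F, p0, p1))
    return W

R1, R2 = 7.0e6, 9.0e6                      # start and end radii, m
straight = lambda t: [R1 + (R2 - R1) * t, 0.0, 0.0]
spiral = lambda t: [(R1 + (R2 - R1) * t) * math.cos(2 * math.pi * t),
                    (R1 + (R2 - R1) * t) * math.sin(2 * math.pi * t), 0.0]

U = lambda r: -G * M * m / r               # potential energy, zero at infinity
print(work(straight))
print(work(spiral))
print(U(R1) - U(R2))                       # all three numbers agree
```

The spiral winds once around before arriving at the same endpoint; the extra wandering cancels out term by term, so the work depends only on the positions of $1$ and $2$.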
Now, we have the following two propositions: (1) that the work done by a force is equal to the change in kinetic energy of the particle, but (2) mathematically, for a conservative force, the work done is minus the change in a function $U$ which we call the potential energy. As a consequence of these two, we arrive at the proposition that if only conservative forces act, the kinetic energy $T$ plus the potential energy $U$ remains constant: \begin{equation} \label{Eq:I:14:2} T+U=\text{constant}. \end{equation} Let us now discuss the formulas for the potential energy for a number of cases. If we have a gravitational field that is uniform, if we are not going to heights comparable with the radius of the earth, then the force is a constant vertical force and the work done is simply the force times the vertical distance. Thus \begin{equation} \label{Eq:I:14:3} U(z)=mgz, \end{equation} and the point $P$ which corresponds to zero potential energy happens to be any point in the plane $z=0$. We could also have said that the potential energy is $mg(z - 6)$ if we had wanted to—all the results would, of course, be the same in our analysis except that the value of the potential energy at $z = 0$ would be $-6mg$. It makes no difference, because only differences in potential energy count. The energy needed to compress a linear spring a distance $x$ from an equilibrium point is \begin{equation} \label{Eq:I:14:4} U(x)=\tfrac{1}{2}kx^2, \end{equation} and the zero of potential energy is at the point $x=0$, the equilibrium position of the spring. Again we could add any constant we wish. The potential energy of gravitation for point masses $M$ and $m$, a distance $r$ apart, is \begin{equation} \label{Eq:I:14:5} U(r) = -GMm/r. \end{equation} The constant has been chosen here so that the potential is zero at infinity. Of course the same formula applies to electrical charges, because it is the same law: \begin{equation} \label{Eq:I:14:6} U(r) = q_1q_2/4\pi\epsO r. 
\end{equation} Now let us actually use one of these formulas, to see whether we understand what it means. Question: How fast do we have to shoot a rocket away from the earth in order for it to leave? Solution: The kinetic plus potential energy must be a constant; when it “leaves,” it will be millions of miles away, and if it is just barely able to leave, we may suppose that it is moving with zero speed out there, just barely going. Let $a$ be the radius of the earth, and $M$ its mass. The kinetic plus potential energy is then initially given by $\tfrac{1}{2}mv^2 - GmM/a$. At the end of the motion the two energies must be equal. The kinetic energy is taken to be zero at the end of the motion, because it is supposed to be just barely drifting away at essentially zero speed, and the potential energy is $GmM$ divided by infinity, which is zero. So everything is zero on one side and that tells us that the square of the velocity must be $2GM/a$. But $GM/a^2$ is what we call the acceleration of gravity, $g$. Thus \begin{equation*} v^2=2ga. \end{equation*} At what speed must a satellite travel in order to keep going around the earth? We worked this out long ago and found that $v^2 = GM/a$. Therefore to go away from the earth, we need $\sqrt{2}$ times the velocity we need to just go around the earth near its surface. We need, in other words, twice as much energy (because energy goes as the square of the velocity) to leave the earth as we do to go around it. Therefore the first thing that was done historically with satellites was to get one to go around the earth, which requires a speed of five miles per second. The next thing was to send a satellite away from the earth permanently; this required twice the energy, or about seven miles per second. Now, continuing our discussion of the characteristics of potential energy, let us consider the interaction of two molecules, or two atoms, two oxygen atoms for instance. 
When they are very far apart, the force is one of attraction, which varies as the inverse seventh power of the distance, and when they are very close the force is a very large repulsion. If we integrate the inverse seventh power to find the work done, we find that the potential energy $U$, which is a function of the radial distance between the two oxygen atoms, varies as the inverse sixth power of the distance for large distances. If we sketch the curve of the potential energy $U(r)$ as in Fig. 14–3, we thus start out at large $r$ with an inverse sixth power, but if we come in sufficiently near we reach a point $d$ where there is a minimum of potential energy. The minimum of potential energy at $r = d$ means this: if we start at $d$ and move a small distance, a very small distance, the work done, which is the change in potential energy when we move this distance, is nearly zero, because there is very little change in potential energy at the bottom of the curve. Thus there is no force at this point, and so it is the equilibrium point. Another way to see that it is the equilibrium point is that it takes work to move away from $d$ in either direction. When the two oxygen atoms have settled down, so that no more energy can be liberated from the force between them, they are in the lowest energy state, and they will be at this separation $d$. This is the way an oxygen molecule looks when it is cold. When we heat it up, the atoms shake and move farther apart, and we can in fact break them apart, but to do so takes a certain amount of work or energy, which is the potential energy difference between $r = d$ and $r=\infty$. When we try to push the atoms very close together the energy goes up very rapidly, because they repel each other. The reason we bring this out is that the idea of force is not particularly suitable for quantum mechanics; there the idea of energy is most natural. 
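To make the curve of Fig. 14-3 concrete one needs a specific model. The standard Lennard-Jones 12-6 potential is one conventional choice with exactly the inverse-sixth-power attractive tail described here; the twelfth-power repulsion, and the well depth and length scale below, are assumptions made for illustration, not something given in the text. Scanning it numerically locates the equilibrium separation $d$, where the force $-dU/dr$ vanishes:

```python
# Lennard-Jones 12-6 model: U(r) = 4*eps*((s/r)**12 - (s/r)**6)
eps, s = 1.0, 1.0              # well depth and length scale (arbitrary units)

def U(r):
    return 4 * eps * ((s / r) ** 12 - (s / r) ** 6)

# scan for the minimum of the potential-energy curve
rs = [0.9 + i * 1e-5 for i in range(40000)]     # r from 0.9 to 1.3
d = min(rs, key=U)
print(d)                       # near 2**(1/6), about 1.1225, for this model

# the force is the negative slope of U; at r = d it is (nearly) zero
h = 1e-6
force_at_d = -(U(d + h) - U(d - h)) / (2 * h)
print(abs(force_at_d) < 1e-2)  # True: no force at the bottom of the well
```

Moving a small distance away from $d$ in either direction raises $U$, so it takes work to leave; that is the equilibrium described above.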
We find that although forces and velocities “dissolve” and disappear when we consider the more advanced forces between nuclear matter and between molecules and so on, the energy concept remains. Therefore we find curves of potential energy in quantum mechanics books, but very rarely do we ever see a curve for the force between two molecules, because by that time people who are doing analyses are thinking in terms of energy rather than of force. Next we note that if several conservative forces are acting on an object at the same time, then the potential energy of the object is the sum of the potential energies from each of the separate forces. This is the same proposition that we mentioned before, because if the force can be represented as a vector sum of forces, then the work done by the total force is the sum of the works done by the partial forces, and it can therefore be analyzed as changes in the potential energies of each of them separately. Thus the total potential energy is the sum of all the little pieces. We could generalize this to the case of a system of many objects interacting with one another, like Jupiter, Saturn, Uranus, etc., or oxygen, nitrogen, carbon, etc., which are acting with respect to one another in pairs due to forces all of which are conservative. In these circumstances the kinetic energy in the entire system is simply the sum of the kinetic energies of all of the particular atoms or planets or whatever, and the potential energy of the system is the sum, over the pairs of particles, of the potential energy of mutual interaction of a single pair, as though the others were not there. (This is really not true for molecular forces, and the formula is somewhat more complicated; it certainly is true for Newtonian gravitation, and it is true as an approximation for molecular forces. 
For molecular forces there is a potential energy, but it is sometimes a more complicated function of the positions of the atoms than simply a sum of terms from pairs.) In the special case of gravity, therefore, the potential energy is the sum, over all the pairs $i$ and $j$, of $-Gm_im_j/r_{ij}$, as was indicated in Eq. (13.14). Equation (13.14) expressed mathematically the following proposition: that the total kinetic energy plus the total potential energy does not change with time. As the various planets wheel about, and turn and twist and so on, if we calculate the total kinetic energy and the total potential energy we find that the total remains constant. |
|
1 | 14 | Work and Potential Energy (conclusion) | 4 | Nonconservative forces | We have spent a considerable time discussing conservative forces; what about nonconservative forces? We shall take a deeper view of this than is usual, and state that there are no nonconservative forces! As a matter of fact, all the fundamental forces in nature appear to be conservative. This is not a consequence of Newton’s laws. In fact, so far as Newton himself knew, the forces could be nonconservative, as friction apparently is. When we say friction apparently is, we are taking a modern view, in which it has been discovered that all the deep forces, the forces between the particles at the most fundamental level, are conservative. If, for example, we analyze a system like that great globular star cluster that we saw a picture of, with the thousands of stars all interacting, then the formula for the total potential energy is simply one term plus another term, etc., summed over all pairs of stars, and the kinetic energy is the sum of the kinetic energies of all the individual stars. But the globular cluster as a whole is drifting in space too, and, if we were far enough away from it and did not see the details, could be thought of as a single object. Then if forces were applied to it, some of those forces might end up driving it forward as a whole, and we would see the center of the whole thing moving. On the other hand, some of the forces can be, so to speak, “wasted” in increasing the kinetic or potential energy of the “particles” inside. Let us suppose, for instance, that the action of these forces expands the whole cluster and makes the particles move faster. 
The total energy of the whole thing is really conserved, but seen from the outside with our crude eyes which cannot see the confusion of motions inside, and just thinking of the kinetic energy of the motion of the whole object as though it were a single particle, it would appear that energy is not conserved, but this is due to a lack of appreciation of what it is that we see. And that, it turns out, is the case: the total energy of the world, kinetic plus potential, is a constant when we look closely enough. When we study matter in the finest detail at the atomic level, it is not always easy to separate the total energy of a thing into two parts, kinetic energy and potential energy, and such separation is not always necessary. It is almost always possible to do it, so let us say that it is always possible, and that the potential-plus-kinetic energy of the world is constant. Thus the total potential-plus-kinetic energy inside the whole world is constant, and if the “world” is a piece of isolated material, the energy is constant if there are no external forces. But as we have seen, some of the kinetic and potential energy of a thing may be internal, for instance the internal molecular motions, in the sense that we do not notice it. We know that in a glass of water everything is jiggling around, all the parts are moving all the time, so there is a certain kinetic energy inside, which we ordinarily may not pay any attention to. We do not notice the motion of the atoms, which produces heat, and so we do not call it kinetic energy, but heat is primarily kinetic energy. Internal potential energy may also be in the form, for instance, of chemical energy: when we burn gasoline energy is liberated because the potential energies of the atoms in the new atomic arrangement are lower than in the old arrangement. 
It is not strictly possible to treat heat as being pure kinetic energy, for a little of the potential gets in, and vice versa for chemical energy, so we put the two together and say that the total kinetic and potential energy inside an object is partly heat, partly chemical energy, and so on. Anyway, all these different forms of internal energy are sometimes considered as “lost” energy in the sense described above; this will be made clearer when we study thermodynamics. As another example, when friction is present it is not true that kinetic energy is lost, even though a sliding object stops and the kinetic energy seems to be lost. The kinetic energy is not lost because, of course, the atoms inside are jiggling with a greater amount of kinetic energy than before, and although we cannot see that, we can measure it by determining the temperature. Of course if we disregard the heat energy, then the conservation of energy theorem will appear to be false. Another situation in which energy conservation appears to be false is when we study only part of a system. Naturally, the conservation of energy theorem will appear not to be true if something is interacting with something else on the outside and we neglect to take that interaction into account. In classical physics potential energy involved only gravitation and electricity, but now we have nuclear energy and other energies also. Light, for example, would involve a new form of energy in the classical theory, but we can also, if we want to, imagine that the energy of light is the kinetic energy of a photon, and then our formula (14.2) would still be right. |
|
1 | 14 | Work and Potential Energy (conclusion) | 5 | Potentials and fields | We shall now discuss a few of the ideas associated with potential energy and with the idea of a field. Suppose we have two large objects $A$ and $B$ and a third very small one which is attracted gravitationally by the two, with some resultant force $\FLPF$. We have already noted in Chapter 12 that the gravitational force on a particle can be written as its mass, $m$, times another vector, $\FLPC$, which is dependent only upon the position of the particle: \begin{equation*} \FLPF=m\FLPC. \end{equation*} We can analyze gravitation, then, by imagining that there is a certain vector $\FLPC$ at every position in space which “acts” upon a mass which we may place there, but which is there itself whether we actually supply a mass for it to “act” on or not. $\FLPC$ has three components, and each of those components is a function of $(x,y,z)$, a function of position in space. Such a thing we call a field, and we say that the objects $A$ and $B$ generate the field, i.e., they “make” the vector $\FLPC$. When an object is put in a field, the force on it is equal to its mass times the value of the field vector at the point where the object is put. We can also do the same with the potential energy. Since the potential energy, the integral of $(-\textbf{force})\cdot(d\FLPs)$ can be written as $m$ times the integral of $(-\textbf{field})\cdot(d\FLPs)$, a mere change of scale, we see that the potential energy $U(x,y,z)$ of an object located at a point $(x,y,z)$ in space can be written as $m$ times another function which we may call the potential $\Psi$. The integral $\int\FLPC\cdot d\FLPs=-\Psi$, just as $\int\FLPF\cdot d\FLPs=-U$; there is only a scale factor between the two: \begin{equation} \label{Eq:I:14:7} U=-\int\FLPF\cdot d\FLPs=-m\int\FLPC\cdot d\FLPs=m\Psi. \end{equation}
By having this function $\Psi(x,y,z)$ at every point in space, we can immediately calculate the potential energy of an object at any point in space, namely, $U(x, y, z) = m\Psi(x, y, z)$—rather a trivial business, it seems. But it is not really trivial, because it is sometimes much nicer to describe the field by giving the value of $\Psi$ everywhere in space instead of having to give $\FLPC$. Instead of having to write three complicated components of a vector function, we can give instead the scalar function $\Psi$. Furthermore, it is much easier to calculate $\Psi$ than any given component of $\FLPC$ when the field is produced by a number of masses, for since the potential is a scalar we merely add, without worrying about direction. Also, the field $\FLPC$ can be recovered easily from $\Psi$, as we shall shortly see. Suppose we have point masses $m_1$, $m_2$, … at the points $1$, $2$, … and we wish to know the potential $\Psi$ at some arbitrary point $p$. This is simply the sum of the potentials at $p$ due to the individual masses taken one by one: \begin{equation} \label{Eq:I:14:8} \Psi(p)=\sum_i-\frac{Gm_i}{r_{ip}},\quad i=\text{$1$, $2$, $\ldots$} \end{equation} In the last chapter we used this formula, that the potential is the sum of the potentials from all the different objects, to calculate the potential due to a spherical shell of matter by adding the contributions to the potential at a point from all parts of the shell. The result of this calculation is shown graphically in Fig. 14–4. It is negative, having the value zero at $r = \infty$ and varying as $1/r$ down to the radius $a$, and then is constant inside the shell. Outside the shell the potential is $-Gm/r$, where $m$ is the mass of the shell, which is exactly the same as it would have been if all the mass were located at the center. 
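Because the potential is a scalar, the superposition in Eq. (14.8) is easy to sketch numerically. Here is a minimal Python illustration; the language and the helper name `potential` are choices of this sketch, not part of the text:

```python
import math

G = 6.674e-11  # gravitational constant, N*m^2/kg^2


def potential(p, masses):
    """Gravitational potential Psi at point p, as in Eq. (14.8).

    `masses` is a list of (m_i, position_i) pairs; the potential is
    the sum of -G*m_i/r_ip over the masses, taken one by one.
    """
    total = 0.0
    for m, pos in masses:
        r = math.dist(p, pos)  # distance r_ip from mass i to the point p
        total += -G * m / r
    return total


# Since the potential is a scalar, contributions from different masses
# simply add, with no worry about direction; the potential energy of a
# mass m placed at p is then U = m * potential(p, masses).
```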
But it is not everywhere exactly the same, for inside the shell the potential turns out to be $-Gm/a$, and is a constant! When the potential is constant, there is no field, or when the potential energy is constant there is no force, because if we move an object from one place to another anywhere inside the sphere the work done by the force is exactly zero. Why? Because the work done in moving the object from one place to the other is equal to minus the change in the potential energy (or, the corresponding field integral is the change of the potential). But the potential energy is the same at any two points inside, so there is zero change in potential energy, and therefore no work is done in going between any two points inside the shell. The only way the work can be zero for all directions of displacement is that there is no force at all. This gives us a clue as to how we can obtain the force or the field, given the potential energy. Let us suppose that the potential energy of an object is known at the position $(x,y,z)$ and we want to know what the force on the object is. It will not do to know the potential at only this one point, as we shall see; it requires knowledge of the potential at neighboring points as well. Why? How can we calculate the $x$-component of the force? (If we can do this, of course, we can also find the $y$- and $z$-components, and we will then know the whole force.) Now, if we were to move the object a small distance $\Delta x$, the work done by the force on the object would be the $x$-component of the force times $\Delta x$, if $\Delta x$ is sufficiently small, and this should equal the change in potential energy in going from one point to the other: \begin{equation} \label{Eq:I:14:9} \Delta W=-\Delta U=F_x\,\Delta x. \end{equation} We have merely used the formula $\int\FLPF\cdot d\FLPs=-\Delta U$, but for a very short path. Now we divide by $\Delta x$ and so find that the force is \begin{equation} \label{Eq:I:14:10} F_x=-\Delta U/\Delta x. 
\end{equation} Of course this is not exact. What we really want is the limit of (14.10) as $\Delta x$ gets smaller and smaller, because it is only exactly right in the limit of infinitesimal $\Delta x$. This we recognize as the derivative of $U$ with respect to $x$, and we would be inclined, therefore, to write $-dU/dx$. But $U$ depends on $x$, $y$, and $z$, and the mathematicians have invented a different symbol to remind us to be very careful when we are differentiating such a function, so as to remember that we are considering that only $x$ varies, and $y$ and $z$ do not vary. Instead of a $d$ they simply make a “backwards $6$,” or $\partial$. (A $\partial$ should have been used in the beginning of calculus because we always want to cancel that $d$, but we never want to cancel a $\partial$!) So they write $\ddpl{U}{x}$, and furthermore, in moments of duress, if they want to be very careful, they put a line beside it with a little $yz$ at the bottom ($\ddpl{U}{x}|_{yz}$), which means “Take the derivative of $U$ with respect to $x$, keeping $y$ and $z$ constant.” Most often we leave out the remark about what is kept constant because it is usually evident from the context, so we usually do not use the line with the $y$ and $z$. However, always use a $\partial$ instead of a $d$ as a warning that it is a derivative with some other variables kept constant. This is called a partial derivative; it is a derivative in which we vary only $x$. Therefore, we find that the force in the $x$-direction is minus the partial derivative of $U$ with respect to $x$: \begin{equation} \label{Eq:I:14:11} F_x=-\ddpl{U}{x}. \end{equation} In a similar way, the force in the $y$-direction can be found by differentiating $U$ with respect to $y$, keeping $x$ and $z$ constant, and the third component, of course, is the derivative with respect to $z$, keeping $y$ and $x$ constant: \begin{equation} \label{Eq:I:14:12} F_y=-\ddpl{U}{y},\quad F_z=-\ddpl{U}{z}. 
\end{equation} This is the way to get from the potential energy to the force. We get the field from the potential in exactly the same way: \begin{equation} \label{Eq:I:14:13} C_x=-\ddpl{\Psi}{x},\quad C_y=-\ddpl{\Psi}{y},\quad C_z=-\ddpl{\Psi}{z}. \end{equation}
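The recipe of Eqs. (14.11)–(14.13), taking the difference of the potential at neighboring points, can be tried directly with finite differences. A small Python sketch (the step $h$ and the function names are assumptions of the sketch, and the finite difference is only approximate, exact only in the limit):

```python
def force_from_potential(U, x, y, z, h=1e-6):
    """Approximate F = -grad U by centered differences, as in Eqs. (14.11)-(14.12).

    Each component varies one coordinate by a small step h while the
    other two are kept constant, just as the partial derivative demands.
    """
    Fx = -(U(x + h, y, z) - U(x - h, y, z)) / (2 * h)
    Fy = -(U(x, y + h, z) - U(x, y - h, z)) / (2 * h)
    Fz = -(U(x, y, z + h) - U(x, y, z - h)) / (2 * h)
    return (Fx, Fy, Fz)


# Check against a case where the answer is known: the spring potential
# U = (1/2) k (x^2 + y^2 + z^2), for which the exact force is -k(x, y, z).
k = 2.0
U_spring = lambda x, y, z: 0.5 * k * (x * x + y * y + z * z)
Fx, Fy, Fz = force_from_potential(U_spring, 1.0, 0.0, 0.0)
# Fx comes out close to -k * 1.0 = -2.0, and Fy, Fz close to zero
```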
Incidentally, we shall mention here another notation, which we shall not actually use for quite a while: Since $\FLPC$ is a vector and has $x$-, $y$-, and $z$-components, the symbols $\ddpl{}{x}$, $\ddpl{}{y}$, and $\ddpl{}{z}$ which produce the $x$-, $y$-, and $z$-components are something like vectors. The mathematicians have invented a glorious new symbol, $\FLPnabla$, called “grad” or “gradient”, which is not a quantity but an operator that makes a vector from a scalar. It has the following “components”: The $x$-component of this “grad” is $\ddpl{}{x}$, the $y$-component is $\ddpl{}{y}$, and the $z$-component is $\ddpl{}{z}$, and then we have the fun of writing our formulas this way: \begin{equation} \label{Eq:I:14:14} \FLPF=-\FLPgrad{U},\quad \FLPC=-\FLPgrad{\Psi}. \end{equation} Using $\FLPnabla$ gives us a quick way of testing whether we have a real vector equation or not, but actually Eqs. (14.14) mean precisely the same as Eqs. (14.11), (14.12) and (14.13); it is just another way of writing them, and since we do not want to write three equations every time, we just write $\FLPgrad{U}$ instead. One more example of fields and potentials has to do with the electrical case. In the case of electricity the force on a stationary object is the charge times the electric field: $\FLPF = q\FLPE$. (In general, of course, the force in an electrical problem has also a part which depends on the magnetic field. It is easy to show from Eq. (12.11) that the force on a particle due to magnetic fields is always at right angles to its velocity, and also at right angles to the field. Since the force due to magnetism on a moving charge is at right angles to the velocity, no work is done by the magnetism on the moving charge because the motion is at right angles to the force. 
Therefore, in calculating theorems of kinetic energy in electric and magnetic fields we can disregard the contribution from the magnetic field, since it does not change the kinetic energy.) We suppose that there is only an electric field. Then we can calculate the energy, or work done, in the same way as for gravity, and calculate a quantity $\phi$ which is minus the integral of $\FLPE\cdot d\FLPs$, from the arbitrary fixed point to the point where we make the calculation, and then the potential energy in an electric field is just charge times this quantity $\phi$: \begin{gather*} \phi(\FLPr)=-\int\FLPE\cdot d\FLPs,\\[1ex] U=q\phi. \end{gather*} Let us take, as an example, the case of two parallel metal plates, each with a surface charge of $\pm\sigma$ per unit area. This is called a parallel-plate capacitor. We found previously that there is zero force outside the plates and that there is a constant electric field between them, directed from $+$ to $-$ and of magnitude $\sigma/\epsO$ (Fig. 14–5). We would like to know how much work would be done in carrying a charge from one plate to the other. The work would be the $(\textbf{force})\cdot(d\FLPs)$ integral, which can be written as charge times the potential value at plate $1$ minus that at plate $2$: \begin{equation*} W=\int_1^2\FLPF\cdot d\FLPs=q(\phi_1-\phi_2). \end{equation*} We can actually work out the integral because the force is constant, and if we call the separation of the plates $d$, then the integral is easy: \begin{equation*} \int_1^2\FLPF\cdot d\FLPs=\frac{q\sigma}{\epsO}\int_1^2dx= \frac{q\sigma d}{\epsO}. \end{equation*} The difference in potential, $\Delta\phi=\sigma d/\epsO$, is called the voltage difference, and $\phi$ is measured in volts. When we say a pair of plates is charged to a certain voltage, what we mean is that the difference in electrical potential of the two plates is so-and-so many volts. 
For a capacitor made of two parallel plates carrying a surface charge $\pm\sigma$, the voltage, or difference in potential, of the pair of plates is $\sigma d/\epsO$. |
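The capacitor arithmetic is a one-line calculation, shown here as a Python sketch (the function name and the sample numbers are illustrative, not from the text):

```python
eps0 = 8.854e-12  # permittivity of free space, C^2/(N*m^2)


def capacitor_voltage(sigma, d):
    """Voltage across a parallel-plate capacitor.

    The field between plates carrying surface charge +/- sigma is
    E = sigma/eps0, and since it is constant, the potential difference
    over the separation d is simply E*d = sigma*d/eps0.
    """
    return sigma * d / eps0


# For instance, sigma = 1e-6 C/m^2 on plates 1 mm apart gives a
# voltage difference of roughly a hundred volts.
V = capacitor_voltage(1e-6, 1e-3)
```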
|
1 | 15 | The Special Theory of Relativity | 1 | The principle of relativity | For over 200 years the equations of motion enunciated by Newton were believed to describe nature correctly, and the first time that an error in these laws was discovered, the way to correct it was also discovered. Both the error and its correction were discovered by Einstein in 1905. Newton’s Second Law, which we have expressed by the equation \begin{equation*} F=d(mv)/dt, \end{equation*} was stated with the tacit assumption that $m$ is a constant, but we now know that this is not true, and that the mass of a body increases with velocity. In Einstein’s corrected formula $m$ has the value \begin{equation} \label{Eq:I:15:1} m=\frac{m_0}{\sqrt{1-v^2/c^2}}, \end{equation} where the “rest mass” $m_0$ represents the mass of a body that is not moving and $c$ is the speed of light, which is about $3\times10^5$ $\text{km}\cdot\text{sec}^{-1}$ or about $186{,}000$ $\text{mi}\cdot\text{sec}^{-1}.$ For those who want to learn just enough about it so they can solve problems, that is all there is to the theory of relativity—it just changes Newton’s laws by introducing a correction factor to the mass. From the formula itself it is easy to see that this mass increase is very small in ordinary circumstances. If the velocity is even as great as that of a satellite, which goes around the earth at $5$ mi/sec, then $v/c = 5/186{,}000$: putting this value into the formula shows that the correction to the mass is only one part in two to three billion, which is nearly impossible to observe. Actually, the correctness of the formula has been amply confirmed by the observation of many kinds of particles, moving at speeds ranging up to practically the speed of light. However, because the effect is ordinarily so small, it seems remarkable that it was discovered theoretically before it was discovered experimentally. 
Empirically, at a sufficiently high velocity, the effect is very large, but it was not discovered that way. Therefore it is interesting to see how a law that involved so delicate a modification (at the time when it was first discovered) was brought to light by a combination of experiments and physical reasoning. Contributions to the discovery were made by a number of people, the final result of whose work was Einstein’s discovery. There are really two Einstein theories of relativity. This chapter is concerned with the Special Theory of Relativity, which dates from 1905. In 1915 Einstein published an additional theory, called the General Theory of Relativity. This latter theory deals with the extension of the Special Theory to the case of the law of gravitation; we shall not discuss the General Theory here. The principle of relativity was first stated by Newton, in one of his corollaries to the laws of motion: “The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line.” This means, for example, that if a space ship is drifting along at a uniform speed, all experiments performed in the space ship and all the phenomena in the space ship will appear the same as if the ship were not moving, provided, of course, that one does not look outside. That is the meaning of the principle of relativity. This is a simple enough idea, and the only question is whether it is true that in all experiments performed inside a moving system the laws of physics will appear the same as they would if the system were standing still. Let us first investigate whether Newton’s laws appear the same in the moving system. Suppose that Moe is moving in the $x$-direction with a uniform velocity $u$, and he measures the position of a certain point, shown in Fig. 15–1. He designates the “$x$-distance” of the point in his coordinate system as $x'$. 
Joe is at rest, and measures the position of the same point, designating its $x$-coordinate in his system as $x$. The relationship of the coordinates in the two systems is clear from the diagram. After time $t$ Moe’s origin has moved a distance $ut$, and if the two systems originally coincided, \begin{equation} \begin{aligned} x'&=x-ut,\\ y'&=y,\\ z'&=z,\\ t'&=t. \end{aligned} \label{Eq:I:15:2} \end{equation} If we substitute this transformation of coordinates into Newton’s laws we find that these laws transform to the same laws in the primed system; that is, the laws of Newton are of the same form in a moving system as in a stationary system, and therefore it is impossible to tell, by making mechanical experiments, whether the system is moving or not. The principle of relativity has been used in mechanics for a long time. It was employed by various people, in particular Huygens, to obtain the rules for the collision of billiard balls, in much the same way as we used it in Chapter 10 to discuss the conservation of momentum. In the 19th century interest in it was heightened as the result of investigations into the phenomena of electricity, magnetism, and light. A long series of careful studies of these phenomena by many people culminated in Maxwell’s equations of the electromagnetic field, which describe electricity, magnetism, and light in one uniform system. However, the Maxwell equations did not seem to obey the principle of relativity. That is, if we transform Maxwell’s equations by the substitution of equations (15.2), their form does not remain the same; therefore, in a moving space ship the electrical and optical phenomena should be different from those in a stationary ship. Thus one could use these optical phenomena to determine the speed of the ship; in particular, one could determine the absolute speed of the ship by making suitable optical or electrical measurements. 
One of the consequences of Maxwell’s equations is that if there is a disturbance in the field such that light is generated, these electromagnetic waves go out in all directions equally and at the same speed $c$, or $186{,}000$ mi/sec. Another consequence of the equations is that if the source of the disturbance is moving, the light emitted goes through space at the same speed $c$. This is analogous to the case of sound, the speed of sound waves being likewise independent of the motion of the source. This independence of the motion of the source, in the case of light, brings up an interesting problem: Suppose we are riding in a car that is going at a speed $u$, and light from the rear is going past the car with speed $c$. Differentiating the first equation in (15.2) gives \begin{equation*} dx'/dt=dx/dt-u, \end{equation*} which means that according to the Galilean transformation the apparent speed of the passing light, as we measure it in the car, should not be $c$ but should be $c-u$. For instance, if the car is going $100{,}000$ mi/sec, and the light is going $186{,}000$ mi/sec, then apparently the light going past the car should go $86{,}000$ mi/sec. In any case, by measuring the speed of the light going past the car (if the Galilean transformation is correct for light), one could determine the speed of the car. A number of experiments based on this general idea were performed to determine the velocity of the earth, but they all failed—they gave no velocity at all. We shall discuss one of these experiments in detail, to show exactly what was done and what was the matter; something was the matter, of course, something was wrong with the equations of physics. What could it be? |
|
1 | 15 | The Special Theory of Relativity | 2 | The Lorentz transformation | When the failure of the equations of physics in the above case came to light, the first thought that occurred was that the trouble must lie in the new Maxwell equations of electrodynamics, which were only 20 years old at the time. It seemed almost obvious that these equations must be wrong, so the thing to do was to change them in such a way that under the Galilean transformation the principle of relativity would be satisfied. When this was tried, the new terms that had to be put into the equations led to predictions of new electrical phenomena that did not exist at all when tested experimentally, so this attempt had to be abandoned. Then it gradually became apparent that Maxwell’s laws of electrodynamics were correct, and the trouble must be sought elsewhere. In the meantime, H. A. Lorentz noticed a remarkable and curious thing when he made the following substitutions in the Maxwell equations: \begin{equation} \begin{aligned} x'&=\frac{x-ut}{\sqrt{1-u^2/c^2}},\\ y'&=y,\\[2ex] z'&=z,\\ t'&=\frac{t-ux/c^2}{\sqrt{1-u^2/c^2}}, \end{aligned} \label{Eq:I:15:3} \end{equation} namely, Maxwell’s equations remain in the same form when this transformation is applied to them! Equations (15.3) are known as a Lorentz transformation. Einstein, following a suggestion originally made by Poincaré, then proposed that all the physical laws should be of such a kind that they remain unchanged under a Lorentz transformation. In other words, we should change, not the laws of electrodynamics, but the laws of mechanics. How shall we change Newton’s laws so that they will remain unchanged by the Lorentz transformation? If this goal is set, we then have to rewrite Newton’s equations in such a way that the conditions we have imposed are satisfied. As it turned out, the only requirement is that the mass $m$ in Newton’s equations must be replaced by the form shown in Eq. (15.1). 
When this change is made, Newton’s laws and the laws of electrodynamics will harmonize. Then if we use the Lorentz transformation in comparing Moe’s measurements with Joe’s, we shall never be able to detect whether either is moving, because the form of all the equations will be the same in both coordinate systems! It is interesting to discuss what it means that we replace the old transformation between the coordinates and time with a new one, because the old one (Galilean) seems to be self-evident, and the new one (Lorentz) looks peculiar. We wish to know whether it is logically and experimentally possible that the new, and not the old, transformation can be correct. To find that out, it is not enough to study the laws of mechanics but, as Einstein did, we too must analyze our ideas of space and time in order to understand this transformation. We shall have to discuss these ideas and their implications for mechanics at some length, so we say in advance that the effort will be justified, since the results agree with experiment. |
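One can verify numerically that the Lorentz transformation, Eq. (15.3), leaves the speed of light unchanged, which the Galilean transformation does not. A small Python sketch, using units in which $c = 1$ (the function name is a choice of the sketch):

```python
import math


def lorentz(x, t, u, c=1.0):
    """Transform an event (x, t) into the frame moving with velocity u, Eq. (15.3)."""
    gamma = 1.0 / math.sqrt(1.0 - (u / c) ** 2)
    xp = gamma * (x - u * t)
    tp = gamma * (t - u * x / c**2)
    return xp, tp


# A light signal leaves the origin; after time t it is at x = c*t.
# In the moving frame the transformed event again satisfies x' = c*t',
# not x' = (c - u)*t' as the Galilean transformation would give.
x, t = 1.0, 1.0              # with c = 1, this event lies on the light ray
xp, tp = lorentz(x, t, u=0.5)
```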
|
1 | 15 | The Special Theory of Relativity | 3 | The Michelson-Morley experiment | As mentioned above, attempts were made to determine the absolute velocity of the earth through the hypothetical “ether” that was supposed to pervade all space. The most famous of these experiments is one performed by Michelson and Morley in 1887. It was 18 years later before the negative results of the experiment were finally explained, by Einstein. The Michelson-Morley experiment was performed with an apparatus like that shown schematically in Fig. 15–2. This apparatus is essentially comprised of a light source $A$, a partially silvered glass plate $B$, and two mirrors $C$ and $E$, all mounted on a rigid base. The mirrors are placed at equal distances $L$ from $B$. The plate $B$ splits an oncoming beam of light, and the two resulting beams continue in mutually perpendicular directions to the mirrors, where they are reflected back to $B$. On arriving back at $B$, the two beams are recombined as two superposed beams, $D$ and $F$. If the time taken for the light to go from $B$ to $E$ and back is the same as the time from $B$ to $C$ and back, the emerging beams $D$ and $F$ will be in phase and will reinforce each other, but if the two times differ slightly, the beams will be slightly out of phase and interference will result. If the apparatus is “at rest” in the ether, the times should be precisely equal, but if it is moving toward the right with a velocity $u$, there should be a difference in the times. Let us see why. First, let us calculate the time required for the light to go from $B$ to $E$ and back. Let us say that the time for light to go from plate $B$ to mirror $E$ is $t_1$, and the time for the return is $t_2$. Now, while the light is on its way from $B$ to the mirror, the apparatus moves a distance $ut_1$, so the light must traverse a distance $L+ut_1$, at the speed $c$. 
We can also express this distance as $ct_1$, so we have \begin{equation*} ct_1=L+ut_1,\quad \text{or}\quad t_1=L/(c-u). \end{equation*} (This result is also obvious from the point of view that the velocity of light relative to the apparatus is $c - u$, so the time is the length $L$ divided by $c - u$.) In a like manner, the time $t_2$ can be calculated. During this time the plate $B$ advances a distance $ut_2$, so the return distance of the light is $L - ut_2$. Then we have \begin{equation*} ct_2=L-ut_2,\quad \text{or}\quad t_2=L/(c+u). \end{equation*} Then the total time is \begin{equation*} t_1+t_2=2Lc/(c^2-u^2). \end{equation*} For convenience in later comparison of times we write this as \begin{equation} \label{Eq:I:15:4} t_1+t_2=\frac{2L/c}{1-u^2/c^2}. \end{equation} Our second calculation will be of the time $t_3$ for the light to go from $B$ to the mirror $C$. As before, during time $t_3$ the mirror $C$ moves to the right a distance $ut_3$ to the position $C'$; in the same time, the light travels a distance $ct_3$ along the hypotenuse of a triangle, which is $BC'$. For this right triangle we have \begin{equation*} (ct_3)^2 = L^2 + (ut_3)^2 \end{equation*} or \begin{equation*} L^2 = c^2t_3^2 - u^2t_3^2 = (c^2 - u^2)t_3^2, \end{equation*} from which we get \begin{equation*} t_3=L/\sqrt{c^2-u^2}. \end{equation*} For the return trip from $C'$ the distance is the same, as can be seen from the symmetry of the figure; therefore the return time is also the same, and the total time is $2t_3$. With a little rearrangement of the form we can write \begin{equation} \label{Eq:I:15:5} 2t_3=\frac{2L}{\sqrt{c^2-u^2}}=\frac{2L/c}{\sqrt{1-u^2/c^2}}. \end{equation} We are now able to compare the times taken by the two beams of light. In expressions (15.4) and (15.5) the numerators are identical, and represent the time that would be taken if the apparatus were at rest. In the denominators, the term $u^2/c^2$ will be small, unless $u$ is comparable in size to $c$. 
The denominators represent the modifications in the times caused by the motion of the apparatus. And behold, these modifications are not the same—the time to go to $C$ and back is a little less than the time to $E$ and back, even though the mirrors are equidistant from $B$, and all we have to do is to measure that difference with precision. Here a minor technical point arises—suppose the two lengths $L$ are not exactly equal? In fact, we surely cannot make them exactly equal. In that case we simply turn the apparatus $90$ degrees, so that $BC$ is in the line of motion and $BE$ is perpendicular to the motion. Any small difference in length then becomes unimportant, and what we look for is a shift in the interference fringes when we rotate the apparatus. In carrying out the experiment, Michelson and Morley oriented the apparatus so that the line $BE$ was nearly parallel to the earth’s motion in its orbit (at certain times of the day and night). This orbital speed is about $18$ miles per second, and any “ether drift” should be at least that much at some time of the day or night and at some time during the year. The apparatus was amply sensitive to observe such an effect, but no time difference was found—the velocity of the earth through the ether could not be detected. The result of the experiment was null. The result of the Michelson-Morley experiment was very puzzling and most disturbing. The first fruitful idea for finding a way out of the impasse came from Lorentz. He suggested that material bodies contract when they are moving, and that this foreshortening is only in the direction of the motion, and also, that if the length is $L_0$ when a body is at rest, then when it moves with speed $u$ parallel to its length, the new length, which we call $L_\parallel$ ($L$-parallel), is given by \begin{equation} \label{Eq:I:15:6} L_\parallel=L_0\sqrt{1-u^2/c^2}. 
\end{equation} When this modification is applied to the Michelson-Morley interferometer apparatus the distance from $B$ to $C$ does not change, but the distance from $B$ to $E$ is shortened to $L\sqrt{1 - u^2/c^2}$. Therefore Eq. (15.5) is not changed, but the $L$ of Eq. (15.4) must be changed in accordance with Eq. (15.6). When this is done we obtain \begin{equation} \label{Eq:I:15:7} t_1+t_2=\frac{(2L/c)\sqrt{1-u^2/c^2}}{1-u^2/c^2}= \frac{2L/c}{\sqrt{1-u^2/c^2}}. \end{equation} Comparing this result with Eq. (15.5), we see that $t_1 + t_2 = 2t_3$. So if the apparatus shrinks in the manner just described, we have a way of understanding why the Michelson-Morley experiment gives no effect at all. Although the contraction hypothesis successfully accounted for the negative result of the experiment, it was open to the objection that it was invented for the express purpose of explaining away the difficulty, and was too artificial. However, in many other experiments to discover an ether wind, similar difficulties arose, until it appeared that nature was in a “conspiracy” to thwart man by introducing some new phenomenon to undo every phenomenon that he thought would permit a measurement of $u$. It was ultimately recognized, as Poincaré pointed out, that a complete conspiracy is itself a law of nature! Poincaré then proposed that there is such a law of nature, that it is not possible to discover an ether wind by any experiment; that is, there is no way to determine an absolute velocity. |
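The two round-trip times, Eqs. (15.4) and (15.5), and the effect of the contraction hypothesis, Eq. (15.6), can be tabulated numerically. A Python sketch in units with $c = 1$ (the function and its `contracted` flag are inventions of the sketch):

```python
import math


def times(L, u, c=1.0, contracted=False):
    """Round-trip times for the two interferometer arms.

    Returns (t_parallel, t_perpendicular), i.e. Eq. (15.4) and Eq. (15.5).
    With contracted=True the arm along the motion is shortened by the
    factor of Eq. (15.6), which makes the two times equal, Eq. (15.7).
    """
    factor = 1.0 - u**2 / c**2
    L_par = L * math.sqrt(factor) if contracted else L
    t_parallel = (2 * L_par / c) / factor              # B to E and back
    t_perpendicular = (2 * L / c) / math.sqrt(factor)  # B to C and back
    return t_parallel, t_perpendicular


# Without contraction the parallel trip takes longer;
t_par, t_perp = times(L=1.0, u=0.5)
# with contraction the difference disappears, as in the experiment.
t_par_c, t_perp_c = times(L=1.0, u=0.5, contracted=True)
```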
|
1 | 15 | The Special Theory of Relativity | 4 | Transformation of time | In checking out whether the contraction idea is in harmony with the facts in other experiments, it turns out that everything is correct provided that the times are also modified, in the manner expressed in the fourth equation of the set (15.3). That is because the time $2t_3$, calculated for the trip from $B$ to $C$ and back, is not the same when calculated by a man performing the experiment in a moving space ship as when calculated by a stationary observer who is watching the space ship. To the man in the ship the time is simply $2L/c$, but to the other observer it is $(2L/c)/\sqrt{ 1 - u^2/c^2}$ (Eq. 15.5). In other words, when the outsider sees the man in the space ship lighting a cigar, all the actions appear to be slower than normal, while to the man inside, everything moves at a normal rate. So not only must the lengths shorten, but also the time-measuring instruments (“clocks”) must apparently slow down. That is, when the clock in the space ship records $1$ second elapsed, as seen by the man in the ship, it shows $1/\sqrt{1 - u^2/c^2}$ second to the man outside. This slowing of the clocks in a moving system is a very peculiar phenomenon, and is worth an explanation. In order to understand this, we have to watch the machinery of the clock and see what happens when it is moving. Since that is rather difficult, we shall take a very simple kind of clock. The one we choose is rather a silly kind of clock, but it will work in principle: it is a rod (meter stick) with a mirror at each end, and when we start a light signal between the mirrors, the light keeps going up and down, making a click every time it comes down, like a standard ticking clock. We build two such clocks, with exactly the same lengths, and synchronize them by starting them together; then they agree always thereafter, because they are the same in length, and light always travels with speed $c$. 
We give one of these clocks to the man to take along in his space ship, and he mounts the rod perpendicular to the direction of motion of the ship; then the length of the rod will not change. How do we know that perpendicular lengths do not change? The men can agree to make marks on each other’s $y$-meter stick as they pass each other. By symmetry, the two marks must come at the same $y$- and $y'$-coordinates, since otherwise, when they get together to compare results, one mark will be above or below the other, and so we could tell who was really moving. Now let us see what happens to the moving clock. Before the man took it aboard, he agreed that it was a nice, standard clock, and when he goes along in the space ship he will not see anything peculiar. If he did, he would know he was moving—if anything at all changed because of the motion, he could tell he was moving. But the principle of relativity says this is impossible in a uniformly moving system, so nothing has changed. On the other hand, when the external observer looks at the clock going by, he sees that the light, in going from mirror to mirror, is “really” taking a zigzag path, since the rod is moving sidewise all the while. We have already analyzed such a zigzag motion in connection with the Michelson-Morley experiment. If in a given time the rod moves forward a distance proportional to $u$ in Fig. 15–3, the distance the light travels in the same time is proportional to $c$, and the vertical distance is therefore proportional to $\sqrt{c^2 - u^2}$. That is, it takes a longer time for light to go from end to end in the moving clock than in the stationary clock. Therefore the apparent time between clicks is longer for the moving clock, in the same proportion as shown in the hypotenuse of the triangle (that is the source of the square root expressions in our equations). From the figure it is also apparent that the greater $u$ is, the more slowly the moving clock appears to run. 
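The zigzag geometry can be put into numbers directly. In this sketch (an editorial illustration; the rod length and speed are arbitrary) the tick period of the light clock follows from the vertical rate $\sqrt{c^2 - u^2}$, and the ratio of moving to stationary periods is exactly the dilation factor.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tick_period(L, u=0.0):
    """Round-trip time for light in a rod clock of length L, as seen by
    an observer relative to whom the clock moves sideways at speed u.
    The light covers the hypotenuse, so the effective vertical speed
    is sqrt(c^2 - u^2)."""
    return 2 * L / math.sqrt(C**2 - u**2)

L = 1.0  # a meter stick with a mirror at each end
rest = tick_period(L)
moving = tick_period(L, u=0.6 * C)
print(moving / rest)  # equals 1/sqrt(1 - u^2/c^2), here 1/0.8 = 1.25
```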
Not only does this particular kind of clock run more slowly, but if the theory of relativity is correct, any other clock, operating on any principle whatsoever, would also appear to run slower, and in the same proportion—we can say this without further analysis. Why is this so? To answer the above question, suppose we had two other clocks made exactly alike with wheels and gears, or perhaps based on radioactive decay, or something else. Then we adjust these clocks so they both run in precise synchronism with our first clocks. When light goes up and back in the first clocks and announces its arrival with a click, the new models also complete some sort of cycle, which they simultaneously announce by some doubly coincident flash, or bong, or other signal. One of these clocks is taken into the space ship, along with the first kind. Perhaps this clock will not run slower, but will continue to keep the same time as its stationary counterpart, and thus disagree with the other moving clock. Ah no, if that should happen, the man in the ship could use this mismatch between his two clocks to determine the speed of his ship, which we have been supposing is impossible. We need not know anything about the machinery of the new clock that might cause the effect—we simply know that whatever the reason, it will appear to run slow, just like the first one. Now if all moving clocks run slower, if no way of measuring time gives anything but a slower rate, we shall just have to say, in a certain sense, that time itself appears to be slower in a space ship. All the phenomena there—the man’s pulse rate, his thought processes, the time he takes to light a cigar, how long it takes to grow up and get old—all these things must be slowed down in the same proportion, because he cannot tell he is moving. 
The biologists and medical men sometimes say it is not quite certain that the time it takes for a cancer to develop will be longer in a space ship, but from the viewpoint of a modern physicist it is nearly certain; otherwise one could use the rate of cancer development to determine the speed of the ship! A very interesting example of the slowing of time with motion is furnished by muons, which are particles that disintegrate spontaneously after an average lifetime of $2.2\times10^{-6}$ sec. They come to the earth in cosmic rays, and can also be produced artificially in the laboratory. Some of them disintegrate in midair, but the remainder disintegrate only after they encounter a piece of material and stop. It is clear that in its short lifetime a muon cannot travel, even at the speed of light, much more than $600$ meters. But although the muons are created at the top of the atmosphere, some $10$ kilometers up, yet they are actually found in a laboratory down here, in cosmic rays. How can that be? The answer is that different muons move at various speeds, some of which are very close to the speed of light. While from their own point of view they live only about $2$ $\mu$sec, from our point of view they live considerably longer—enough longer that they may reach the earth. The factor by which the time is increased has already been given as $1/\sqrt{1 - u^2/c^2}$. The average life has been measured quite accurately for muons of different velocities, and the values agree closely with the formula. We do not know why the muon disintegrates or what its machinery is, but we do know its behavior satisfies the principle of relativity. That is the utility of the principle of relativity—it permits us to make predictions, even about things that otherwise we do not know much about. 
For example, before we have any idea at all about what makes the muon disintegrate, we can still predict that when it is moving at nine-tenths of the speed of light, the apparent length of time that it lasts is $(2.2\times10^{-6})/\sqrt{1 - 9^2/10^2}$ sec; and our prediction works—that is the good thing about it. |
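That prediction is simple enough to reproduce. The sketch below (added by the editor; the speed $0.9994c$ is a made-up example of a very fast cosmic-ray muon) evaluates the dilated lifetime at nine-tenths of the speed of light, and shows how a fast muon covers far more than the naive $600$ meters.

```python
import math

C = 299_792_458.0   # speed of light, m/s
TAU = 2.2e-6        # mean muon lifetime at rest, sec

def dilated_lifetime(beta, tau=TAU):
    """Mean lifetime seen from the ground for a muon moving at v = beta*c."""
    return tau / math.sqrt(1 - beta**2)

# The text's example: nine-tenths of the speed of light
t_09 = dilated_lifetime(0.9)        # (2.2e-6)/sqrt(1 - 81/100) sec

# A hypothetical very fast muon created 10 km up
beta = 0.9994
distance = beta * C * dilated_lifetime(beta)  # mean distance traveled, m
print(t_09, distance)
```

The mean travel distance comes out around $19$ km, comfortably more than the $10$ km depth of the atmosphere, which is how the muons reach the laboratory.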
|
1 | 15 | The Special Theory of Relativity | 5 | The Lorentz contraction | Now let us return to the Lorentz transformation (15.3) and try to get a better understanding of the relationship between the $(x,y,z,t)$ and the $(x',y',z',t')$ coordinate systems, which we shall call the $S$ and $S'$ systems, or Joe and Moe systems, respectively. We have already noted that the first equation is based on the Lorentz suggestion of contraction along the $x$-direction; how can we prove that a contraction takes place? In the Michelson-Morley experiment, we now appreciate that the transverse arm $BC$ cannot change length, by the principle of relativity; yet the null result of the experiment demands that the times must be equal. So, in order for the experiment to give a null result, the longitudinal arm $BE$ must appear shorter, by the square root $\sqrt{1 - u^2/c^2}$. What does this contraction mean, in terms of measurements made by Joe and Moe? Suppose that Moe, moving with the $S'$ system in the $x$-direction, is measuring the $x'$-coordinate of some point with a meter stick. He lays the stick down $x'$ times, so he thinks the distance is $x'$ meters. From the viewpoint of Joe in the $S$ system, however, Moe is using a foreshortened ruler, so the “real” distance measured is $x'\sqrt{1-u^2/c^2}$ meters. Then if the $S'$ system has travelled a distance $ut$ away from the $S$ system, the $S$ observer would say that the same point, measured in his coordinates, is at a distance $x = x'\sqrt{1-u^2/c^2}+ut$, or \begin{equation*} x'=\frac{x-ut}{\sqrt{1-u^2/c^2}}, \end{equation*} which is the first equation of the Lorentz transformation. |
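The bookkeeping in this argument can be replayed in a few lines. In the sketch below (an editorial illustration with arbitrary numbers), Joe reconstructs a point's position from Moe's foreshortened measurement, and the first Lorentz equation then recovers Moe's reading.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def x_from_moe(x_prime, t, u):
    """Joe's coordinate of a point that Moe, moving at speed u, places
    at x': Moe's foreshortened stick spans x'*sqrt(1 - u^2/c^2) meters,
    plus the distance ut the S' origin has traveled."""
    return x_prime * math.sqrt(1 - u**2 / C**2) + u * t

def lorentz_x_prime(x, t, u):
    """First equation of the Lorentz transformation, Eq. (15.3)."""
    return (x - u * t) / math.sqrt(1 - u**2 / C**2)

u, t, x_prime = 0.8 * C, 1.0, 123.0
x = x_from_moe(x_prime, t, u)
print(lorentz_x_prime(x, t, u))  # recovers Moe's 123 meters (to rounding)
```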
|
1 | 15 | The Special Theory of Relativity | 6 | Simultaneity | In an analogous way, because of the difference in time scales, the denominator expression is introduced into the fourth equation of the Lorentz transformation. The most interesting term in that equation is the $ux/c^2$ in the numerator, because that is quite new and unexpected. Now what does that mean? If we look at the situation carefully we see that events that occur at two separated places at the same time, as seen by Moe in $S'$, do not happen at the same time as viewed by Joe in $S$. If one event occurs at point $x_1$ at time $t_0$ and the other event at $x_2$ and $t_0$ (the same time), we find that the two corresponding times $t_1'$ and $t_2'$ differ by an amount \begin{equation*} t_2'-t_1'=\frac{u(x_1-x_2)/c^2}{\sqrt{1-u^2/c^2}}. \end{equation*} This circumstance is called “failure of simultaneity at a distance,” and to make the idea a little clearer let us consider the following experiment. Suppose that a man moving in a space ship (system $S'$) has placed a clock at each end of the ship and is interested in making sure that the two clocks are in synchronism. How can the clocks be synchronized? There are many ways. One way, involving very little calculation, would be first to locate exactly the midpoint between the clocks. Then from this station we send out a light signal which will go both ways at the same speed and will arrive at both clocks, clearly, at the same time. This simultaneous arrival of the signals can be used to synchronize the clocks. Let us then suppose that the man in $S'$ synchronizes his clocks by this particular method. Let us see whether an observer in system $S$ would agree that the two clocks are synchronous. The man in $S'$ has a right to believe they are, because he does not know that he is moving. 
But the man in $S$ reasons that since the ship is moving forward, the clock in the front end was running away from the light signal, hence the light had to go more than halfway in order to catch up; the rear clock, however, was advancing to meet the light signal, so this distance was shorter. Therefore the signal reached the rear clock first, although the man in $S'$ thought that the signals arrived simultaneously. We thus see that when a man in a space ship thinks the times at two locations are simultaneous, equal values of $t'$ in his coordinate system must correspond to different values of $t$ in the other coordinate system! |
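The size of the effect is easy to evaluate. The sketch below (added for illustration; the $100$-meter separation and the speed of $c/2$ are arbitrary) computes the desynchronization $t_2' - t_1'$ for two events that are simultaneous in $S$.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def desync(u, x1, x2):
    """t2' - t1' for two events at positions x1 and x2 that occur at the
    same time t0 in S, per the formula of Sec. 15-6."""
    return (u * (x1 - x2) / C**2) / math.sqrt(1 - u**2 / C**2)

d = desync(0.5 * C, 0.0, 100.0)  # clocks 100 m apart, ship at half of c
print(d)  # a fraction of a microsecond, and not zero
```

Small as the number is, it is not zero: equal values of $t$ in one frame correspond to unequal values of $t'$ in the other.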
|
1 | 15 | The Special Theory of Relativity | 7 | Four-vectors | Let us see what else we can discover in the Lorentz transformation. It is interesting to note that the transformation between the $x$’s and $t$’s is analogous in form to the transformation of the $x$’s and $y$’s that we studied in Chapter 11 for a rotation of coordinates. We then had \begin{equation} \begin{alignedat}{4} &x'&&=x&&\cos\theta+y&&\sin\theta,\\ &y'&&=y&&\cos\theta-x&&\sin\theta, \end{alignedat} \label{Eq:I:15:8} \end{equation} in which the new $x'$ mixes the old $x$ and $y$, and the new $y'$ also mixes the old $x$ and $y$; similarly, in the Lorentz transformation we find a new $x'$ which is a mixture of $x$ and $t$, and a new $t'$ which is a mixture of $t$ and $x$. So the Lorentz transformation is analogous to a rotation, only it is a “rotation” in space and time, which appears to be a strange concept. A check of the analogy to rotation can be made by calculating the quantity \begin{equation} \label{Eq:I:15:9} x'^2\!+y'^2\!+z'^2\!-c^2t'^2\!=x^2\!+y^2\!+z^2\!-c^2t^2\!. \end{equation} In this equation the first three terms on each side represent, in three-dimensional geometry, the square of the distance between a point and the origin (surface of a sphere) which remains unchanged (invariant) regardless of rotation of the coordinate axes. Similarly, Eq. (15.9) shows that there is a certain combination which includes time, that is invariant to a Lorentz transformation. Thus, the analogy to a rotation is complete, and is of such a kind that vectors, i.e., quantities involving “components” which transform the same way as the coordinates and time, are also useful in connection with relativity. Thus we contemplate an extension of the idea of vectors, which we have so far considered to have only space components, to include a time component. 
That is, we expect that there will be vectors with four components, three of which are like the components of an ordinary vector, and with these will be associated a fourth component, which is the analog of the time part. This concept will be analyzed further in the next chapters, where we shall find that if the ideas of the preceding paragraph are applied to momentum, the transformation gives three space parts that are like ordinary momentum components, and a fourth component, the time part, which is the energy. |
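The invariance claimed in Eq. (15.9) can be verified directly. The sketch below (an editorial addition; the event coordinates and the speed $0.6c$ are arbitrary) transforms an event with the Lorentz equations and evaluates the combination on both sides.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz(x, y, z, t, u):
    """Lorentz transformation along x, Eq. (15.3)."""
    g = 1 / math.sqrt(1 - u**2 / C**2)
    return g * (x - u * t), y, z, g * (t - u * x / C**2)

def interval(x, y, z, t):
    """The invariant combination of Eq. (15.9)."""
    return x**2 + y**2 + z**2 - C**2 * t**2

x, y, z, t = 3.0e8, 4.0e7, 5.0e6, 0.7
xp, yp, zp, tp = lorentz(x, y, z, t, u=0.6 * C)
print(interval(x, y, z, t), interval(xp, yp, zp, tp))  # the same number
```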
|
1 | 15 | The Special Theory of Relativity | 8 | Relativistic dynamics | We are now ready to investigate, more generally, what form the laws of mechanics take under the Lorentz transformation. [We have thus far explained how length and time change, but not how we get the modified formula for $m$ (Eq. 15.1). We shall do this in the next chapter.] To see the consequences of Einstein’s modification of $m$ for Newtonian mechanics, we start with the Newtonian law that force is the rate of change of momentum, or \begin{equation*} \FLPF=d(m\FLPv)/dt. \end{equation*} Momentum is still given by $m\FLPv$, but when we use the new $m$ this becomes \begin{equation} \label{Eq:I:15:10} \FLPp=m\FLPv=\frac{m_0\FLPv}{\sqrt{1-v^2/c^2}}. \end{equation} This is Einstein’s modification of Newton’s laws. Under this modification, if action and reaction are still equal (which they may not be in detail, but are in the long run), there will be conservation of momentum in the same way as before, but the quantity that is being conserved is not the old $m\FLPv$ with its constant mass, but instead is the quantity shown in (15.10), which has the modified mass. When this change is made in the formula for momentum, conservation of momentum still works. Now let us see how momentum varies with speed. In Newtonian mechanics it is proportional to the speed and, according to (15.10), over a considerable range of speed, but small compared with $c$, it is nearly the same in relativistic mechanics, because the square-root expression differs only slightly from $1$. But when $v$ is almost equal to $c$, the square-root expression approaches zero, and the momentum therefore goes toward infinity. What happens if a constant force acts on a body for a long time? In Newtonian mechanics the body keeps picking up speed until it goes faster than light. But this is impossible in relativistic mechanics. 
In relativity, the body keeps picking up, not speed, but momentum, which can continually increase because the mass is increasing. After a while there is practically no acceleration in the sense of a change of velocity, but the momentum continues to increase. Of course, whenever a force produces very little change in the velocity of a body, we say that the body has a great deal of inertia, and that is exactly what our formula for relativistic mass says (see Eq. 15.10)—it says that the inertia is very great when $v$ is nearly as great as $c$. As an example of this effect, to deflect the high-speed electrons in the synchrotron that is used here at Caltech, we need a magnetic field that is $2000$ times stronger than would be expected on the basis of Newton’s laws. In other words, the mass of the electrons in the synchrotron is $2000$ times as great as their normal mass, and is as great as that of a proton! That $m$ should be $2000$ times $m_0$ means that $1 - v^2/c^2$ must be $1/4{,}000{,}000$, and that means that $v$ differs from $c$ by one part in $8{,}000{,}000$, so the electrons are getting pretty close to the speed of light. If the electrons and light were both to start from the synchrotron (estimated as $700$ feet away) and rush out to Bridge Lab, which would arrive first? The light, of course, because light always travels faster. How much earlier? That is too hard to tell—instead, we tell by what distance the light is ahead: it is about $1/1000$ of an inch, or $\tfrac{1}{4}$ the thickness of a piece of paper! When the electrons are going that fast their masses are enormous, but their speed cannot exceed the speed of light. Now let us look at some further consequences of relativistic change of mass. Consider the motion of the molecules in a small tank of gas. When the gas is heated, the speed of the molecules is increased, and therefore the mass is also increased and the gas is heavier. 
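The synchrotron numbers quoted above follow from the mass formula alone. A quick check (an editor's sketch, not from the lecture):

```python
import math

def beta_for_mass_ratio(ratio):
    """Speed, as a fraction of c, at which m/m0 equals `ratio`,
    from m = m0/sqrt(1 - v^2/c^2): then 1 - v^2/c^2 = 1/ratio^2."""
    return math.sqrt(1 - 1 / ratio**2)

beta = beta_for_mass_ratio(2000)
print(1 - beta)  # the shortfall from c: about one part in 8,000,000
```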
An approximate formula to express the increase of mass, for the case when the velocity is small, can be found by expanding $m_0/\sqrt{1 - v^2/c^2} = m_0(1 - v^2/c^2)^{-1/2}$ in a power series, using the binomial theorem. We get \begin{equation*} m_0(1 - v^2/c^2)^{-1/2}=m_0(1+\tfrac{1}{2}v^2/c^2+ \tfrac{3}{8}v^4/c^4+\dotsb). \end{equation*} We see clearly from the formula that the series converges rapidly when $v$ is small, and the terms after the first two or three are negligible. So we can write \begin{equation} \label{Eq:I:15:11} m\cong m_0+\tfrac{1}{2}m_0v^2\biggl(\frac{1}{c^2}\biggr) \end{equation} in which the second term on the right expresses the increase of mass due to molecular velocity. When the temperature increases the $v^2$ increases proportionately, so we can say that the increase in mass is proportional to the increase in temperature. But since $\tfrac{1}{2}m_0v^2$ is the kinetic energy in the old-fashioned Newtonian sense, we can also say that the increase in mass of all this body of gas is equal to the increase in kinetic energy divided by $c^2$, or $\Delta m = \Delta(\text{K.E.})/c^2$. |
|
1 | 15 | The Special Theory of Relativity | 9 | Equivalence of mass and energy | The above observation led Einstein to the suggestion that the mass of a body can be expressed more simply than by the formula (15.1), if we say that the mass is equal to the total energy content divided by $c^2$. If Eq. (15.11) is multiplied by $c^2$ the result is \begin{equation} \label{Eq:I:15:12} mc^2=m_0c^2+\tfrac{1}{2}m_0v^2+\dotsb \end{equation} Here, the term on the left expresses the total energy of a body, and we recognize the last term as the ordinary kinetic energy. Einstein interpreted the large constant term, $m_0c^2$, to be part of the total energy of the body, an intrinsic energy known as the “rest energy.” Let us follow out the consequences of assuming, with Einstein, that the energy of a body always equals $mc^2$. As an interesting result, we shall find the formula (15.1) for the variation of mass with speed, which we have merely assumed up to now. We start with the body at rest, when its energy is $m_0c^2$. Then we apply a force to the body, which starts it moving and gives it kinetic energy; therefore, since the energy has increased, the mass has increased—this is implicit in the original assumption. So long as the force continues, the energy and the mass both continue to increase. We have already seen (Chapter 13) that the rate of change of energy with time equals the force times the velocity, or \begin{equation} \label{Eq:I:15:13} \ddt{E}{t}=\FLPF\cdot \FLPv. \end{equation} We also have (Chapter 9, Eq. 9.1) that $F = d(mv)/dt$. When these relations are put together with the definition of $E$, Eq. (15.13) becomes \begin{equation} \label{Eq:I:15:14} \ddt{(mc^2)}{t}=\FLPv\cdot\ddt{(m\FLPv)}{t}. \end{equation} We wish to solve this equation for $m$. To do this we first use the mathematical trick of multiplying both sides by $2m$, which changes the equation to \begin{equation} \label{Eq:I:15:15} c^2(2m)\,\ddt{m}{t}=2m\FLPv\cdot\ddt{(m\FLPv)}{t}. 
\end{equation} We need to get rid of the derivatives, which can be accomplished by integrating both sides. The quantity $(2m)\,dm/dt$ can be recognized as the time derivative of $m^2$, and $(2m\FLPv)\cdot d(m\FLPv)/dt$ is the time derivative of $(mv)^2$. So, Eq. (15.15) is the same as \begin{equation} \label{Eq:I:15:16} c^2\,\ddt{(m^2)}{t}=\ddt{(m^2v^2)}{t}. \end{equation} If the derivatives of two quantities are equal, the quantities themselves differ at most by a constant, say $C$. This permits us to write \begin{equation} \label{Eq:I:15:17} m^2c^2=m^2v^2+C. \end{equation} We need to define the constant $C$ more explicitly. Since Eq. (15.17) must be true for all velocities, we can choose a special case where $v= 0$, and say that in this case the mass is $m_0$. Substituting these values into Eq. (15.17) gives \begin{equation*} m_0^2c^2=0+C. \end{equation*} We can now use this value of $C$ in Eq. (15.17), which becomes \begin{equation} \label{Eq:I:15:18} m^2c^2=m^2v^2+m_0^2c^2. \end{equation} Dividing by $c^2$ and rearranging terms gives \begin{equation} m^2(1-v^2/c^2)=m_0^2,\notag \end{equation} from which we get \begin{equation} \label{Eq:I:15:19} m=m_0/\sqrt{1-v^2/c^2}. \end{equation} This is the formula (15.1), and is exactly what is necessary for the agreement between mass and energy in Eq. (15.12). Ordinarily these energy changes represent extremely slight changes in mass, because most of the time we cannot generate much energy from a given amount of material; but in an atomic bomb of explosive energy equivalent to $20$ kilotons of TNT, for example, it can be shown that the dirt after the explosion is lighter by $1$ gram than the initial mass of the reacting material, because of the energy that was released, i.e., the released energy had a mass of $1$ gram, according to the relationship $\Delta E=\Delta(mc^2)$. 
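The $1$-gram figure can be recovered from the stated yield. In the sketch below (an editorial check; the conventional equivalence of $4.184\times10^{12}$ joules per kiloton of TNT is assumed, since the lecture does not state it), $\Delta m = \Delta E/c^2$ indeed comes out at about a gram.

```python
C = 299_792_458.0          # speed of light, m/s
J_PER_KILOTON = 4.184e12   # conventional TNT equivalent in joules (assumed)

delta_E = 20 * J_PER_KILOTON   # energy released by the 20-kiloton explosion
delta_m = delta_E / C**2       # mass carried away: Delta m = Delta E / c^2
print(delta_m * 1000)          # in grams: roughly 1, as the text states
```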
This theory of equivalence of mass and energy has been beautifully verified by experiments in which matter is annihilated—converted totally to energy: An electron and a positron come together at rest, each with a rest mass $m_0$. When they come together they disintegrate and two gamma rays emerge, each with the measured energy of $m_0c^2$. This experiment furnishes a direct determination of the energy associated with the existence of the rest mass of a particle. |
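The gamma-ray energy in this annihilation experiment is just the electron rest energy $m_0c^2$. A quick evaluation (an editorial sketch using modern values for the constants, which the lecture does not quote):

```python
M_E = 9.1093837e-31   # electron rest mass, kg
C = 299_792_458.0     # speed of light, m/s
EV = 1.6021766e-19    # joules per electron volt

rest_energy_MeV = M_E * C**2 / EV / 1e6
print(rest_energy_MeV)  # about 0.511 MeV per gamma ray
```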
|
1 | 16 | Relativistic Energy and Momentum | 1 | Relativity and the philosophers | In this chapter we shall continue to discuss the principle of relativity of Einstein and Poincaré, as it affects our ideas of physics and other branches of human thought. Poincaré made the following statement of the principle of relativity: “According to the principle of relativity, the laws of physical phenomena must be the same for a fixed observer as for an observer who has a uniform motion of translation relative to him, so that we have not, nor can we possibly have, any means of discerning whether or not we are carried along in such a motion.” When this idea descended upon the world, it caused a great stir among philosophers, particularly the “cocktail-party philosophers,” who say, “Oh, it is very simple: Einstein’s theory says all is relative!” In fact, a surprisingly large number of philosophers, not only those found at cocktail parties (but rather than embarrass them, we shall just call them “cocktail-party philosophers”), will say, “That all is relative is a consequence of Einstein, and it has profound influences on our ideas.” In addition, they say “It has been demonstrated in physics that phenomena depend upon your frame of reference.” We hear that a great deal, but it is difficult to find out what it means. Probably the frames of reference that were originally referred to were the coordinate systems which we use in the analysis of the theory of relativity. So the fact that “things depend upon your frame of reference” is supposed to have had a profound effect on modern thought. One might well wonder why, because, after all, that things depend upon one’s point of view is so simple an idea that it certainly cannot have been necessary to go to all the trouble of the physical relativity theory in order to discover it. 
That what one sees depends upon his frame of reference is certainly known to anybody who walks around, because he sees an approaching pedestrian first from the front and then from the back; there is nothing deeper in most of the philosophy which is said to have come from the theory of relativity than the remark that “A person looks different from the front than from the back.” The old story about the elephant that several blind men describe in different ways is another example, perhaps, of the theory of relativity from the philosopher’s point of view. But certainly there must be deeper things in the theory of relativity than just this simple remark that “A person looks different from the front than from the back.” Of course relativity is deeper than this, because we can make definite predictions with it. It certainly would be rather remarkable if we could predict the behavior of nature from such a simple observation alone. There is another school of philosophers who feel very uncomfortable about the theory of relativity, which asserts that we cannot determine our absolute velocity without looking at something outside, and who would say, “It is obvious that one cannot measure his velocity without looking outside. It is self-evident that it is meaningless to talk about the velocity of a thing without looking outside; the physicists are rather stupid for having thought otherwise, but it has just dawned on them that this is the case. If only we philosophers had realized what the problems were that the physicists had, we could have decided immediately by brainwork that it is impossible to tell how fast one is moving without looking outside, and we could have made an enormous contribution to physics.” These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depths of the problem. 
Our inability to detect absolute motion is a result of experiment and not a result of plain thought, as we can easily illustrate. In the first place, Newton believed that it was true that one could not tell how fast he is going if he is moving with uniform velocity in a straight line. In fact, Newton first stated the principle of relativity, and one quotation made in the last chapter was a statement of Newton’s. Why then did the philosophers not make all this fuss about “all is relative,” or whatever, in Newton’s time? Because it was not until Maxwell’s theory of electrodynamics was developed that there were physical laws that suggested that one could measure his velocity without looking outside; soon it was found experimentally that one could not. Now, is it absolutely, definitely, philosophically necessary that one should not be able to tell how fast he is moving without looking outside? One of the consequences of relativity was the development of a philosophy which said, “You can only define what you can measure! Since it is self-evident that one cannot measure a velocity without seeing what he is measuring it relative to, therefore it is clear that there is no meaning to absolute velocity. The physicists should have realized that they can talk only about what they can measure.” But that is the whole problem: whether or not one can define absolute velocity is the same as the problem of whether or not one can detect in an experiment, without looking outside, whether he is moving. In other words, whether or not a thing is measurable is not something to be decided a priori by thought alone, but something that can be decided only by experiment. Given the fact that the velocity of light is $186{,}000$ mi/sec, one will find few philosophers who will calmly state that it is self-evident that if light goes $186{,}000$ mi/sec inside a car, and the car is going $100{,}000$ mi/sec, that the light also goes $186{,}000$ mi/sec past an observer on the ground. 
That is a shocking fact to them; the very ones who claim it is obvious find, when you give them a specific fact, that it is not obvious. Finally, there is even a philosophy which says that one cannot detect any motion except by looking outside. It is simply not true in physics. True, one cannot perceive a uniform motion in a straight line, but if the whole room were rotating we would certainly know it, for everybody would be thrown to the wall—there would be all kinds of “centrifugal” effects. That the earth is turning on its axis can be determined without looking at the stars, by means of the so-called Foucault pendulum, for example. Therefore it is not true that “all is relative”; it is only uniform velocity that cannot be detected without looking outside. Uniform rotation about a fixed axis can be. When this is told to a philosopher, he is very upset that he did not really understand it, because to him it seems impossible that one should be able to determine rotation about an axis without looking outside. If the philosopher is good enough, after some time he may come back and say, “I understand. We really do not have such a thing as absolute rotation; we are really rotating relative to the stars, you see. And so some influence exerted by the stars on the object must cause the centrifugal force.” Now, for all we know, that is true; we have no way, at the present time, of telling whether there would have been centrifugal force if there were no stars and nebulae around. We have not been able to do the experiment of removing all the nebulae and then measuring our rotation, so we simply do not know. We must admit that the philosopher may be right. 
He comes back, therefore, in delight and says, “It is absolutely necessary that the world ultimately turn out to be this way: absolute rotation means nothing; it is only relative to the nebulae.” Then we say to him, “Now, my friend, is it or is it not obvious that uniform velocity in a straight line, relative to the nebulae should produce no effects inside a car?” Now that the motion is no longer absolute, but is a motion relative to the nebulae, it becomes a mysterious question, and a question that can be answered only by experiment. What, then, are the philosophic influences of the theory of relativity? If we limit ourselves to influences in the sense of what kind of new ideas and suggestions are made to the physicist by the principle of relativity, we could describe some of them as follows. The first discovery is, essentially, that even those ideas which have been held for a very long time and which have been very accurately verified might be wrong. It was a shocking discovery, of course, that Newton’s laws are wrong, after all the years in which they seemed to be accurate. Of course it is clear, not that the experiments were wrong, but that they were done over only a limited range of velocities, so small that the relativistic effects would not have been evident. But nevertheless, we now have a much more humble point of view of our physical laws—everything can be wrong! Secondly, if we have a set of “strange” ideas, such as that time goes slower when one moves, and so forth, whether we like them or do not like them is an irrelevant question. The only relevant question is whether the ideas are consistent with what is found experimentally. In other words, the “strange ideas” need only agree with experiment, and the only reason that we have to discuss the behavior of clocks and so forth is to demonstrate that although the notion of the time dilation is strange, it is consistent with the way we measure time. 
Finally, there is a third suggestion which is a little more technical but which has turned out to be of enormous utility in our study of other physical laws, and that is to look at the symmetry of the laws or, more specifically, to look for the ways in which the laws can be transformed and leave their form the same. When we discussed the theory of vectors, we noted that the fundamental laws of motion are not changed when we rotate the coordinate system, and now we learn that they are not changed when we change the space and time variables in a particular way, given by the Lorentz transformation. So this idea of studying the patterns or operations under which the fundamental laws are not changed has proved to be a very useful one. |
|
1 | 16 | Relativistic Energy and Momentum | 2 | The twin paradox | To continue our discussion of the Lorentz transformation and relativistic effects, we consider a famous so-called “paradox” of Peter and Paul, who are supposed to be twins, born at the same time. When they are old enough to drive a space ship, Paul flies away at very high speed. Because Peter, who is left on the ground, sees Paul going so fast, all of Paul’s clocks appear to go slower, his heart beats go slower, his thoughts go slower, everything goes slower, from Peter’s point of view. Of course, Paul notices nothing unusual, but if he travels around and about for a while and then comes back, he will be younger than Peter, the man on the ground! That is actually right; it is one of the consequences of the theory of relativity which has been clearly demonstrated. Just as the muons last longer when they are moving, so also will Paul last longer when he is moving. This is called a “paradox” only by the people who believe that the principle of relativity means that all motion is relative; they say, “Heh, heh, heh, from the point of view of Paul, can’t we say that Peter was moving and should therefore appear to age more slowly? By symmetry, the only possible result is that both should be the same age when they meet.” But in order for them to come back together and make the comparison, Paul must either stop at the end of the trip and make a comparison of clocks or, more simply, he has to come back, and the one who comes back must be the man who was moving, and he knows this, because he had to turn around. When he turned around, all kinds of unusual things happened in his space ship—the rockets went off, things jammed up against one wall, and so on—while Peter felt nothing. 
So the way to state the rule is to say that the man who has felt the accelerations, who has seen things fall against the walls, and so on, is the one who would be the younger; that is the difference between them in an “absolute” sense, and it is certainly correct. When we discussed the fact that moving muons live longer, we used as an example their straight-line motion in the atmosphere. But we can also make muons in a laboratory and cause them to go in a curve with a magnet, and even under this accelerated motion, they last exactly as much longer as they do when they are moving in a straight line. Although no one has arranged an experiment explicitly so that we can get rid of the paradox, one could compare a muon which is left standing with one that had gone around a complete circle, and it would surely be found that the one that went around the circle lasted longer. Although we have not actually carried out an experiment using a complete circle, it is really not necessary, of course, because everything fits together all right. This may not satisfy those who insist that every single fact be demonstrated directly, but we confidently predict the result of the experiment in which Paul goes in a complete circle.
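The muon comparison can be put in numbers. Here is a small Python sketch; the function name and the sample speed $0.994c$ are illustrative choices of ours, not from the text, while the $2.2$-microsecond rest lifetime is the well-known laboratory value for the muon:

```python
import math

def dilated_lifetime(rest_lifetime_s, speed_fraction_of_c):
    """Lifetime of a moving clock as seen by a 'stationary' observer:
    the rest lifetime stretched by 1/sqrt(1 - v^2/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c**2)
    return gamma * rest_lifetime_s

# A muon at rest lives about 2.2 microseconds; at 0.994c it appears
# to live roughly nine times longer.
tau0 = 2.2e-6
tau = dilated_lifetime(tau0, 0.994)
print(tau / tau0)
```

At zero speed the factor is $1$, and it grows without bound as the speed approaches $c$, which is why the circling muon outlasts the one left standing.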
|
16–3 Transformation of velocities

The main difference between the relativity of Einstein and the relativity of Newton is that the laws of transformation connecting the coordinates and times between relatively moving systems are different. The correct transformation law, that of Lorentz, is \begin{equation} \begin{aligned} x'&=\frac{x-ut}{\sqrt{1-u^2/c^2}},\\ y'&=y,\\[2ex] z'&=z,\\ t'&=\frac{t-ux/c^2}{\sqrt{1-u^2/c^2}}. \end{aligned} \label{Eq:I:16:1} \end{equation} These equations correspond to the relatively simple case in which the relative motion of the two observers is along their common $x$-axes. Of course other directions of motion are possible, but the most general Lorentz transformation is rather complicated, with all four quantities mixed up together. We shall continue to use this simpler form, since it contains all the essential features of relativity. Let us now discuss more of the consequences of this transformation. First, it is interesting to solve these equations in reverse. That is, here is a set of linear equations, four equations with four unknowns, and they can be solved in reverse, for $x,y,z,t$ in terms of $x',y',z',t'$. The result is very interesting, since it tells us how a system of coordinates “at rest” looks from the point of view of one that is “moving.” Of course, since the motions are relative and of uniform velocity, the man who is “moving” can say, if he wishes, that it is really the other fellow who is moving and he himself who is at rest. And since he is moving in the opposite direction, he should get the same transformation, but with the opposite sign of velocity. That is precisely what we find by manipulation, so that is consistent. If it did not come out that way, we would have real cause to worry! \begin{equation} \begin{aligned} x&=\frac{x'+ut'}{\sqrt{1-u^2/c^2}},\\ y&=y',\\[2ex] z&=z',\\ t&=\frac{t'+ux'/c^2}{\sqrt{1-u^2/c^2}}.
\end{aligned} \label{Eq:I:16:2} \end{equation} Next we discuss the interesting problem of the addition of velocities in relativity. We recall that one of the original puzzles was that light travels at $186{,}000$ mi/sec in all systems, even when they are in relative motion. This is a special case of the more general problem exemplified by the following. Suppose that an object inside a space ship is going at $100{,}000$ mi/sec and the space ship itself is going at $100{,}000$ mi/sec; how fast is the object inside the space ship moving from the point of view of an observer outside? We might want to say $200{,}000$ mi/sec, which is faster than the speed of light. This is very unnerving, because it is not supposed to be going faster than the speed of light! The general problem is as follows. Let us suppose that the object inside the ship, from the point of view of the man inside, is moving with velocity $v$, and that the space ship itself has a velocity $u$ with respect to the ground. We want to know with what velocity $v_x$ this object is moving from the point of view of the man on the ground. This is, of course, still but a special case in which the motion is in the $x$-direction. There will also be a transformation for velocities in the $y$-direction, or for any angle; these can be worked out as needed. Inside the space ship the velocity is $v_{x'}$, which means that the displacement $x'$ is equal to the velocity times the time: \begin{equation} \label{Eq:I:16:3} x'=v_{x'}t'. \end{equation} Now we have only to calculate what the position and time are from the point of view of the outside observer for an object which has the relation (16.2) between $x'$ and $t'$. So we simply substitute (16.3) into (16.2), and obtain \begin{equation} \label{Eq:I:16:4} x=\frac{v_{x'}t'+ut'}{\sqrt{1-u^2/c^2}}. \end{equation} But here we find $x$ expressed in terms of $t'$. 
In order to get the velocity as seen by the man on the outside, we must divide his distance by his time, not by the other man’s time! So we must also calculate the time as seen from the outside, which is \begin{equation} \label{Eq:I:16:5} t=\frac{t'+u(v_{x'}t')/c^2}{\sqrt{1-u^2/c^2}}. \end{equation} Now we must find the ratio of $x$ to $t$, which is \begin{equation} \label{Eq:I:16:6} v_x=\frac{x}{t}=\frac{u+v_{x'}}{1+uv_{x'}/c^2}, \end{equation} the square roots having cancelled. This is the law that we seek: the resultant velocity, the “summing” of two velocities, is not just the algebraic sum of two velocities (we know that it cannot be or we get in trouble), but is “corrected” by $1+uv/c^2$. Now let us see what happens. Suppose that you are moving inside the space ship at half the speed of light, and that the space ship itself is going at half the speed of light. Thus $u$ is $\tfrac{1}{2}c$ and $v$ is $\tfrac{1}{2}c$, but in the denominator $uv/c^2$ is one-fourth, so that \begin{equation*} v=\frac{\tfrac{1}{2}c+\tfrac{1}{2}c}{1+\tfrac{1}{4}}=\frac{4c}{5}. \end{equation*} So, in relativity, “half” and “half” does not make “one,” it makes only “$4/5$.” Of course low velocities can be added quite easily in the familiar way, because so long as the velocities are small compared with the speed of light we can forget about the $(1 + uv/c^2)$ factor; but things are quite different and quite interesting at high velocity. Let us take a limiting case. Just for fun, suppose that inside the space ship the man was observing light itself. In other words, $v = c$, and yet the space ship is moving. How will it look to the man on the ground? The answer will be \begin{equation*} v=\frac{u+c}{1+uc/c^2}=c\,\frac{u+c}{u+c}=c. \end{equation*} Therefore, if something is moving at the speed of light inside the ship, it will appear to be moving at the speed of light from the point of view of the man on the ground too! 
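Both of these worked examples are easy to check numerically. A minimal Python sketch, in units where $c = 1$ (the function name is ours):

```python
def add_velocities(u, v, c=1.0):
    """Relativistic composition of parallel velocities, Eq. (16.6)."""
    return (u + v) / (1.0 + u * v / c**2)

# "Half" and "half" make only 4/5:
print(add_velocities(0.5, 0.5))   # 0.8, i.e. 4c/5

# Light inside the ship still moves at c for the man on the ground,
# whatever the ship's speed u:
print(add_velocities(0.9, 1.0))   # 1.0
```

For small $u$ and $v$ the denominator is nearly $1$ and the formula reduces to the ordinary sum, as the text says.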
This is good, for it is, in fact, what the Einstein theory of relativity was designed to do in the first place—so it had better work! Of course, there are cases in which the motion is not in the direction of the uniform translation. For example, there may be an object inside the ship which is just moving “upward” with the velocity $v_{y'}$ with respect to the ship, and the ship is moving “horizontally.” Now, we simply go through the same thing, only using $y$’s instead of $x$’s, with the result \begin{equation} y=y'=v_{y'}t',\notag \end{equation} so that if $v_{x'} = 0$, \begin{equation} \label{Eq:I:16:7} v_y=\frac{y}{t}=v_{y'}\sqrt{1-u^2/c^2}. \end{equation} Thus a sidewise velocity is no longer $v_{y'}$, but $v_{y'}\sqrt{1 - u^2/c^2}$. We found this result by substituting and combining the transformation equations, but we can also see the result directly from the principle of relativity for the following reason (it is always good to look again to see whether we can see the reason). We have already (Fig. 15–3) seen how a possible clock might work when it is moving; the light appears to travel at an angle at the speed $c$ in the fixed system, while it simply goes vertically with the same speed in the moving system. We found that the vertical component of the velocity in the fixed system is less than that of light by the factor $\sqrt{1 - u^2/c^2}$ (see Eq. 15.3). But now suppose that we let a material particle go back and forth in this same “clock,” but at some integral fraction $1/n$ of the speed of light (Fig. 16–1). Then when the particle has gone back and forth once, the light will have gone exactly $n$ times. That is, each “click” of the “particle” clock will coincide with each $n$th “click” of the light clock. This fact must still be true when the whole system is moving, because the physical phenomenon of coincidence will be a coincidence in any frame. 
Therefore, since the speed $c_y$ is less than the speed of light, the speed $v_y$ of the particle must be slower than the corresponding speed by the same square-root ratio! That is why the square root appears in any vertical velocity.
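Equation (16.7) can likewise be checked directly against the transformation equations, as in this Python sketch (names and sample speeds are illustrative; $c = 1$):

```python
import math

def transverse_velocity(vy_prime, u, c=1.0):
    """Eq. (16.7): sidewise velocity seen from the ground when the ship
    moves horizontally at u and the object moves straight 'up' inside
    the ship (v_x' = 0)."""
    return vy_prime * math.sqrt(1.0 - (u / c)**2)

# Cross-check from Eqs. (16.2) with x' = 0:
u, vy_p, t_p = 0.6, 0.3, 1.0
t = t_p / math.sqrt(1.0 - u**2)   # the outside observer's time
y = vy_p * t_p                    # y = y'
assert abs(y / t - transverse_velocity(vy_p, u)) < 1e-12
```

The check works because with $v_{x'} = 0$ the displacement $y$ is unchanged while the time is stretched, so dividing the one by the other supplies exactly the square-root factor.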
|
16–4 Relativistic mass

We learned in the last chapter that the mass of an object increases with velocity, but no demonstration of this was given, in the sense that we made no arguments analogous to those about the way clocks have to behave. However, we can show that, as a consequence of relativity plus a few other reasonable assumptions, the mass must vary in this way. (We have to say “a few other assumptions” because we cannot prove anything unless we have some laws which we assume to be true, if we expect to make meaningful deductions.) To avoid the need to study the transformation laws of force, we shall analyze a collision, where we need know nothing about the laws of force, except that we shall assume the conservation of momentum and energy. Also, we shall assume that the momentum of a particle which is moving is a vector and is always directed in the direction of the velocity. However, we shall not assume that the momentum is a constant times the velocity, as Newton did, but only that it is some function of velocity. We thus write the momentum vector as a certain coefficient times the vector velocity: \begin{equation} \label{Eq:I:16:8} \FLPp=m_v\FLPv. \end{equation} We put a subscript $v$ on the coefficient to remind us that it is a function of velocity, and we shall agree to call this coefficient $m_v$ the “mass.” Of course, when the velocity is small, it is the same mass that we would measure in the slow-moving experiments that we are used to. Now we shall try to demonstrate that the formula for $m_v$ must be $m_0/\sqrt{1-v^2/c^2}$, by arguing from the principle of relativity that the laws of physics must be the same in every coordinate system. Suppose that we have two particles, like two protons, that are absolutely equal, and they are moving toward each other with exactly equal velocities. Their total momentum is zero. Now what can happen?
After the collision, their directions of motion must be exactly opposite to each other, because if they are not exactly opposite, there will be a nonzero total vector momentum, and momentum would not have been conserved. Also they must have the same speeds, since they are exactly similar objects; in fact, they must have the same speed they started with, since we suppose that the energy is conserved in these collisions. So the diagram of an elastic collision, a reversible collision, will look like Fig. 16–2(a): all the arrows are the same length, all the speeds are equal. We shall suppose that such collisions can always be arranged, that any angle $\theta$ can occur, and that any speed could be used in such a collision. Next, we notice that this same collision can be viewed differently by turning the axes, and just for convenience we shall turn the axes, so that the horizontal splits it evenly, as in Fig. 16–2(b). It is the same collision redrawn, only with the axes turned. Now here is the real trick: let us look at this collision from the point of view of someone riding along in a car that is moving with a speed equal to the horizontal component of the velocity of one particle. Then how does the collision look? It looks as though particle $1$ is just going straight up, because it has lost its horizontal component, and it comes straight down again, also because it does not have that component. That is, the collision appears as shown in Fig. 16–3(a). Particle $2$, however, was going the other way, and as we ride past it appears to fly by at some terrific speed and at a smaller angle, but we can appreciate that the angles before and after the collision are the same. Let us denote by $u$ the horizontal component of the velocity of particle $2$, and by $w$ the vertical velocity of particle $1$. Now the question is, what is the vertical velocity $u\tan\alpha$? 
If we knew that, we could get the correct expression for the momentum, using the law of conservation of momentum in the vertical direction. Clearly, the horizontal component of the momentum is conserved: it is the same before and after the collision for both particles, and is zero for particle $1$. So we need to use the conservation law only for the upward velocity $u\tan\alpha$. But we can get the upward velocity, simply by looking at the same collision going the other way! If we look at the collision of Fig. 16–3(a) from a car moving to the left with speed $u$, we see the same collision, except “turned over,” as shown in Fig. 16–3(b). Now particle $2$ is the one that goes up and down with speed $w$, and particle $1$ has picked up the horizontal speed $u$. Of course, now we know what the velocity $u\tan\alpha$ is: it is $w\sqrt{1 - u^2/c^2}$ (see Eq. 16.7). We know that the change in the vertical momentum of the vertically moving particle is \begin{equation*} \Delta p=2m_ww \end{equation*} ($2$, because it moves up and back down). The obliquely moving particle has a certain velocity $v$ whose components we have found to be $u$ and $w\sqrt{1 - u^2/c^2}$, and whose mass is $m_v$. The change in vertical momentum of this particle is therefore $\Delta p' = 2m_v w\sqrt{1 - u^2/c^2}$ because, in accordance with our assumed law (16.8), the momentum component is always the mass corresponding to the magnitude of the velocity times the component of the velocity in the direction of interest. Thus in order for the total momentum to be zero the vertical momenta must cancel and the ratio of the mass moving with speed $v$ and the mass moving with speed $w$ must therefore be \begin{equation} \label{Eq:I:16:9} \frac{m_w}{m_v}=\sqrt{1-u^2/c^2}. \end{equation} Let us take the limiting case that $w$ is infinitesimal. If $w$ is very tiny indeed, it is clear that $v$ and $u$ are practically equal. In this case, $m_w \to m_0$ and $m_v \to m_u$. 
The grand result is \begin{equation} \label{Eq:I:16:10} m_u=\frac{m_0}{\sqrt{1-u^2/c^2}}. \end{equation} It is an interesting exercise now to check whether or not Eq. (16.9) is indeed true for arbitrary values of $w$, assuming that Eq. (16.10) is the right formula for the mass. Note that the velocity $v$ needed in Eq. (16.9) can be calculated from the right-angle triangle: \begin{equation*} v^2=u^2+w^2(1-u^2/c^2). \end{equation*} It will be found to check out automatically, although we used it only in the limit of small $w$. Now, let us accept that momentum is conserved and that the mass depends upon the velocity according to (16.10) and go on to find what else we can conclude. Let us consider what is commonly called an inelastic collision. For simplicity, we shall suppose that two objects of the same kind, moving oppositely with equal speeds $w$, hit each other and stick together, to become some new, stationary object, as shown in Fig. 16–4(a). The mass $m$ of each corresponds to $w$, which, as we know, is $m_0/\sqrt{1 - w^2/c^2}$. If we assume the conservation of momentum and the principle of relativity, we can demonstrate an interesting fact about the mass of the new object which has been formed. We imagine an infinitesimal velocity $u$ at right angles to $w$ (we can do the same with finite values of $u$, but it is easier to understand with an infinitesimal velocity), then look at this same collision as we ride by in an elevator at the velocity $-u$. What we see is shown in Fig. 16–4(b). The composite object has an unknown mass $M$. Now object $1$ moves with an upward component of velocity $u$ and a horizontal component which is practically equal to $w$, and so also does object $2$. After the collision we have the mass $M$ moving upward with velocity $u$, considered very small compared with the speed of light, and also small compared with $w$. Momentum must be conserved, so let us estimate the momentum in the upward direction before and after the collision. 
Before the collision we have $p \approx 2m_w u$, and after the collision the momentum is evidently $p' = M_u u$, but $M_u$ is essentially the same as $M_0$ because $u$ is so small. These momenta must be equal because of the conservation of momentum, and therefore \begin{equation} \label{Eq:I:16:11} M_0 = 2m_w. \end{equation} The mass of the object which is formed when two equal objects collide must be twice the mass of the objects which come together. You might say, “Yes, of course, that is the conservation of mass.” But not “Yes, of course,” so easily, because these masses have been enhanced over the masses that they would be if they were standing still, yet they still contribute, to the total $M$, not the mass they have when standing still, but more. Astonishing as that may seem, in order for the conservation of momentum to work when two objects come together, the mass that they form must be greater than the rest masses of the objects, even though the objects are at rest after the collision!
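The “interesting exercise” of checking Eq. (16.9) for arbitrary $w$, given Eq. (16.10), can be carried out numerically. A Python sketch, with $c = 1$ and sample values of our own choosing:

```python
import math

def mass(m0, speed, c=1.0):
    """Eq. (16.10): the mass of a particle moving at the given speed."""
    return m0 / math.sqrt(1.0 - (speed / c)**2)

# With v^2 = u^2 + w^2 (1 - u^2/c^2), the ratio of Eq. (16.9),
# m_w/m_v = sqrt(1 - u^2/c^2), holds for arbitrary u and w:
m0, u, w = 1.0, 0.6, 0.5
v = math.sqrt(u**2 + w**2 * (1.0 - u**2))
assert abs(mass(m0, w) / mass(m0, v) - math.sqrt(1.0 - u**2)) < 1e-12
```

The check comes out exactly because $1 - v^2 = (1 - u^2)(1 - w^2)$, so the $\sqrt{1 - w^2}$ factors cancel in the ratio, leaving $\sqrt{1 - u^2}$, just as Eq. (16.9) demands.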
|
16–5 Relativistic energy

In the last chapter we demonstrated that as a result of the dependence of the mass on velocity and Newton’s laws, the changes in the kinetic energy of an object resulting from the total work done by the forces on it always come out to be \begin{equation} \label{Eq:I:16:12} \Delta T=(m_u - m_0)c^2=\frac{m_0c^2}{\sqrt{1-u^2/c^2}}-m_0c^2. \end{equation} We even went further, and guessed that the total energy is the total mass times $c^2$. Now we continue this discussion. Suppose that our two equally massive objects that collide can still be “seen” inside $M$. For instance, a proton and a neutron are “stuck together,” but are still moving about inside of $M$. Then, although we might at first expect the mass $M$ to be $2m_0$, we have found that it is not $2m_0$, but $2m_w$. Since $2m_w$ is what is put in, but $2m_0$ are the rest masses of the things inside, the excess mass of the composite object is equal to the kinetic energy brought in. This means, of course, that energy has inertia. In the last chapter we discussed the heating of a gas, and showed that because the gas molecules are moving and moving things are heavier, when we put energy into the gas its molecules move faster and so the gas gets heavier. But in fact the argument is completely general, and our discussion of the inelastic collision shows that the mass is there whether or not it is kinetic energy. In other words, if two particles come together and produce potential or any other form of energy; if the pieces are slowed down by climbing hills, doing work against internal forces, or whatever; then it is still true that the mass is the total energy that has been put in. So we see that the conservation of mass which we have deduced above is equivalent to the conservation of energy, and therefore there is no place in the theory of relativity for strictly inelastic collisions, as there was in Newtonian mechanics.
According to Newtonian mechanics it is all right for two things to collide and so form an object of mass $2m_0$ which is in no way distinct from the one that would result from putting them together slowly. Of course we know from the law of conservation of energy that there is more kinetic energy inside, but that does not affect the mass, according to Newton’s laws. But now we see that this is impossible; because of the kinetic energy involved in the collision, the resulting object will be heavier; therefore, it will be a different object. When we put the objects together gently they make something whose mass is $2m_0$; when we put them together forcefully, they make something whose mass is greater. When the mass is different, we can tell that it is different. So, necessarily, the conservation of energy must go along with the conservation of momentum in the theory of relativity. This has interesting consequences. For example, suppose that we have an object whose mass $M$ is measured, and suppose something happens so that it flies into two equal pieces moving with speed $w$, so that they each have a mass $m_w$. Now suppose that these pieces encounter enough material to slow them up until they stop; then they will have mass $m_0$. How much energy will they have given to the material when they have stopped? Each will give an amount $(m_w - m_0)c^2$, by the theorem that we proved before. This much energy is left in the material in some form, as heat, potential energy, or whatever. Now $2m_w = M$, so the liberated energy is $E=(M - 2m_0)c^2$. This equation was used to estimate how much energy would be liberated under fission in the atomic bomb, for example. (Although the fragments are not exactly equal, they are nearly equal.) The mass of the uranium atom was known—it had been measured ahead of time—and the atoms into which it split, iodine, xenon, and so on, all were of known mass. 
By masses, we do not mean the masses while the atoms are moving, we mean the masses when the atoms are at rest. In other words, both $M$ and $m_0$ are known. So by subtracting the two numbers one can calculate how much energy will be released if $M$ can be made to split in “half.” For this reason poor old Einstein was called the “father” of the atomic bomb in all the newspapers. Of course, all that meant was that he could tell us ahead of time how much energy would be released if we told him what process would occur. The energy that should be liberated when an atom of uranium undergoes fission was estimated about six months before the first direct test, and as soon as the energy was in fact liberated, someone measured it directly (and if Einstein’s formula had not worked, they would have measured it anyway), and the moment they measured it they no longer needed the formula. Of course, we should not belittle Einstein, but rather should criticize the newspapers and many popular descriptions of what causes what in the history of physics and technology. The problem of how to get the thing to occur in an effective and rapid manner is a completely different matter. The result is just as significant in chemistry. For instance, if we were to weigh the carbon dioxide molecule and compare its mass with that of the carbon and the oxygen, we could find out how much energy would be liberated when carbon and oxygen form carbon dioxide. The only trouble here is that the differences in masses are so small that it is technically very difficult to do. Now let us turn to the question of whether we should add $m_0c^2$ to the kinetic energy and say from now on that the total energy of an object is $mc^2$. First, if we can still see the component pieces of rest mass $m_0$ inside $M$, then we could say that some of the mass $M$ of the compound object is the mechanical rest mass of the parts, part of it is kinetic energy of the parts, and part of it is potential energy of the parts. 
But we have discovered, in nature, particles of various kinds which undergo reactions just like the one we have treated above, in which with all the study in the world, we cannot see the parts inside. For instance, when a K-meson disintegrates into two pions it does so according to the law (16.11), but the idea that a K is made out of $2$ $\pi$’s is a useless idea, because it also disintegrates into $3$ $\pi$’s! Therefore we have a new idea: we do not have to know what things are made of inside; we cannot and need not identify, inside a particle, which of the energy is rest energy of the parts into which it is going to disintegrate. It is not convenient and often not possible to separate the total $mc^2$ energy of an object into rest energy of the inside pieces, kinetic energy of the pieces, and potential energy of the pieces; instead, we simply speak of the total energy of the particle. We “shift the origin” of energy by adding a constant $m_0c^2$ to everything, and say that the total energy of a particle is the mass in motion times $c^2$, and when the object is standing still, the energy is the mass at rest times $c^2$. Finally, we find that the velocity $v$, momentum $P$, and total energy $E$ are related in a rather simple way. That the mass in motion at speed $v$ is the mass $m_0$ at rest divided by $\sqrt{1 - v^2/c^2}$, surprisingly enough, is rarely used. Instead, the following relations are easily proved, and turn out to be very useful: \begin{equation} \label{Eq:I:16:13} E^2-P^2c^2=m_0^2c^4 \end{equation} and \begin{equation} \label{Eq:I:16:14} Pc=Ev/c. \end{equation}
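These two relations can be verified for any sample speed. A small Python sketch (the function name and sample numbers are ours; $c = 1$):

```python
import math

def energy_and_momentum(m0, v, c=1.0):
    """Total energy E = m c^2 and momentum P = m v for a particle of
    rest mass m0 at speed v, with m = m0 / sqrt(1 - v^2/c^2)."""
    m = m0 / math.sqrt(1.0 - (v / c)**2)
    return m * c**2, m * v

m0, v, c = 1.0, 0.6, 1.0
E, P = energy_and_momentum(m0, v)
assert abs(E**2 - P**2 * c**2 - m0**2 * c**4) < 1e-12   # Eq. (16.13)
assert abs(P * c - E * v / c) < 1e-12                    # Eq. (16.14)
```

Equation (16.13) is the more remarkable of the two, since the right side does not involve the speed at all: the combination $E^2 - P^2c^2$ is the same in every frame.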
|
17–1 The geometry of space-time

The theory of relativity shows us that the relationships of positions and times as measured in one coordinate system and another are not what we would have expected on the basis of our intuitive ideas. It is very important that we thoroughly understand the relations of space and time implied by the Lorentz transformation, and therefore we shall consider this matter more deeply in this chapter. The Lorentz transformation between the positions and times $(x,y,z,t)$ as measured by an observer “standing still,” and the corresponding coordinates and time $(x',y',z',t')$ measured inside a “moving” space ship, moving with velocity $u$ are \begin{equation} \begin{aligned} x'&=\frac{x-ut}{\sqrt{1-u^2/c^2}},\\ y'&=y,\\[2ex] z'&=z,\\ t'&=\frac{t-ux/c^2}{\sqrt{1-u^2/c^2}}. \end{aligned} \label{Eq:I:17:1} \end{equation} Let us compare these equations with Eq. (11.5), which also relates measurements in two systems, one of which in this instance is rotated relative to the other: \begin{equation} \begin{alignedat}{4} &x'&&=x&&\cos\theta+y&&\sin\theta,\\ &y'&&=y&&\cos\theta-x&&\sin\theta,\\ &z'&&=z&&. \end{alignedat} \label{Eq:I:17:2} \end{equation} In this particular case, Moe and Joe are measuring with axes having an angle $\theta$ between the $x'$- and $x$-axes. In each case, we note that the “primed” quantities are “mixtures” of the “unprimed” ones: the new $x'$ is a mixture of $x$ and $y$, and the new $y'$ is also a mixture of $x$ and $y$. An analogy is useful: When we look at an object, there is an obvious thing we might call the “apparent width,” and another we might call the “depth.” But the two ideas, width and depth, are not fundamental properties of the object, because if we step aside and look at the same thing from a different angle, we get a different width and a different depth, and we may develop some formulas for computing the new ones from the old ones and the angles involved.
Equations (17.2) are these formulas. One might say that a given depth is a kind of “mixture” of all depth and all width. If it were impossible ever to move, and we always saw a given object from the same position, then this whole business would be irrelevant—we would always see the “true” width and the “true” depth, and they would appear to have quite different qualities, because one appears as a subtended optical angle and the other involves some focusing of the eyes or even intuition; they would seem to be very different things and would never get mixed up. It is because we can walk around that we realize that depth and width are, somehow or other, just two different aspects of the same thing. Can we not look at the Lorentz transformations in the same way? Here also we have a mixture—of positions and the time. A difference between a space measurement and a time measurement produces a new space measurement. In other words, in the space measurements of one man there is mixed in a little bit of the time, as seen by the other. Our analogy permits us to generate this idea: The “reality” of an object that we are looking at is somehow greater (speaking crudely and intuitively) than its “width” and its “depth” because they depend upon how we look at it; when we move to a new position, our brain immediately recalculates the width and the depth. But our brain does not immediately recalculate coordinates and time when we move at high speed, because we have had no effective experience of going nearly as fast as light to appreciate the fact that time and space are also of the same nature. It is as though we were always stuck in the position of having to look at just the width of something, not being able to move our heads appreciably one way or the other; if we could, we understand now, we would see some of the other man’s time—we would see “behind,” so to speak, a little bit. 
Thus we shall try to think of objects in a new kind of world, of space and time mixed together, in the same sense that the objects in our ordinary space-world are real, and can be looked at from different directions. We shall then consider that objects occupying space and lasting for a certain length of time occupy a kind of a “blob” in a new kind of world, and that we look at this “blob” from different points of view when we are moving at different velocities. This new world, this geometrical entity in which the “blobs” exist by occupying position and taking up a certain amount of time, is called space-time. A given point $(x,y,z,t)$ in space-time is called an event. Imagine, for example, that we plot the $x$-positions horizontally, $y$ and $z$ in two other directions, both mutually at “right angles” and at “right angles” to the paper (!), and time, vertically. Now, how does a moving particle, say, look on such a diagram? If the particle is standing still, then it has a certain $x$, and as time goes on, it has the same $x$, the same $x$, the same $x$; so its “path” is a line that runs parallel to the $t$-axis (Fig. 17–1 a). On the other hand, if it drifts outward, then as the time goes on $x$ increases (Fig. 17–1 b). So a particle, for example, which starts to drift out and then slows up should have a motion something like that shown in Fig. 17–1(c). A particle, in other words, which is permanent and does not disintegrate is represented by a line in space-time. A particle which disintegrates would be represented by a forked line, because it would turn into two other things which would start from that point. What about light? Light travels at the speed $c$, and that would be represented by a line having a certain fixed slope (Fig. 17–1 d). 
Now according to our new idea, if a given event occurs to a particle, say if it suddenly disintegrates at a certain space-time point into two new ones which follow some new tracks, and this interesting event occurred at a certain value of $x$ and a certain value of $t$, then we would expect that, if this makes any sense, we just have to take a new pair of axes and turn them, and that will give us the new $t$ and the new $x$ in our new system, as shown in Fig. 17–2(a). But this is wrong, because Eq. (17.1) is not exactly the same mathematical transformation as Eq. (17.2). Note, for example, the difference in sign between the two, and the fact that one is written in terms of $\cos\theta$ and $\sin\theta$, while the other is written with algebraic quantities. (Of course, it is not impossible that the algebraic quantities could be written as cosine and sine, but actually they cannot.) But still, the two expressions are very similar. As we shall see, it is not really possible to think of space-time as a real, ordinary geometry because of that difference in sign. In fact, although we shall not emphasize this point, it turns out that a man who is moving has to use a set of axes which are inclined equally to the light ray, using a special kind of projection parallel to the $x'$- and $t'$-axes, for his $x'$ and $t'$, as shown in Fig. 17–2(b). We shall not deal with the geometry, since it does not help much; it is easier to work with the equations.
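The contrast between Eqs. (17.1) and (17.2) can be made concrete numerically: a rotation mixes $x$ and $y$ while keeping $x^2 + y^2$ fixed, whereas the Lorentz transformation mixes $x$ and $t$ and, because of the sign difference, keeps $c^2t^2 - x^2$ fixed instead. A Python sketch, with $c = 1$ and sample numbers of our own:

```python
import math

def rotate(x, y, theta):
    """Eq. (17.2): a rotation mixes x and y but preserves x^2 + y^2."""
    return (x * math.cos(theta) + y * math.sin(theta),
            y * math.cos(theta) - x * math.sin(theta))

def lorentz(x, t, u, c=1.0):
    """Eq. (17.1), x and t only: mixes x and t, but because of the
    sign difference preserves c^2 t^2 - x^2 instead."""
    root = math.sqrt(1.0 - (u / c)**2)
    return (x - u * t) / root, (t - u * x / c**2) / root

x, y, t = 3.0, 4.0, 5.0
xr, yr = rotate(x, y, 0.7)
assert abs((xr**2 + yr**2) - (x**2 + y**2)) < 1e-9

xp, tp = lorentz(x, t, 0.6)
assert abs((tp**2 - xp**2) - (t**2 - x**2)) < 1e-9
```

The second invariant is precisely the "interval" taken up in the next section.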
|
1 | 17 | Space-Time | 2 | Space-time intervals | Although the geometry of space-time is not Euclidean in the ordinary sense, there is a geometry which is very similar, but peculiar in certain respects. If this idea of geometry is right, there ought to be some functions of coordinates and time which are independent of the coordinate system. For example, under ordinary rotations, if we take two points, one at the origin, for simplicity, and the other one somewhere else, both systems would have the same origin, and the distance from here to the other point is the same in both. That is one property that is independent of the particular way of measuring it. The square of the distance is $x^2 + y^2 + z^2$. Now what about space-time? It is not hard to demonstrate that we have here, also, something which stays the same, namely, the combination $c^2t^2 - x^2 - y^2 - z^2$ is the same before and after the transformation: \begin{equation} \label{Eq:I:17:3} c^2t'^2\!-x'^2\!-y'^2\!-z'^2\!=c^2t^2\!-x^2\!-y^2\!-z^2\!. \end{equation} This quantity is therefore something which, like the distance, is “real” in some sense; it is called the interval between the two space-time points, one of which is, in this case, at the origin. (Actually, of course, it is the interval squared, just as $x^2 + y^2 + z^2$ is the distance squared.) We give it a different name because it is in a different geometry, but the interesting thing is only that some signs are reversed and there is a $c$ in it. Let us get rid of the $c$; that is an absurdity if we are going to have a wonderful space with $x$’s and $t$’s that can be interchanged. One of the confusions that could be caused by someone with no experience would be to measure widths, say, by the angle subtended at the eye, and measure depth in a different way, like the strain on the muscles needed to focus them, so that the depths would be measured in feet and the widths in meters. 
Then one would get an enormously complicated mess of equations in making transformations such as (17.2), and would not be able to see the clarity and simplicity of the thing for a very simple technical reason, that the same thing is being measured in two different units. Now in Eqs. (17.1) and (17.3) nature is telling us that time and space are equivalent; time becomes space; they should be measured in the same units. What distance is a “second”? It is easy to figure out from (17.3) what it is. It is $3\times10^8$ meters, the distance that light would go in one second. In other words, if we were to measure all distances and times in the same units, seconds, then our unit of distance would be $3\times10^8$ meters, and the equations would be simpler. Or another way that we could make the units equal is to measure time in meters. What is a meter of time? A meter of time is the time it takes for light to go one meter, and is therefore $1/3\times10^{-8}$ sec, or $3.3$ billionths of a second! We would like, in other words, to put all our equations in a system of units in which $c = 1$. If time and space are measured in the same units, as suggested, then the equations are obviously much simplified. They are \begin{gather} \begin{aligned} x'&=\frac{x-ut}{\sqrt{1-u^2}},\\ y'&=y,\\[1.5ex] z'&=z,\\ t'&=\frac{t-ux}{\sqrt{1-u^2}}. \end{aligned} \label{Eq:I:17:4}\\[2.25ex] t'^2\!-x'^2\!-y'^2\!-z'^2\!=t^2\!-x^2\!-y^2\!-z^2\!. \label{Eq:I:17:5} \end{gather} If we are ever unsure or “frightened” that after we have this system with $c=1$ we shall never be able to get our equations right again, the answer is quite the opposite. It is much easier to remember them without the $c$’s in them, and it is always easy to put the $c$’s back, by looking after the dimensions. 
For instance, in $\sqrt{1 - u^2}$, we know that we cannot subtract a velocity squared, which has units, from the pure number $1$, so we know that we must divide $u^2$ by $c^2$ in order to make that unitless, and that is the way it goes. The difference between space-time and ordinary space, and the character of an interval as related to the distance, is very interesting. According to formula (17.5), if we consider a point which in a given coordinate system had zero time, and only space, then the interval squared would be negative and we would have an imaginary interval, the square root of a negative number. Intervals can be either real or imaginary in the theory. The square of an interval may be either positive or negative, unlike distance, which has a positive square. When an interval is imaginary, we say that the two points have a space-like interval between them (instead of imaginary), because the interval is more like space than like time. On the other hand, if two objects are at the same place in a given coordinate system, but differ only in time, then the square of the time is positive and the distances are zero and the interval squared is positive; this is called a time-like interval. In our diagram of space-time, therefore, we would have a representation something like this: at $45^\circ$ there are two lines (actually, in four dimensions these will be “cones,” called light cones) and points on these lines are all at zero interval from the origin. Where light goes from a given point is always separated from it by a zero interval, as we see from Eq. (17.5). Incidentally, we have just proved that if light travels with speed $c$ in one system, it travels with speed $c$ in another, for if the interval is the same in both systems, i.e., zero in one and zero in the other, then to state that the propagation speed of light is invariant is the same as saying that the interval is zero. |
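The invariance of the interval, Eq. (17.5), under the transformation (17.4) is easy to check numerically. A small sketch (function names invented, $c = 1$), including the zero interval for a light signal:

```python
import math

def boost(event, u):
    # The Lorentz transformation of Eq. (17.4), in units with c = 1
    x, y, z, t = event
    g = 1.0 / math.sqrt(1.0 - u * u)
    return (g * (x - u * t), y, z, g * (t - u * x))

def interval_sq(event):
    # t^2 - x^2 - y^2 - z^2, the invariant of Eq. (17.5)
    x, y, z, t = event
    return t * t - x * x - y * y - z * z

e = (2.0, 1.0, -1.0, 4.0)
assert abs(interval_sq(boost(e, 0.8)) - interval_sq(e)) < 1e-9

# A light signal along x has x = t: zero interval in this frame and in any other,
# which is the statement that light travels at speed c in every system.
light = (3.0, 0.0, 0.0, 3.0)
assert abs(interval_sq(boost(light, 0.8))) < 1e-9
```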
|
1 | 17 | Space-Time | 3 | Past, present, and future | The space-time region surrounding a given space-time point can be separated into three regions, as shown in Fig. 17–3. In one region we have space-like intervals, and in two regions, time-like intervals. Physically, these three regions into which space-time around a given point is divided have an interesting physical relationship to that point: a physical object or a signal can get from a point in region $2$ to the event $O$ by moving along at a speed less than the speed of light. Therefore events in this region can affect the point $O$, can have an influence on it from the past. In fact, of course, an object at $P$ on the negative $t$-axis is precisely in the “past” with respect to $O$; it is the same space-point as $O$, only earlier. What happened there then, affects $O$ now. (Unfortunately, that is the way life is.) Another object at $Q$ can get to $O$ by moving with a certain speed less than $c$, so if this object were in a space ship and moving, it would be, again, the past of the same space-point. That is, in another coordinate system, the axis of time might go through both $O$ and $Q$. So all points of region $2$ are in the “past” of $O$, and anything that happens in this region can affect $O$. Therefore region $2$ is sometimes called the affective past, or affecting past; it is the locus of all events which can affect point $O$ in any way. Region $3$, on the other hand, is a region which we can affect from $O$, we can “hit” things by shooting “bullets” out at speeds less than $c$. So this is the world whose future can be affected by us, and we may call that the affective future. Now the interesting thing about all the rest of space-time, i.e., region $1$, is that we can neither affect it now from $O$, nor can it affect us now at $O$, because nothing can go faster than the speed of light. 
Of course, what happens at $R$ can affect us later; that is, if the sun is exploding “right now,” it takes eight minutes before we know about it, and it cannot possibly affect us before then. What we mean by “right now” is a mysterious thing which we cannot define and we cannot affect, but it can affect us later, or we could have affected it if we had done something far enough in the past. When we look at the star Alpha Centauri, we see it as it was four years ago; we might wonder what it is like “now.” “Now” means at the same time from our special coordinate system. We can only see Alpha Centauri by the light that has come from our past, up to four years ago, but we do not know what it is doing “now”; it will take four years before what it is doing “now” can affect us. Alpha Centauri “now” is an idea or concept of our mind; it is not something that is really definable physically at the moment, because we have to wait to observe it; we cannot even define it right “now.” Furthermore, the “now” depends on the coordinate system. If, for example, Alpha Centauri were moving, an observer there would not agree with us because he would put his axes at an angle, and his “now” would be a different time. We have already talked about the fact that simultaneity is not a unique thing. There are fortune tellers, or people who tell us they can know the future, and there are many wonderful stories about the man who suddenly discovers that he has knowledge about the affective future. Well, there are lots of paradoxes produced by that because if we know something is going to happen, then we can make sure we will avoid it by doing the right thing at the right time, and so on. But actually there is no fortune teller who can even tell us the present! There is no one who can tell us what is really happening right now, at any reasonable distance, because that is unobservable. 
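The division of space-time around $O$ into the three regions can be written as a little classifier. A sketch (the function name and labels are invented; $c = 1$, so distances are in light-years when times are in years):

```python
def region(x, t):
    # Classify an event (x, t) relative to the origin O, in units with c = 1.
    # Region 2 is the affective past, region 3 the affective future,
    # region 1 the part of space-time that can neither affect O nor be affected by it.
    s2 = t * t - x * x          # interval squared between the event and O
    if s2 > 0:                  # time-like separation
        return "affective past" if t < 0 else "affective future"
    if s2 < 0:                  # space-like separation
        return "neither can affect the other"
    return "light cone"

# Alpha Centauri "now" (about 4 light-years away, at t = 0) is space-like from us:
assert region(4.0, 0.0) == "neither can affect the other"
# The light we actually see left it 4 years ago, on our light cone:
assert region(4.0, -4.0) == "light cone"
assert region(0.0, -1.0) == "affective past"
```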
We might ask ourselves this question, which we leave to the student to try to answer: Would any paradox be produced if it were suddenly to become possible to know things that are in the space-like intervals of region $1$? |
|
1 | 17 | Space-Time | 4 | More about four-vectors | Let us now return to our consideration of the analogy of the Lorentz transformation and rotations of the space axes. We have learned the utility of collecting together other quantities which have the same transformation properties as the coordinates, to form what we call vectors, directed lines. In the case of ordinary rotations, there are many quantities that transform the same way as $x$, $y$, and $z$ under rotation: for example, the velocity has three components, an $x$, $y$, and $z$-component; when seen in a different coordinate system, none of the components is the same, instead they are all transformed to new values. But, somehow or other, the velocity “itself” has a greater reality than do any of its particular components, and we represent it by a directed line. We therefore ask: Is it or is it not true that there are quantities which transform, or which are related, in a moving system and in a nonmoving system, in the same way as $x$, $y$, $z$, and $t$? From our experience with vectors, we know that three of the quantities, like $x$, $y$, $z$, would constitute the three components of an ordinary space-vector, but the fourth quantity would look like an ordinary scalar under space rotation, because it does not change so long as we do not go into a moving coordinate system. Is it possible, then, to associate with some of our known “three-vectors” a fourth object, that we could call the “time component,” in such a manner that the four objects together would “rotate” the same way as position and time in space-time? 
We shall now show that there is, indeed, at least one such thing (there are many of them, in fact): the three components of momentum, and the energy as the time component, transform together to make what we call a “four-vector.” In demonstrating this, since it is quite inconvenient to have to write $c$’s everywhere, we shall use the same trick concerning units of the energy, the mass, and the momentum, that we used in Eq. (17.4). Energy and mass, for example, differ only by a factor $c^2$ which is merely a question of units, so we can say energy is the mass. Instead of having to write the $c^2$, we put $E = m$, and then, of course, if there were any trouble we would put in the right amounts of $c$ so that the units would straighten out in the last equation, but not in the intermediate ones. Thus our equations for energy and momentum are \begin{equation} \begin{alignedat}{2} &E&&=m=m_0/\sqrt{1-v^2},\\[1ex] &\FLPp&&=m\FLPv=m_0\FLPv/\sqrt{1-v^2}. \end{alignedat} \label{Eq:I:17:6} \end{equation} Also in these units, we have \begin{equation} \label{Eq:I:17:7} E^2-p^2=m_0^2. \end{equation} For example, if we measure energy in electron volts, what does a mass of $1$ electron volt mean? It means the mass whose rest energy is $1$ electron volt, that is, $m_0c^2$ is one electron volt. For example, the rest mass of an electron is $0.511\times10^6$ eV. Now what would the momentum and energy look like in a new coordinate system? To find out, we shall have to transform Eq. (17.6), which we can do because we know how the velocity transforms. Suppose that, as we measure it, an object has a velocity $v$, but we look upon the same object from the point of view of a space ship which itself is moving with a velocity $u$, and in that system we use a prime to designate the corresponding thing. In order to simplify things at first, we shall take the case that the velocity $v$ is in the direction of $u$. (Later, we can do the more general case.) 
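Before transforming anything, Eqs. (17.6) and (17.7) can be checked numerically. A sketch in units with $c = 1$ (function name invented), using the electron rest mass quoted above:

```python
import math

def energy_momentum(m0, v):
    # Eq. (17.6) in units with c = 1: E = m0/sqrt(1 - v^2), p = m0 v/sqrt(1 - v^2)
    g = 1.0 / math.sqrt(1.0 - v * v)
    return m0 * g, m0 * g * v

m0 = 0.511  # electron rest mass in MeV, so E and p also come out in MeV
E, p = energy_momentum(m0, 0.6)
# Eq. (17.7): E^2 - p^2 = m0^2, whatever the velocity
assert abs(E * E - p * p - m0 * m0) < 1e-9
```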
What is $v'$, the velocity as seen from the space ship? It is the composite velocity, the “difference” between $v$ and $u$. By the law which we worked out before, \begin{equation} \label{Eq:I:17:8} v'=\frac{v-u}{1-uv}. \end{equation} Now let us calculate the new energy $E'$, the energy as the fellow in the space ship would see it. He would use the same rest mass, of course, but he would use $v'$ for the velocity. What we have to do is square $v'$, subtract it from one, take the square root, and take the reciprocal: \begin{equation*} \begin{aligned} v'^2&=\frac{v^2-2uv+u^2}{1-2uv+u^2v^2},\\[1.5ex] 1-v'^2&=\frac{1-2uv+u^2v^2-v^2+2uv-u^2}{1-2uv+u^2v^2},\\[1ex] &=\frac{1-v^2-u^2+u^2v^2}{1-2uv+u^2v^2},\\[1.5ex] &=\frac{(1-v^2)(1-u^2)}{(1-uv)^2}. \end{aligned} \end{equation*} Therefore \begin{equation} \label{Eq:I:17:9} \frac{1}{\sqrt{1-v'^2}}=\frac{1-uv}{\sqrt{1-v^2}\sqrt{1-u^2}}. \end{equation} The energy $E'$ is then simply $m_0$ times the above expression. But we want to express the energy in terms of the unprimed energy and momentum, and we note that
\begin{align*} E'&=\frac{m_0-m_0uv}{\sqrt{1-v^2}\sqrt{1-u^2}}\\[2ex] &=\frac{(m_0/\sqrt{1-v^2})-(m_0v/\sqrt{1-v^2})u}{\sqrt{1-u^2}}, \end{align*} or \begin{equation} \label{Eq:I:17:10} E'=\frac{E-up_x}{\sqrt{1-u^2}}, \end{equation} which we recognize as being exactly of the same form as \begin{equation*} t'=\frac{t-ux}{\sqrt{1-u^2}}. \end{equation*} Next we must find the new momentum $p_x'$. This is just the energy $E'$ times $v'$, and is also simply expressed in terms of $E$ and $p$:
\begin{align*} p_x'=E'v'&=\frac{m_0(1-uv)}{\sqrt{1-v^2}\sqrt{1-u^2}}\cdot \frac{v-u}{(1-uv)}\\[2ex] &=\frac{m_0v-m_0u}{\sqrt{1-v^2}\sqrt{1-u^2}}. \end{align*} Thus \begin{equation} \label{Eq:I:17:11} p_x'=\frac{p_x-uE}{\sqrt{1-u^2}}, \end{equation} which we recognize as being of precisely the same form as \begin{equation*} x'=\frac{x-ut}{\sqrt{1-u^2}}. \end{equation*} Thus the transformations for the new energy and momentum in terms of the old energy and momentum are exactly the same as the transformations for $t'$ in terms of $t$ and $x$, and $x'$ in terms of $x$ and $t$: all we have to do is, every time we see $t$ in (17.4) substitute $E$, and every time we see $x$ substitute $p_x$, and then the equations (17.4) will become the same as Eqs. (17.10) and (17.11). This would imply, if everything works right, an additional rule that $p_y' = p_y$ and that $p_z' = p_z$. To prove this would require our going back and studying the case of motion up and down. Actually, we did study the case of motion up and down in the last chapter. We analyzed a complicated collision and we noticed that, in fact, the transverse momentum is not changed when viewed from a moving system; so we have already verified that $p_y' = p_y$ and $p_z' = p_z$. The complete transformation, then, is \begin{equation} \begin{aligned} p_x'&=\frac{p_x-uE}{\sqrt{1-u^2}},\\ p_y'&=p_y,\\[1ex] p_z'&=p_z,\\ E'&=\frac{E-up_x}{\sqrt{1-u^2}}. \end{aligned} \label{Eq:I:17:12} \end{equation} In these transformations, therefore, we have discovered four quantities which transform like $x$, $y$, $z$, and $t$, and which we call the four-vector momentum. Since the momentum is a four-vector, it can be represented on a space-time diagram of a moving particle as an “arrow” tangent to the path, as shown in Fig. 17–4. 
This arrow has a time component equal to the energy, and its space components represent its three-vector momentum; this arrow is more “real” than either the energy or the momentum, because those just depend on how we look at the diagram. |
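That (17.12) really is the right transformation can be tested numerically: boosting $(E, p_x)$ by (17.12) must agree with composing the velocities by (17.8) and recomputing (17.6) in the new frame. A sketch (function names invented, $c = 1$):

```python
import math

def energy_momentum(m0, v):
    # Eq. (17.6) in units with c = 1
    g = 1.0 / math.sqrt(1.0 - v * v)
    return m0 * g, m0 * g * v

def boost_four_momentum(E, px, u):
    # Eq. (17.12): (E, px) transforms exactly like (t, x)
    g = 1.0 / math.sqrt(1.0 - u * u)
    return g * (E - u * px), g * (px - u * E)

m0, v, u = 2.0, 0.5, 0.3
E, px = energy_momentum(m0, v)
E1, px1 = boost_four_momentum(E, px, u)

# The direct route: compose the velocities by Eq. (17.8), then use Eq. (17.6) again.
v_prime = (v - u) / (1.0 - u * v)
E2, px2 = energy_momentum(m0, v_prime)
assert abs(E1 - E2) < 1e-9 and abs(px1 - px2) < 1e-9
```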
|
1 | 17 | Space-Time | 5 | Four-vector algebra | The notation for four-vectors is different than it is for three-vectors. In the case of three-vectors, if we were to talk about the ordinary three-vector momentum we would write it $\FLPp$. If we wanted to be more specific, we could say it has three components which are, for the axes in question, $p_x$, $p_y$, and $p_z$, or we could simply refer to a general component as $p_i$, and say that $i$ could either be $x$, $y$, or $z$, and that these are the three components; that is, imagine that $i$ is any one of three directions, $x$, $y$, or $z$. The notation that we use for four-vectors is analogous to this: we write $p_\mu$ for the four-vector, and $\mu$ stands for the four possible directions $t$, $x$, $y$, or $z$. We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations. The whole idea of a four-vector, in fact, is an improvement in notation so that the transformations can be remembered easily. $A_\mu$, then, is a general four-vector, but for the special case of momentum, the $p_t$ is identified as the energy, $p_x$ is the momentum in the $x$-direction, $p_y$ is that in the $y$-direction, and $p_z$ is that in the $z$-direction. To add four-vectors, we add the corresponding components. If there is an equation among four-vectors, then the equation is true for each component. For instance, if the law of conservation of three-vector momentum is to be true in particle collisions, i.e., if the sum of the momenta for a large number of interacting or colliding particles is to be a constant, that must mean that the sums of all momenta in the $x$-direction, in the $y$-direction, and in the $z$-direction, for all the particles, must each be constant. This law alone would be impossible in relativity because it is incomplete; it is like talking about only two of the components of a three-vector. 
It is incomplete because if we rotate the axes, we mix the various components, so we must include all three components in our law. Thus, in relativity, we must complete the law of conservation of momentum by extending it to include the time component. This is absolutely necessary to go with the other three, or there cannot be relativistic invariance. The conservation of energy is the fourth equation which goes with the conservation of momentum to make a valid four-vector relationship in the geometry of space and time. Thus the law of conservation of energy and momentum in four-dimensional notation is \begin{equation} \label{Eq:I:17:13} \sum_{\substack{\text{particles}\\\text{in}}}p_\mu= \sum_{\substack{\text{particles}\\\text{out}}}p_\mu \end{equation} or, in a slightly different notation \begin{equation} \label{Eq:I:17:14} \sum_ip_{i\mu}=\sum_jp_{j\mu}, \end{equation} where $i = 1$, $2$, … refers to the particles going into the collision, $j= 1$, $2$, … refers to the particles coming out of the collision, and $\mu = x$, $y$, $z$, or $t$. You say, “In which axes?” It makes no difference. The law is true for each component, using any axes. In vector analysis we discussed one other thing, the dot product of two vectors. Let us now consider the corresponding thing in space-time. In ordinary rotation we discovered there was an unchanged quantity $x^2 + y^2 + z^2$. In four dimensions, we find that the corresponding quantity is $t^2 - x^2 - y^2 - z^2$ (Eq. 17.3). How can we write that? One way would be to write some kind of four-dimensional thing with a square dot between, like $A_\mu \boxdot B_\mu$; one of the notations which is actually used is \begin{equation} \label{Eq:I:17:15} \sideset{}{'}\sum_\mu A_\mu A_\mu=A_t^2-A_x^2-A_y^2-A_z^2. \end{equation} The prime on $\sum$ means that the first term, the “time” term, is positive, but the other three terms have minus signs. 
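Equation (17.13), taken component by component, together with the primed sum of (17.15), can be illustrated with a made-up decay: a particle of rest mass $M$ at rest splitting into two particles of rest mass $m$ that fly apart back to back along $x$. A sketch ($c = 1$; all the numbers are invented for illustration):

```python
import math

def dot(a, b):
    # The "primed sum" of Eq. (17.16): a_t b_t - a_x b_x - a_y b_y - a_z b_z
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# Made-up decay: rest mass M at rest -> two particles of rest mass m, back to back.
M, m = 3.0, 1.0
E = M / 2.0                         # the energy is shared equally by symmetry
p = math.sqrt(E * E - m * m)        # Eq. (17.7) then fixes the momentum
parent = (M, 0.0, 0.0, 0.0)         # four-momentum written as (E, px, py, pz)
d1 = (E, p, 0.0, 0.0)
d2 = (E, -p, 0.0, 0.0)

# Eq. (17.13): four-momentum in equals four-momentum out, component by component
total_out = tuple(d1[i] + d2[i] for i in range(4))
assert all(abs(parent[i] - total_out[i]) < 1e-12 for i in range(4))

# The square of the length of each four-vector is its rest mass squared
assert abs(dot(parent, parent) - M * M) < 1e-9
assert abs(dot(d1, d1) - m * m) < 1e-9
```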
This quantity, then, will be the same in any coordinate system, and we may call it the square of the length of the four-vector. For instance, what is the square of the length of the four-vector momentum of a single particle? This will be equal to $p_t^2 - p_x^2 - p_y^2 - p_z^2$ or, in other words, $E^2 - p^2$, because we know that $p_t$ is $E$. What is $E^2 - p^2$? It must be something which is the same in every coordinate system. In particular, it must be the same for a coordinate system which is moving right along with the particle, in which the particle is standing still. If the particle is standing still, it would have no momentum. So in that coordinate system, it is purely its energy, which is the same as its rest mass. Thus $E^2 - p^2 = m_0^2$. So we see that the square of the length of this vector, the four-vector momentum, is equal to $m_0^2$. From the square of a vector, we can go on to invent the “dot product,” or the product which is a scalar: if $a_\mu$ is one four-vector and $b_\mu$ is another four-vector, then the scalar product is \begin{equation} \label{Eq:I:17:16} \sideset{}{'}\sum a_\mu b_\mu=a_tb_t-a_xb_x-a_yb_y-a_zb_z. \end{equation} It is the same in all coordinate systems. Finally, we shall mention certain things whose rest mass $m_0$ is zero. A photon of light, for example. A photon is like a particle, in that it carries an energy and a momentum. The energy of a photon is a certain constant, called Planck’s constant, times the frequency of the photon: $E = h\nu$. Such a photon also carries a momentum, and the momentum of a photon (or of any other particle, in fact) is $h$ divided by the wavelength: $p = h/\lambda$. But, for a photon, there is a definite relationship between the frequency and the wavelength: $\nu= c/\lambda$. (The number of waves per second, times the wavelength of each, is the distance that the light goes in one second, which, of course, is $c$.) 
Thus we see immediately that the energy of a photon must be the momentum times $c$, or if $c = 1$, the energy and momentum are equal. That is to say, the rest mass is zero. Let us look at that again; that is quite curious. If it is a particle of zero rest mass, what happens when it stops? It never stops! It always goes at the speed $c$. The usual formula for energy is $m_0/\sqrt{1 - v^2}$. Now can we say that $m_0 = 0$ and $v = 1$, so the energy is $0$? We cannot say that it is zero; the photon really can (and does) have energy even though it has no rest mass, but this it possesses by perpetually going at the speed of light! We also know that the momentum of any particle is equal to its total energy times its velocity: if $c = 1$, $p = vE$ or, in ordinary units, $p = vE/c^2$. For any particle moving at the speed of light, $p = E$ if $c = 1$. The formulas for the energy of a photon as seen from a moving system are, of course, given by Eq. (17.12), but for the momentum we must substitute the energy divided by $c$ (or by $1$ in this case). The different energies after transformation mean that there are different frequencies. This is called the Doppler effect, and one can calculate it easily from Eq. (17.12), using also $E = p$ and $E = h\nu$. As Minkowski said, “Space of itself, and time of itself will sink into mere shadows, and only a kind of union between them shall survive.” |
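The Doppler effect falls right out of (17.12). A sketch (function name invented, $c = 1$) for a photon moving along $+x$, so that $p_x = E$, seen by an observer moving with velocity $u$ in the same direction:

```python
import math

def doppler_factor(u):
    # From Eq. (17.12) with E = p for a photon moving along +x:
    # E' = (E - uE)/sqrt(1 - u^2) = E * sqrt((1 - u)/(1 + u)).
    # Since E = h*nu, the frequency shifts by the same factor.
    return (1.0 - u) / math.sqrt(1.0 - u * u)

u = 0.5
assert abs(doppler_factor(u) - math.sqrt((1.0 - u) / (1.0 + u))) < 1e-12
# An observer moving along with the photon (u > 0) sees a lower frequency,
# one moving against it (u < 0) a higher frequency:
assert doppler_factor(0.5) < 1.0 < doppler_factor(-0.5)
```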
|
1 | 18 | Rotation in Two Dimensions | 1 | The center of mass | In the previous chapters we have been studying the mechanics of points, or small particles whose internal structure does not concern us. For the next few chapters we shall study the application of Newton’s laws to more complicated things. When the world becomes more complicated, it also becomes more interesting, and we shall find that the phenomena associated with the mechanics of a more complex object than just a point are really quite striking. Of course these phenomena involve nothing but combinations of Newton’s laws, but it is sometimes hard to believe that only $F = ma$ is at work. The more complicated objects we deal with can be of several kinds: water flowing, galaxies whirling, and so on. The simplest “complicated” object to analyze, at the start, is what we call a rigid body, a solid object that is turning as it moves about. However, even such a simple object may have a most complex motion, and we shall therefore first consider the simplest aspects of such motion, in which an extended body rotates about a fixed axis. A given point on such a body then moves in a plane perpendicular to this axis. Such rotation of a body about a fixed axis is called plane rotation or rotation in two dimensions. We shall later generalize the results to three dimensions, but in doing so we shall find that, unlike the case of ordinary particle mechanics, rotations are subtle and hard to understand unless we first get a solid grounding in two dimensions. The first interesting theorem concerning the motion of complicated objects can be observed at work if we throw an object made of a lot of blocks and spokes, held together by strings, into the air. Of course we know it goes in a parabola, because we studied that for a particle. But now our object is not a particle; it wobbles and it jiggles, and so on. It does go in a parabola though; one can see that. What goes in a parabola? 
Certainly not the point on the corner of the block, because that is jiggling about; neither is it the end of the wooden stick, or the middle of the wooden stick, or the middle of the block. But something goes in a parabola, there is an effective “center” which moves in a parabola. So our first theorem about complicated objects is to demonstrate that there is a mean position which is mathematically definable, but not necessarily a point of the material itself, which goes in a parabola. That is called the theorem of the center of the mass, and the proof of it is as follows. We may consider any object as being made of lots of little particles, the atoms, with various forces among them. Let $i$ represent an index which defines one of the particles. (There are millions of them, so $i$ goes to $10^{23}$, or something.) Then the force on the $i$th particle is, of course, the mass times the acceleration of that particle: \begin{equation} \label{Eq:I:18:1} \FLPF_i=m_i(d^2\FLPr_i/dt^2). \end{equation} In the next few chapters our moving objects will be ones in which all the parts are moving at speeds very much slower than the speed of light, and we shall use the nonrelativistic approximation for all quantities. In these circumstances the mass is constant, so that \begin{equation} \label{Eq:I:18:2} \FLPF_i=d^2(m_i\FLPr_i)/dt^2. \end{equation} If we now add the force on all the particles, that is, if we take the sum of all the $\FLPF_i$’s for all the different indexes, we get the total force, $\FLPF$. On the other side of the equation, we get the same thing as though we added before the differentiation: \begin{equation} \label{Eq:I:18:3} \sum_i\FLPF_i=\FLPF=\frac{d^2(\sum_im_i\FLPr_i)}{dt^2}. \end{equation} Therefore the total force is the second derivative of the masses times their positions, added together. Now the total force on all the particles is the same as the external force. Why? 
Although there are all kinds of forces on the particles because of the strings, the wigglings, the pullings and pushings, and the atomic forces, and who knows what, and we have to add all these together, we are rescued by Newton’s Third Law. Between any two particles the action and reaction are equal, so that when we add all the equations together, if any two particles have forces between them it cancels out in the sum; therefore the net result is only those forces which arise from other particles which are not included in whatever object we decide to sum over. So if Eq. (18.3) is the sum over a certain number of the particles, which together are called “the object,” then the external force on the total object is equal to the sum of all the forces on all its constituent particles. Now it would be nice if we could write Eq. (18.3) as the total mass times some acceleration. We can. Let us say $M$ is the sum of all the masses, i.e., the total mass. Then if we define a certain vector $\FLPR$ to be \begin{equation} \label{Eq:I:18:4} \FLPR=\sum_im_i\FLPr_i/M, \end{equation} Eq. (18.3) will be simply \begin{equation} \label{Eq:I:18:5} \FLPF=d^2(M\FLPR)/dt^2=M(d^2\FLPR/dt^2), \end{equation} since $M$ is a constant. Thus we find that the external force is the total mass times the acceleration of an imaginary point whose location is $\FLPR$. This point is called the center of mass of the body. It is a point somewhere in the “middle” of the object, a kind of average $\FLPr$ in which the different $\FLPr_i$’s have weights or importances proportional to the masses. We shall discuss this important theorem in more detail in a later chapter, and we shall therefore limit our remarks to two points: First, if the external forces are zero, if the object were floating in empty space, it might whirl, and jiggle, and twist, and do all kinds of things. But the center of mass, this artificially invented, calculated position, somewhere in the middle, will move with a constant velocity. 
In particular, if it is initially at rest, it will stay at rest. So if we have some kind of a box, perhaps a space ship, with people in it, and we calculate the location of the center of mass and find it is standing still, then the center of mass will continue to stand still if no external forces are acting on the box. Of course, the space ship may move a little in space, but that is because the people are walking back and forth inside; when one walks toward the front, the ship goes toward the back so as to keep the average position of all the masses in exactly the same place. Is rocket propulsion therefore absolutely impossible because one cannot move the center of mass? No; but of course we find that to propel an interesting part of the rocket, an uninteresting part must be thrown away. In other words, if we start with a rocket at zero velocity and we spit some gas out the back end, then this little blob of gas goes one way as the rocket ship goes the other, but the center of mass is still exactly where it was before. So we simply move the part that we are interested in against the part we are not interested in. The second point concerning the center of mass, which is the reason we introduced it into our discussion at this time, is that it may be treated separately from the “internal” motions of an object, and may therefore be ignored in our discussion of rotation. |
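The theorem can be watched at work in a toy simulation: two particles exert equal and opposite forces on each other (a made-up spring-like pull), there is no external force, and the center of mass $\FLPR$ of Eq. (18.4) moves uniformly no matter how the particles themselves jiggle. A sketch (all numbers and the force law invented):

```python
# Two particles with an internal action-reaction force and no external force.
m1, m2 = 1.0, 3.0
x1, x2 = 0.0, 1.0
v1, v2 = 1.0, -0.5
M = m1 + m2
R0 = (m1 * x1 + m2 * x2) / M          # center of mass, Eq. (18.4)
V = (m1 * v1 + m2 * v2) / M           # its (constant) velocity

dt, t = 0.001, 0.0
for _ in range(1000):
    f = 10.0 * (x2 - x1)              # made-up internal force on particle 1
    v1 += (f / m1) * dt               # by Newton's third law,
    v2 += (-f / m2) * dt              # particle 2 feels the opposite force
    x1 += v1 * dt
    x2 += v2 * dt
    t += dt

R = (m1 * x1 + m2 * x2) / M
# The individual motions oscillate, but R has moved with the constant velocity V:
assert abs(R - (R0 + V * t)) < 1e-6
```

The internal force drops out of the sum exactly, step by step, which is the content of Eq. (18.5): only external forces accelerate the center of mass.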
|
1 | 18 | Rotation in Two Dimensions | 2 | Rotation of a rigid body | Now let us discuss rotations. Of course an ordinary object does not simply rotate, it wobbles, shakes, and bends, so to simplify matters we shall discuss the motion of a nonexistent ideal object which we call a rigid body. This means an object in which the forces between the atoms are so strong, and of such character, that the little forces that are needed to move it do not bend it. Its shape stays essentially the same as it moves about. If we wish to study the motion of such a body, and agree to ignore the motion of its center of mass, there is only one thing left for it to do, and that is to turn. We have to describe that. How? Suppose there is some line in the body which stays put (perhaps it includes the center of mass and perhaps not), and the body is rotating about this particular line as an axis. How do we define the rotation? That is easy enough, for if we mark a point somewhere on the object, anywhere except on the axis, we can always tell exactly where the object is, if we only know where this point has gone to. The only thing needed to describe the position of that point is an angle. So rotation consists of a study of the variations of the angle with time. In order to study rotation, we observe the angle through which a body has turned. Of course, we are not referring to any particular angle inside the object itself; it is not that we draw some angle on the object. We are talking about the angular change of the position of the whole thing, from one time to another. First, let us study the kinematics of rotations. The angle will change with time, and just as we talked about position and velocity in one dimension, we may talk about angular position and angular velocity in plane rotation. In fact, there is a very interesting relationship between rotation in two dimensions and one-dimensional displacement, in which almost every quantity has its analog. 
First, we have the angle $\theta$ which defines how far the body has gone around; this replaces the distance $s$, which defines how far it has gone along. In the same manner, we have a velocity of turning, $\omega= d\theta/dt$, which tells us how much the angle changes in a second, just as $v = ds/dt$ describes how fast a thing moves, or how far it moves in a second. If the angle is measured in radians, then the angular velocity $\omega$ will be so and so many radians per second. The greater the angular velocity, the faster the object is turning, the faster the angle changes. We can go on: we can differentiate the angular velocity with respect to time, and we can call $\alpha =$ $d\omega/dt =$ $d^2\theta/dt^2$ the angular acceleration. That would be the analog of the ordinary acceleration. Now of course we shall have to relate the dynamics of rotation to the laws of dynamics of the particles of which the object is made, so we must find out how a particular particle moves when the angular velocity is such and such. To do this, let us take a certain particle which is located at a distance $r$ from the axis and say it is in a certain location $P(x, y)$ at a given instant, in the usual manner (Fig. 18–1). If at a moment $\Delta t$ later the angle of the whole object has turned through $\Delta\theta$, then this particle is carried with it. It is at the same radius away from $O$ as it was before, but is carried to $Q$. The first thing we would like to know is how much the distance $x$ changes and how much the distance $y$ changes. If $OP$ is called $r$, then the length $PQ$ is $r\,\Delta\theta$, because of the way angles are defined. The change in $x$, then, is simply the projection of $r\,\Delta\theta$ in the $x$-direction: \begin{equation} \label{Eq:I:18:6} \Delta x=-PQ\sin\theta=-r\,\Delta\theta\cdot(y/r)=-y\,\Delta\theta. \end{equation} Similarly, \begin{equation} \label{Eq:I:18:7} \Delta y=+x\,\Delta\theta. 
\end{equation} If the object is turning with a given angular velocity $\omega$, we find, by dividing both sides of (18.6) and (18.7) by $\Delta t$, that the velocity of the particle is \begin{equation} \label{Eq:I:18:8} v_x=-\omega y\quad \text{and}\quad v_y=+\omega x. \end{equation} Of course if we want to find the magnitude of the velocity, we just write \begin{equation} \label{Eq:I:18:9} v=\sqrt{v_x^2+v_y^2}=\sqrt{\omega^2y^2+\omega^2x^2}= \omega\sqrt{x^2+y^2}=\omega r. \end{equation}
It should not be mysterious that the value of the magnitude of this velocity is $\omega r$; in fact, it should be self-evident, because the distance that it moves is $r\,\Delta\theta$ and the distance it moves per second is $r\,\Delta\theta/\Delta t$, or $r\omega$. Let us now move on to consider the dynamics of rotation. Here a new concept, force, must be introduced. Let us inquire whether we can invent something which we shall call the torque (L. torquere, to twist) which bears the same relationship to rotation as force does to linear movement. A force is the thing that is needed to make linear motion, and the thing that makes something rotate is a “rotary force” or a “twisting force,” i.e., a torque. Qualitatively, a torque is a “twist”; what is a torque quantitatively? We shall get to the theory of torques quantitatively by studying the work done in turning an object, for one very nice way of defining a force is to say how much work it does when it acts through a given displacement. We are going to try to maintain the analogy between linear and angular quantities by equating the work that we do when we turn something a little bit when there are forces acting on it, to the torque times the angle it turns through. In other words, the definition of the torque is going to be so arranged that the theorem of work has an absolute analog: force times distance is work, and torque times angle is going to be work. That tells us what torque is. Consider, for instance, a rigid body of some kind with various forces acting on it, and an axis about which the body rotates. Let us at first concentrate on one force and suppose that this force is applied at a certain point $(x, y)$. How much work would be done if we were to turn the object through a very small angle? That is easy. 
The work done is \begin{equation} \label{Eq:I:18:10} \Delta W=F_x\,\Delta x+F_y\,\Delta y. \end{equation} We need only to substitute Eqs. (18.6) and (18.7) for $\Delta x$ and $\Delta y$ to obtain \begin{equation} \label{Eq:I:18:11} \Delta W=(xF_y-yF_x)\Delta\theta. \end{equation} That is, the amount of work that we have done is, in fact, equal to the angle through which we have turned the object, multiplied by a strange-looking combination of the force and the distance. This “strange combination” is what we call the torque. So, defining the change in work as the torque times the angle, we now have the formula for torque in terms of the forces. (Obviously, torque is not a completely new idea independent of Newtonian mechanics—torque must have a definite definition in terms of the force.) When there are several forces acting, the work that is done is, of course, the sum of the works done by all the forces, so that $\Delta W$ will be a whole lot of terms, all added together, for all the forces, each of which is proportional, however, to $\Delta\theta$. We can take the $\Delta\theta$ outside and therefore can say that the change in the work is equal to the sum of all the torques due to all the different forces that are acting, times $\Delta\theta$. This sum we might call the total torque, $\tau$. Thus torques add by the ordinary laws of algebra, but we shall later see that this is only because we are working in a plane. It is like one-dimensional kinematics, where the forces simply add algebraically, but only because they are all in the same direction. It is more complicated in three dimensions. Thus, for two-dimensional rotation, \begin{equation} \label{Eq:I:18:12} \tau_i=x_iF_{yi}-y_iF_{xi} \end{equation} and \begin{equation} \label{Eq:I:18:13} \tau=\sum\tau_i. \end{equation} It must be emphasized that the torque is about a given axis. If a different axis is chosen, so that all the $x_i$ and $y_i$ are changed, the value of the torque is (usually) changed too. 
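The torque sum of Eqs. (18.12) and (18.13), and its dependence on the chosen axis, are easy to check numerically. Here is a small Python sketch (the function name and all numbers are my own); it also shows the one case in which the axis does not matter, a pair of equal and opposite forces (a couple):

```python
# Sketch (not from the text): total torque about an axis from Eqs. (18.12)-(18.13),
# tau_i = x_i*F_yi - y_i*F_xi, summed over all applied forces.

def total_torque(points, forces, axis=(0.0, 0.0)):
    """Sum of torques about `axis` for 2D forces applied at `points`."""
    ax, ay = axis
    return sum((x - ax) * fy - (y - ay) * fx
               for (x, y), (fx, fy) in zip(points, forces))

# Two equal and opposite forces forming a couple: the net force is zero,
# and the torque comes out the same about every axis.
points = [(1.0, 0.0), (-1.0, 0.0)]
forces = [(0.0, 2.0), (0.0, -2.0)]
print(total_torque(points, forces))               # 4.0 about the origin
print(total_torque(points, forces, axis=(5, 3)))  # still 4.0: a pure couple

# A single force, by contrast, gives different torques about different axes.
print(total_torque([(1.0, 0.0)], [(0.0, 2.0)]))              # 2.0
print(total_torque([(1.0, 0.0)], [(0.0, 2.0)], axis=(1, 0))) # 0.0
```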
Now we pause briefly to note that our foregoing introduction of torque, through the idea of work, gives us a most important result for an object in equilibrium: if all the forces on an object are in balance both for translation and rotation, not only is the net force zero, but the total of all the torques is also zero, because if an object is in equilibrium, no work is done by the forces for a small displacement. Therefore, since $\Delta W = \tau\,\Delta\theta=0$, the sum of all the torques must be zero. So there are two conditions for equilibrium: that the sum of the forces is zero, and that the sum of the torques is zero. Prove that it suffices to be sure that the sum of torques about any one axis (in two dimensions) is zero. Now let us consider a single force, and try to figure out, geometrically, what this strange thing $xF_y - yF_x$ amounts to. In Fig. 18–2 we see a force $\FLPF$ acting at a point $\FLPr$. When the object has rotated through a small angle $\Delta\theta$, the work done, of course, is the component of force in the direction of the displacement times the displacement. In other words, it is only the tangential component of the force that counts, and this must be multiplied by the distance $r\,\Delta\theta$. Therefore we see that the torque is also equal to the tangential component of force (perpendicular to the radius) times the radius. That makes sense in terms of our ordinary idea of the torque, because if the force were completely radial, it would not put any “twist” on the body; it is evident that the twisting effect should involve only the part of the force which is not pulling out from the center, and that means the tangential component. Furthermore, it is clear that a given force is more effective on a long arm than near the axis. In fact, if we take the case where we push right on the axis, we are not twisting at all! 
So it makes sense that the amount of twist, or torque, is proportional both to the radial distance and to the tangential component of the force. There is still a third formula for the torque which is very interesting. We have just seen that the torque is the force times the radius times the sine of the angle $\alpha$, in Fig. 18–2. But if we extend the line of action of the force and draw the line $OS$, the perpendicular distance to the line of action of the force (the lever arm of the force) we notice that this lever arm is shorter than $r$ in just the same proportion as the tangential part of the force is less than the total force. Therefore the formula for the torque can also be written as the magnitude of the force times the length of the lever arm. The torque is also often called the moment of the force. The origin of this term is obscure, but it may be related to the fact that “moment” is derived from the Latin movimentum, and that the capability of a force to move an object (using the force on a lever or crowbar) increases with the length of the lever arm. In mathematics “moment” means weighted by how far away it is from an axis. |
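The equality of the three forms, force components, radius times tangential force, and force times lever arm, can be verified for an arbitrary case. A short Python sketch (the point and force below are invented):

```python
import math

# Sketch: the three torque formulas for a force F applied at (x, y), about the
# origin, all give the same number (up to sign for the magnitude form).

x, y = 3.0, 1.0
fx, fy = -0.5, 2.0

tau1 = x * fy - y * fx                   # components form, Eq. (18.11)

r = math.hypot(x, y)
f_tang = (x * fy - y * fx) / r           # component of F perpendicular to r
tau2 = r * f_tang                        # radius times tangential force

f = math.hypot(fx, fy)
lever_arm = abs(x * fy - y * fx) / f     # perpendicular distance from O to the line of action
tau3 = f * lever_arm                     # force magnitude times lever arm

print(tau1, tau2, tau3)                  # all three agree
```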
|
1 | 18 | Rotation in Two Dimensions | 3 | Angular momentum | Although we have so far considered only the special case of a rigid body, the properties of torques and their mathematical relationships are interesting also even when an object is not rigid. In fact, we can prove a very remarkable theorem: just as external force is the rate of change of a quantity $p$, which we call the total momentum of a collection of particles, so the external torque is the rate of change of a quantity $L$ which we call the angular momentum of the group of particles. To prove this, we shall suppose that there is a system of particles on which there are some forces acting and find out what happens to the system as a result of the torques due to these forces. First, of course, we should consider just one particle. In Fig. 18–3 is one particle of mass $m$, and an axis $O$; the particle is not necessarily rotating in a circle about $O$, it may be moving in an ellipse, like a planet going around the sun, or in some other curve. It is moving somehow, and there are forces on it, and it accelerates according to the usual formula that the $x$-component of force is the mass times the $x$-component of acceleration, etc. But let us see what the torque does. The torque equals $xF_y - yF_x$, and the force in the $x$- or $y$-direction is the mass times the acceleration in the $x$- or $y$-direction: \begin{align} \tau&=xF_y-yF_x\notag\\[.5ex] \label{Eq:I:18:14} &=xm(d^2y/dt^2)-ym(d^2x/dt^2). 
\end{align} Now, although this does not appear to be the derivative of any simple quantity, it is in fact the derivative of the quantity $xm(dy/dt) - ym(dx/dt)$: \begin{equation} \begin{aligned} \ddt{}{t}\biggl[xm&\biggl(\ddt{y}{t}\biggr) -ym\biggl(\ddt{x}{t}\biggr)\biggr]= xm\biggl(\frac{d^2y}{dt^2}\biggr)+\biggl(\ddt{x}{t}\biggr) m\biggl(\ddt{y}{t}\biggr)\\[1.25ex] &-\,ym\biggl(\frac{d^2x}{dt^2}\biggr)-\biggl(\ddt{y}{t}\biggr)m \biggl(\ddt{x}{t}\biggr)= xm\biggl(\frac{d^2y}{dt^2}\biggr)- ym\biggl(\frac{d^2x}{dt^2}\biggr). \end{aligned} \label{Eq:I:18:15} \end{equation}
So it is true that the torque is the rate of change of something with time! So we pay attention to the “something,” we give it a name: we call it $L$, the angular momentum: \begin{align} L&=xm(dy/dt)-ym(dx/dt)\notag\\[.5ex] \label{Eq:I:18:16} &=xp_y-yp_x. \end{align} Although our present discussion is nonrelativistic, the second form for $L$ given above is relativistically correct. So we have found that there is also a rotational analog for the momentum, and that this analog, the angular momentum, is given by an expression in terms of the components of linear momentum that is just like the formula for torque in terms of the force components! Thus, if we want to know the angular momentum of a particle about an axis, we take only the component of the momentum that is tangential, and multiply it by the radius. In other words, what counts for angular momentum is not how fast it is going away from the origin, but how much it is going around the origin. Only the tangential part of the momentum counts for angular momentum. Furthermore, the farther out the line of the momentum extends, the greater the angular momentum. And also, because the geometrical facts are the same whether the quantity is labeled $p$ or $F$, it is true that there is a lever arm (not the same as the lever arm of the force on the particle!) which is obtained by extending the line of the momentum and finding the perpendicular distance to the axis. Thus the angular momentum is the magnitude of the momentum times the momentum lever arm. 
So we have three formulas for angular momentum, just as we have three formulas for the torque: \begin{align} L&=xp_y-yp_x\notag\\[.5ex] &=rp_{\text{tang}}\notag\\ \label{Eq:I:18:17} &=p\cdot\text{lever arm}. \end{align} Like torque, angular momentum depends upon the position of the axis about which it is to be calculated. Before proceeding to a treatment of more than one particle, let us apply the above results to a planet going around the sun. In which direction is the force? The force is toward the sun. What, then, is the torque on the object? Of course, this depends upon where we take the axis, but we get a very simple result if we take it at the sun itself, for the torque is the force times the lever arm, or the component of force perpendicular to $r$, times $r$. But there is no tangential force, so there is no torque about an axis at the sun! Therefore, the angular momentum of the planet going around the sun must remain constant. Let us see what that means. The tangential component of velocity, times the mass, times the radius, will be constant, because that is the angular momentum, and the rate of change of the angular momentum is the torque, and, in this problem, the torque is zero. Of course since the mass is also a constant, this means that the tangential velocity times the radius is a constant. But this is something we already knew for the motion of a planet. Suppose we consider a small amount of time $\Delta t$. How far will the planet move when it moves from $P$ to $Q$ (Fig. 18–3)? How much area will it sweep through? Disregarding the very tiny area $QQ'P$ compared with the much larger area $OPQ$, it is simply half the base $PQ$ times the height, $OR$. In other words, the area that is swept through in unit time will be equal to the velocity times the lever arm of the velocity (times one-half). Thus the rate of change of area is proportional to the angular momentum, which is constant. 
So Kepler’s law about equal areas in equal times is a word description of the statement of the law of conservation of angular momentum, when there is no torque produced by the force. |
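This conservation is easy to watch in a numerical orbit. The following Python sketch (with $GM = 1$ and invented starting values) steps a planet around the sun; because the force is always radial there is no torque about the sun, so $L = m(xv_y - yv_x)$ stays fixed even though the radius and speed change, and the area swept per unit time is the constant $L/2m$:

```python
# Numerical sketch (my own; GM = 1 and the starting values are invented units):
# integrate a planet around the sun and watch L = m(x*vy - y*vx) stay constant.

GM = 1.0
m = 1.0
dt = 1e-3

x, y = 1.0, 0.0        # start at the far point of the orbit
vx, vy = 0.0, 0.8      # slower than circular speed, so the orbit is an ellipse

def ang_mom(x, y, vx, vy):
    return m * (x * vy - y * vx)    # Eq. (18.16) with p = m*v

L0 = ang_mom(x, y, vx, vy)
for _ in range(20000):                     # several orbits
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3    # force always toward the sun: no torque
    vx += ax * dt; vy += ay * dt           # semi-implicit Euler step
    x += vx * dt; y += vy * dt

# The position and speed have changed, but the angular momentum has not;
# the rate of sweeping area, L/(2m), is therefore constant: Kepler's law.
print(L0, ang_mom(x, y, vx, vy))
```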
|
1 | 18 | Rotation in Two Dimensions | 4 | Conservation of angular momentum | Now we shall go on to consider what happens when there is a large number of particles, when an object is made of many pieces with many forces acting between them and on them from the outside. Of course, we already know that, about any given fixed axis, the torque on the $i$th particle (which is the force on the $i$th particle times the lever arm of that force) is equal to the rate of change of the angular momentum of that particle, and that the angular momentum of the $i$th particle is its momentum times its momentum lever arm. Now suppose we add the torques $\tau_i$ for all the particles and call it the total torque $\tau$. Then this will be the rate of change of the sum of the angular momenta of all the particles $L_i$, and that defines a new quantity which we call the total angular momentum $L$. Just as the total momentum of an object is the sum of the momenta of all the parts, so the angular momentum is the sum of the angular momenta of all the parts. Then the rate of change of the total $L$ is the total torque: \begin{equation} \label{Eq:I:18:18} \tau=\sum\tau_i=\sum\ddt{L_i}{t}=\ddt{L}{t}. \end{equation} Now it might seem that the total torque is a complicated thing. There are all those internal forces and all the outside forces to be considered. But, if we take Newton’s law of action and reaction to say, not simply that the action and reaction are equal, but also that they are directed exactly oppositely along the same line (Newton may or may not actually have said this, but he tacitly assumed it), then the two torques on the reacting objects, due to their mutual interaction, will be equal and opposite because the lever arms for any axis are equal. Therefore the internal torques balance out pair by pair, and so we have the remarkable theorem that the rate of change of the total angular momentum about any axis is equal to the external torque about that axis! 
\begin{equation} \label{Eq:I:18:19} \tau=\sum\tau_i=\tau_{\text{ext}}=dL/dt. \end{equation} Thus we have a very powerful theorem concerning the motion of large collections of particles, which permits us to study the overall motion without having to look at the detailed machinery inside. This theorem is true for any collection of objects, whether they form a rigid body or not. One extremely important case of the above theorem is the law of conservation of angular momentum: if no external torques act upon a system of particles, the angular momentum remains constant. A special case of great importance is that of a rigid body, that is, an object of a definite shape that is just turning around. Consider an object that is fixed in its geometrical dimensions, and which is rotating about a fixed axis. Various parts of the object bear the same relationship to one another at all times. Now let us try to find the total angular momentum of this object. If the mass of one of its particles is $m_i$, and its position or location is at $(x_i, y_i)$, then the problem is to find the angular momentum of that particle, because the total angular momentum is the sum of the angular momenta of all such particles in the body. For an object going around in a circle, the angular momentum, of course, is the mass times the velocity times the distance from the axis, and the velocity is equal to the angular velocity times the distance from the axis: \begin{equation} \label{Eq:I:18:20} L_i=m_iv_ir_i=m_ir_i^2\omega, \end{equation} or, summing over all the particles $i$, we get \begin{equation} \label{Eq:I:18:21} L=I\omega, \end{equation} where \begin{equation} \label{Eq:I:18:22} I=\sum_im_ir_i^2. \end{equation} This is the analog of the law that the momentum is mass times velocity. Velocity is replaced by angular velocity, and we see that the mass is replaced by a new thing which we call the moment of inertia $I$, which is analogous to the mass. 
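That $L = I\omega$ is just Eq. (18.16) summed over the particles can be checked directly: give every particle the rigid-body velocity of Eq. (18.8) and compare the two totals. A Python sketch (masses and positions invented):

```python
# Sketch (particles invented): for rigid rotation the velocities are
# (vx, vy) = (-omega*y, +omega*x), Eq. (18.8). Summing x*p_y - y*p_x over the
# particles should then give exactly I*omega with I = sum(m*r^2), Eq. (18.22).

omega = 3.0
particles = [(2.0, 1.0, 0.0), (0.5, -1.0, 2.0), (1.5, 0.5, -0.5)]  # (m, x, y)

I = sum(m * (x * x + y * y) for m, x, y in particles)   # moment of inertia

L = sum(m * (x * (+omega * x) - y * (-omega * y))       # x*p_y - y*p_x per particle
        for m, x, y in particles)

print(I, L, I * omega)   # L equals I*omega
```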
Equations (18.21) and (18.22) say that a body has inertia for turning which depends, not just on the masses, but on how far away they are from the axis. So, if we have two objects of the same mass, when we put the masses farther away from the axis, the inertia for turning will be higher. This is easily demonstrated by the apparatus shown in Fig. 18–4, where a weight $M$ is kept from falling very fast because it has to turn the large weighted rod. At first, the masses $m$ are close to the axis, and $M$ speeds up at a certain rate. But when we change the moment of inertia by putting the two masses $m$ much farther away from the axis, then we see that $M$ accelerates much less rapidly than it did before, because the body has much more inertia against turning. The moment of inertia is the inertia against turning, and is the sum of the contributions of all the masses, times their distances squared, from the axis. There is one important difference between mass and moment of inertia which is very dramatic. The mass of an object never changes, but its moment of inertia can be changed. If we stand on a frictionless rotatable stand with our arms outstretched, and hold some weights in our hands as we rotate slowly, we may change our moment of inertia by drawing our arms in, but our mass does not change. When we do this, all kinds of wonderful things happen, because of the law of the conservation of angular momentum: If the external torque is zero, then the angular momentum, the moment of inertia times omega, remains constant. Initially, we were rotating with a large moment of inertia $I_1$ at a low angular velocity $\omega_1$, and the angular momentum was $I_1\omega_1$. Then we changed our moment of inertia by pulling our arms in, say to a smaller value $I_2$. Then the product $I\omega$, which has to stay the same because the total angular momentum has to stay the same, was $I_2\omega_2$. So $I_1\omega_1 = I_2\omega_2$. 
That is, if we reduce the moment of inertia, we have to increase the angular velocity. |
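A rough numerical sketch of this rotating-stand demonstration, with every number invented: a fixed $I_b$ stands in for the body and arms, and two hand-held weights of mass $m_w$ are pulled in from radius $r_1$ to $r_2$ with no external torque acting.

```python
# Sketch of the rotating-stand demonstration; all values here are invented.

Ib = 1.2                      # body-plus-arms moment of inertia, assumed constant
mw = 2.0                      # mass of each hand-held weight
r1, r2 = 0.9, 0.2             # weights held out, then pulled in
omega1 = 2.0                  # initial angular velocity, rad/s

I1 = Ib + 2 * mw * r1**2      # Eq. (18.22) applied to the two weights
I2 = Ib + 2 * mw * r2**2

omega2 = I1 * omega1 / I2     # conservation of angular momentum: I1*w1 = I2*w2
print(omega1, omega2)         # pulling the weights in makes the spin faster
```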
|
1 | 19 | Center of Mass; Moment of Inertia | 1 | Properties of the center of mass | In the previous chapter we found that if a great many forces are acting on a complicated mass of particles, whether the particles comprise a rigid or a nonrigid body, or a cloud of stars, or anything else, and we find the sum of all the forces (that is, of course, the external forces, because the internal forces balance out), then if we consider the body as a whole, and say it has a total mass $M$, there is a certain point “inside” the body, called the center of mass, such that the net resulting external force produces an acceleration of this point, just as though the whole mass were concentrated there. Let us now discuss the center of mass in a little more detail. The location of the center of mass (abbreviated CM) is given by the equation \begin{equation} \label{Eq:I:19:1} \FLPR_{\text{CM}}=\frac{\sum m_i\FLPr_i}{\sum m_i}. \end{equation} This is, of course, a vector equation which is really three equations, one for each of the three directions. We shall consider only the $x$-direction, because if we can understand that one, we can understand the other two. What does $X_{\text{CM}} = \sum m_ix_i/\sum m_i$ mean? Suppose for a moment that the object is divided into little pieces, all of which have the same mass $m$; then the total mass is simply the number $N$ of pieces times the mass of one piece, say one gram, or any unit. Then this equation simply says that we add all the $x$’s, and then divide by the number of things that we have added: $X_{\text{CM}} =$ $m\sum x_i/mN =$ $\sum x_i/N$. In other words, $X_{\text{CM}}$ is the average of all the $x$’s, if the masses are equal. But suppose one of them were twice as heavy as the others. Then in the sum, that $x$ would come in twice. 
This is easy to understand, for we can think of this double mass as being split into two equal ones, just like the others; then in taking the average, of course, we have to count that $x$ twice because there are two masses there. Thus $X$ is the average position, in the $x$-direction, of all the masses, every mass being counted a number of times proportional to the mass, as though it were divided into “little grams.” From this it is easy to prove that $X$ must be somewhere between the largest and the smallest $x$, and, therefore lies inside the envelope including the entire body. It does not have to be in the material of the body, for the body could be a circle, like a hoop, and the center of mass is in the center of the hoop, not in the hoop itself. Of course, if an object is symmetrical in some way, for instance, a rectangle, so that it has a plane of symmetry, the center of mass lies somewhere on the plane of symmetry. In the case of a rectangle there are two planes, and that locates it uniquely. But if it is just any symmetrical object, then the center of gravity lies somewhere on the axis of symmetry, because in those circumstances there are as many positive as negative $x$’s. Another interesting proposition is the following very curious one. Suppose that we imagine an object to be made of two pieces, $A$ and $B$ (Fig. 19–1). Then the center of mass of the whole object can be calculated as follows. First, find the center of mass of piece $A$, and then of piece $B$. Also, find the total mass of each piece, $M_A$ and $M_B$. Then consider a new problem, in which a point mass $M_A$ is at the center of mass of object $A$, and another point mass $M_B$ is at the center of mass of object $B$. The center of mass of these two point masses is then the center of mass of the whole object. 
In other words, if the centers of mass of various parts of an object have been worked out, we do not have to start all over again to find the center of mass of the whole object; we just have to put the pieces together, treating each one as a point mass situated at the center of mass of that piece. Let us see why that is. Suppose that we wanted to calculate the center of mass of a complete object, some of whose particles are considered to be members of object $A$ and some members of object $B$. The total sum $\sum m_ix_i$ can then be split into two pieces—the sum $\sum_A m_ix_i$ for the $A$ object only, and the sum $\sum_B m_ix_i$ for object $B$ only. Now if we were computing the center of mass of object $A$ alone, we would have exactly the first of these sums, and we know that this by itself is $M_AX_A$, the total mass of all the particles in $A$ times the position of the center of mass of $A$, because that is the theorem of the center of mass, applied to object $A$. In the same manner, just by looking at object $B$, we get $M_BX_B$, and of course, adding the two yields $MX_{\text{CM}}$: \begin{align} MX_{\text{CM}} &=\sum_A m_ix_i+\sum_B m_ix_i\notag\\[.5ex] \label{Eq:I:19:2} &=M_AX_A+M_BX_B. \end{align} Now since $M$ is evidently the sum of $M_A$ and $M_B$, we see that Eq. (19.2) can be interpreted as a special example of the center of mass formula for two point objects, one of mass $M_A$ located at $X_A$ and the other of mass $M_B$ located at $X_B$. The theorem concerning the motion of the center of mass is very interesting, and has played an important part in the development of our understanding of physics. Suppose we assume that Newton’s law is right for the small component parts of a much larger object. Then this theorem shows that Newton’s law is also correct for the larger object, even if we do not study the details of the object, but only the total force acting on it and its mass. 
In other words, Newton’s law has the peculiar property that if it is right on a certain small scale, then it will be right on a larger scale. If we do not consider a baseball as a tremendously complex thing, made of myriads of interacting particles, but study only the motion of the center of mass and the external forces on the ball, we find $\FLPF= m\FLPa$, where $\FLPF$ is the external force on the baseball, $m$ is its mass, and $\FLPa$ is the acceleration of its center of mass. So $\FLPF = m\FLPa$ is a law which reproduces itself on a larger scale. (There ought to be a good word, out of the Greek, perhaps, to describe a law which reproduces the same law on a larger scale.) Of course, one might suspect that the first laws that would be discovered by human beings would be those that would reproduce themselves on a larger scale. Why? Because the actual scale of the fundamental gears and wheels of the universe is of atomic dimensions, which are so much finer than our observations that we are nowhere near that scale in our ordinary observations. So the first things that we would discover must be true for objects of no special size relative to an atomic scale. If the laws for small particles did not reproduce themselves on a larger scale, we would not discover those laws very easily. What about the reverse problem? Must the laws on a small scale be the same as those on a larger scale? Of course it is not necessarily so in nature, that at an atomic level the laws have to be the same as on a large scale. Suppose that the true laws of motion of atoms were given by some strange equation which does not have the property that when we go to a larger scale we reproduce the same law, but instead has the property that if we go to a larger scale, we can approximate it by a certain expression such that, if we extend that expression up and up, it keeps reproducing itself on a larger and larger scale. That is possible, and in fact that is the way it works. 
Newton’s laws are the “tail end” of the atomic laws, extrapolated to a very large size. The actual laws of motion of particles on a fine scale are very peculiar, but if we take large numbers of them and compound them, they approximate, but only approximate, Newton’s laws. Newton’s laws then permit us to go on to a higher and higher scale, and it still seems to be the same law. In fact, it becomes more and more accurate as the scale gets larger and larger. This self-reproducing factor of Newton’s laws is thus really not a fundamental feature of nature, but is an important historical feature. We would never discover the fundamental laws of the atomic particles at first observation because the first observations are much too crude. In fact, it turns out that the fundamental atomic laws, which we call quantum mechanics, are quite different from Newton’s laws, and are difficult to understand because all our direct experiences are with large-scale objects and the small-scale atoms behave like nothing we see on a large scale. So we cannot say, “An atom is just like a planet going around the sun,” or anything like that. It is like nothing we are familiar with because there is nothing like it. As we apply quantum mechanics to larger and larger things, the laws about the behavior of many atoms together do not reproduce themselves, but produce new laws, which are Newton’s laws, which then continue to reproduce themselves from, say, micro-microgram size, which still is billions and billions of atoms, on up to the size of the earth, and above. Let us now return to the center of mass. The center of mass is sometimes called the center of gravity, for the reason that, in many cases, gravity may be considered uniform. Let us suppose that we have small enough dimensions that the gravitational force is not only proportional to the mass, but is everywhere parallel to some fixed line. Then consider an object in which there are gravitational forces on each of its constituent masses. 
Let $m_i$ be the mass of one part. Then the gravitational force on that part is $m_i$ times $g$. Now the question is, where can we apply a single force to balance the gravitational force on the whole thing, so that the entire object, if it is a rigid body, will not turn? The answer is that this force must go through the center of mass, and we show this in the following way. In order that the body will not turn, the torque produced by all the forces must add up to zero, because if there is a torque, there is a change of angular momentum, and thus a rotation. So we must calculate the total of all the torques on all the particles, and see how much torque there is about any given axis; it should be zero if this axis is at the center of mass. Now, measuring $x$ horizontally and $y$ vertically, we know that the torques are the forces in the $y$-direction, times the lever arm $x$ (that is to say, the force times the lever arm around which we want to measure the torque). Now the total torque is the sum \begin{equation} \label{Eq:I:19:3} \tau=\sum m_igx_i=g\sum m_ix_i, \end{equation} so if the total torque is to be zero, the sum $\sum m_ix_i$ must be zero. But $\sum m_ix_i = MX_{\text{CM}}$, the total mass times the distance of the center of mass from the axis. Thus the $x$-distance of the center of mass from the axis is zero. Of course, we have checked the result only for the $x$-distance, but if we use the true center of mass the object will balance in any position, because if we turned it $90$ degrees, we would have $y$’s instead of $x$’s. In other words, when an object is supported at its center of mass, there is no torque on it because of a parallel gravitational field. In case the object is so large that the nonparallelism of the gravitational forces is significant, then the center where one must apply the balancing force is not simple to describe, and it departs slightly from the center of mass. 
That is why one must distinguish between the center of mass and the center of gravity. The fact that an object supported exactly at the center of mass will balance in all positions has another interesting consequence. If, instead of gravitation, we have a pseudo force due to acceleration, we may use exactly the same mathematical procedure to find the position to support it so that there are no torques produced by the inertial force of acceleration. Suppose that the object is held in some manner inside a box, and that the box, and everything contained in it, is accelerating. We know that, from the point of view of someone at rest relative to this accelerating box, there will be an effective force due to inertia. That is, to make the object go along with the box, we have to push on it to accelerate it, and this force is “balanced” by the “force of inertia,” which is a pseudo force equal to the mass times the acceleration of the box. To the man in the box, this is the same situation as if the object were in a uniform gravitational field whose “$g$” value is equal to the acceleration $a$. Thus the inertial force due to accelerating an object has no torque about the center of mass. This fact has a very interesting consequence. In an inertial frame that is not accelerating, the torque is always equal to the rate of change of the angular momentum. However, about an axis through the center of mass of an object which is accelerating, it is still true that the torque is equal to the rate of change of the angular momentum. Even if the center of mass is accelerating, we may still choose one special axis, namely, one passing through the center of mass, such that it will still be true that the torque is equal to the rate of change of angular momentum around that axis. 
Thus the theorem that torque equals the rate of change of angular momentum is true in two general cases: (1) a fixed axis in inertial space, (2) an axis through the center of mass, even though the object may be accelerating. |
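The balancing argument is easy to check numerically. Here is a minimal Python sketch (the masses and positions are arbitrary numbers, invented for illustration): in a uniform field, the total torque of Eq. (19.3), taken about a vertical axis through the center of mass, comes out zero.

```python
import random

# Check Eq. (19.3): in a uniform gravitational field, the total torque
# about a vertical axis through the center of mass is zero.
random.seed(1)
g = 9.8
masses = [random.uniform(0.1, 5.0) for _ in range(20)]
xs = [random.uniform(-2.0, 2.0) for _ in range(20)]

M = sum(masses)
X_cm = sum(m * x for m, x in zip(masses, xs)) / M

# Torque about the axis at x = X_cm: tau = g * sum of m_i (x_i - X_cm)
tau = g * sum(m * (x - X_cm) for m, x in zip(masses, xs))
print(abs(tau) < 1e-9)  # True: no torque about the center of mass
```

The sum $\sum m_i(x_i - X_{\text{CM}})$ vanishes identically, by the definition of the center of mass, so the result holds for any choice of masses and positions.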
|
1 | 19 | Center of Mass; Moment of Inertia | 2 | Locating the center of mass | The mathematical techniques for the calculation of centers of mass are in the province of a mathematics course, and such problems provide good exercise in integral calculus. After one has learned calculus, however, and wants to know how to locate centers of mass, it is nice to know certain tricks which can be used to do so. One such trick makes use of what is called the theorem of Pappus. It works like this: if we take any closed area in a plane and generate a solid by moving it through space such that each point is always moved perpendicular to the plane of the area, the resulting solid has a total volume equal to the area of the cross section times the distance that the center of mass moved! Certainly this is true if we move the area in a straight line perpendicular to itself, but if we move it in a circle or in some other curve, then it generates a rather peculiar volume. For a curved path, the outside goes around farther, and the inside goes around less, and these effects balance out. So if we want to locate the center of mass of a plane sheet of uniform density, we can remember that the volume generated by spinning it about an axis is the distance that the center of mass goes around, times the area of the sheet. For example, if we wish to find the center of mass of a right triangle of base $D$ and height $H$ (Fig. 19–2), we might solve the problem in the following way. Imagine an axis along $H$, and rotate the triangle about that axis through a full $360$ degrees. This generates a cone. The distance that the $x$-coordinate of the center of mass has moved is $2\pi x$. The area which is being moved is the area of the triangle, $\tfrac{1}{2}HD$. So the $x$-distance of the center of mass times the area of the triangle is the volume swept out, which is of course $\pi D^2H/3$. Thus $(2\pi x)(\tfrac{1}{2}HD) = \pi D^2H/3$, or $x = D/3$. 
In a similar manner, by rotating about the other axis, or by symmetry, we find $y = H/3$. In fact, the center of mass of any uniform triangular area is where the three medians, the lines from the vertices through the centers of the opposite sides, all meet. That point is $1/3$ of the way along each median. Clue: Slice the triangle up into a lot of little pieces, each parallel to a base. Note that the median line bisects every piece, and therefore the center of mass must lie on this line. Now let us try a more complicated figure. Suppose that it is desired to find the position of the center of mass of a uniform semicircular disc—a disc sliced in half. Where is the center of mass? For a full disc, it is at the center, of course, but a half-disc is more difficult. Let $r$ be the radius and $x$ be the distance of the center of mass from the straight edge of the disc. Spin it around this edge as axis to generate a sphere. Then the center of mass has gone around $2\pi x$, the area is $\pi r^2/2$ (because it is only half a circle). The volume generated is, of course, $4\pi r^3/3$, from which we find that \begin{equation*} (2\pi x)(\tfrac{1}{2}\pi r^2) = 4\pi r^3/3, \end{equation*} or \begin{equation*} x=4r/3\pi. \end{equation*} There is another theorem of Pappus which is a special case of the above one, and therefore equally true. Suppose that, instead of the solid semicircular disc, we have a semicircular piece of wire with uniform mass density along the wire, and we want to find its center of mass. In this case there is no mass in the interior, only on the wire. Then it turns out that the area which is swept by a plane curved line, when it moves as before, is the distance that the center of mass moves times the length of the line. (The line can be thought of as a very narrow area, and the previous theorem can be applied to it.) |
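The Pappus result for the half-disc can be confirmed by direct integration. Here is a small Python sketch (the strip decomposition and the number of strips are illustrative choices) that recovers $x = 4r/3\pi$:

```python
from math import pi, sqrt

# Centroid of a uniform half-disc, measured from the straight edge.
# Slice into vertical strips: at distance x from the edge the strip
# has height 2*sqrt(r^2 - x^2), so dA = 2*sqrt(r^2 - x^2) dx.
r = 1.0
n = 200_000
dx = r / n

area = 0.0
moment = 0.0
for i in range(n):
    x = (i + 0.5) * dx               # midpoint of the strip
    h = 2.0 * sqrt(r * r - x * x)
    area += h * dx
    moment += x * h * dx

x_bar = moment / area
print(abs(x_bar - 4 * r / (3 * pi)) < 1e-4)  # True
```

The same loop, with the strip height replaced by the arc-length element, would check the second theorem for the semicircular wire, whose centroid comes out at $2r/\pi$.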
|
1 | 19 | Center of Mass; Moment of Inertia | 3 | Finding the moment of inertia | Now let us discuss the problem of finding the moments of inertia of various objects. The formula for the moment of inertia about the $z$-axis of an object is \begin{equation} I =\sum m_i(x_i^2+y_i^2)\notag \end{equation} or \begin{equation} \label{Eq:I:19:4} I =\int(x^2+y^2)\,dm=\int(x^2+y^2)\rho\,dV. \end{equation} That is, we must sum the masses, each one multiplied by the square of its distance $(x_i^2 + y_i^2)$ from the axis. Note that it is not the three-dimensional distance, only the two-dimensional distance squared, even for a three-dimensional object. For the most part, we shall restrict ourselves to two-dimensional objects, but the formula for rotation about the $z$-axis is just the same in three dimensions. As a simple example, consider a rod rotating about a perpendicular axis through one end (Fig. 19–3). Now we must sum all the masses times the $x$-distances squared (the $y$’s being all zero in this case). What we mean by “the sum,” of course, is the integral of $x^2$ times the little elements of mass. If we divide the rod into small elements of length $dx$, the corresponding elements of mass are proportional to $dx$, and if $dx$ were the length of the whole rod the mass would be $M$. Therefore \begin{equation} dm = M\,dx/L\notag \end{equation} and so \begin{equation} \label{Eq:I:19:5} I=\int_0^Lx^2\,\frac{M\,dx}{L} = \frac{M}{L}\int_0^L x^2\,dx = \frac{ML^2}{3}. \end{equation} The dimensions of moment of inertia are always mass times length squared, so all we really had to work out was the factor $1/3$. Now what is $I$ if the rotation axis is at the center of the rod? We could just do the integral over again, letting $x$ range from $-\tfrac{1}{2}L$ to $+\tfrac{1}{2}L$. But let us notice a few things about the moment of inertia. 
We can imagine the rod as two rods, each of mass $M/2$ and length $L/2$; the moments of inertia of the two small rods are equal, and are both given by the formula (19.5). Therefore the moment of inertia is \begin{equation} \label{Eq:I:19:6} I=\frac{2(M/2)(L/2)^2}{3}=\frac{ML^2}{12}. \end{equation} Thus it is much easier to turn a rod about its center, than to swing it around an end. Of course, we could go on to compute the moments of inertia of various other bodies of interest. However, while such computations provide a certain amount of important exercise in the calculus, they are not basically of interest to us as such. There is, however, an interesting theorem which is very useful. Suppose we have an object, and we want to find its moment of inertia around some axis. That means we want the inertia needed to carry it by rotation about that axis. Now if we support the object on pivots at the center of mass, so that the object does not turn as it rotates about the axis (because there is no torque on it from inertial effects, and therefore it will not turn when we start moving it), then the forces needed to swing it around are the same as though all the mass were concentrated at the center of mass, and the moment of inertia would be simply $I_1 = MR_{\text{CM}}^2$, where $R_{\text{CM}}$ is the distance from the axis to the center of mass. But of course that is not the right formula for the moment of inertia of an object which is really being rotated as it revolves, because not only is the center of it moving in a circle, which would contribute an amount $I_1$ to the moment of inertia, but also we must turn it about its center of mass. So it is not unreasonable that we must add to $I_1$ the moment of inertia $I_c$ about the center of mass. So it is a good guess that the total moment of inertia about any axis will be \begin{equation} \label{Eq:I:19:7} I=I_c+MR_{\text{CM}}^2. \end{equation} This theorem is called the parallel-axis theorem, and may be easily proved. 
The moment of inertia about any axis is the mass times the sum of the $x_i$’s and the $y_i$’s, each squared: $I=\sum (x_i^2 + y_i^2)m_i$. We shall concentrate on the $x$’s, but of course the $y$’s work the same way. Now $x$ is the distance of a particular point mass from the origin, but let us consider how it would look if we measured $x'$ from the CM, instead of $x$ from the origin. To get ready for this analysis, we write \begin{equation*} x_i=x_i'+X_{\text{CM}}. \end{equation*} Then we just square this to find \begin{equation*} x_i^2=x_i'^2+2X_{\text{CM}}x_i'+X_{\text{CM}}^2. \end{equation*} So, when this is multiplied by $m_i$ and summed over all $i$, what happens? Taking the constants outside the summation sign, we get \begin{equation*} I_x=\sum m_ix_i'^2+2X_{\text{CM}}\sum m_ix_i'+ X_{\text{CM}}^2\sum m_i. \end{equation*} The third sum is easy; it is just $MX_{\text{CM}}^2$. In the second sum there are two pieces, one of them is $\sum m_ix_i'$, which is the total mass times the $x'$-coordinate of the center of mass. But this contributes nothing, because $x'$ is measured from the center of mass, and in these axes the average position of all the particles, weighted by the masses, is zero. The first sum, of course, is the $x$ part of $I_c$. Thus we arrive at Eq. (19.7), just as we guessed. Let us check (19.7) for one example. Let us just see whether it works for the rod. For an axis through one end, the moment of inertia should be $ML^2/3$, for we calculated that. The center of mass of a rod, of course, is in the center of the rod, at a distance $L/2$. Therefore we should find that $ML^2/3 = ML^2/12 + M(L/2)^2$. Since one-quarter plus one-twelfth is one-third, we have made no fundamental error. Incidentally, we did not really need to use an integral to find the moment of inertia (19.5). 
If we simply assume that it is $ML^2$ times $\gamma$, an unknown coefficient, and then use the argument about the two halves to get $\tfrac{1}{4}\gamma$ for (19.6), then from our argument about transferring the axes we could prove that $\gamma = \tfrac{1}{4}\gamma + \tfrac{1}{4}$, so $\gamma$ must be $1/3$. There is always another way to do it! In applying the parallel-axis theorem, it is of course important to remember that the axis for $I_c$ must be parallel to the axis about which the moment of inertia is wanted. One further property of the moment of inertia is worth mentioning because it is often helpful in finding the moment of inertia of certain kinds of objects. This property is that if one has a plane figure and a set of coordinate axes with origin in the plane and $z$-axis perpendicular to the plane, then the moment of inertia of this figure about the $z$-axis is equal to the sum of the moments of inertia about the $x$- and $y$-axes. This is easily proved by noting that \begin{equation*} I_x=\sum m_i(y_i^2+z_i^2)=\sum m_iy_i^2 \end{equation*} (since $z_i = 0$). Similarly, \begin{equation*} I_y = \sum m_i(x_i^2 +z_i^2)=\sum m_ix_i^2, \end{equation*} but \begin{align*} I_z=\sum m_i(x_i^2+y_i^2)&=\sum m_ix_i^2 + \sum m_iy_i^2\\[1ex] &=I_x+I_y. \end{align*} As an example, the moment of inertia of a uniform rectangular plate of mass $M$, width $w$, and length $L$, about an axis perpendicular to the plate and through its center is simply \begin{equation*} I = M(w^2 + L^2)/12, \end{equation*} because its moment of inertia about an axis in its plane and parallel to its length is $Mw^2/12$, i.e., just as for a rod of length $w$, and the moment of inertia about the other axis in its plane is $ML^2/12$, just as for a rod of length $L$. 
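The plate example can be checked by brute force. This short Python sketch (the grid size is an arbitrary choice) sums $m_iy_i^2$, $m_ix_i^2$, and $m_i(x_i^2+y_i^2)$ over a grid of mass elements and compares with $M(w^2+L^2)/12$:

```python
# Check I_z = I_x + I_y for a uniform rectangular plate, and compare
# with the closed form M (w^2 + L^2) / 12.
M, w, L = 1.0, 2.0, 3.0
n = 400
dm = M / (n * n)

Ix = Iy = Iz = 0.0
for i in range(n):
    y = (i + 0.5) / n * w - w / 2        # y runs across the width
    for j in range(n):
        x = (j + 0.5) / n * L - L / 2    # x runs along the length
        Ix += dm * y * y
        Iy += dm * x * x
        Iz += dm * (x * x + y * y)

print(abs(Iz - (Ix + Iy)) < 1e-9)                 # True, by construction
print(abs(Iz - M * (w * w + L * L) / 12) < 1e-3)  # True
```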
To summarize, the moment of inertia of an object about a given axis, which we shall call the $z$-axis, has the following properties: (1) The moment of inertia is \begin{equation*} I=\sum_i m_i(x_i^2+y_i^2)=\int(x^2+y^2)\,dm. \end{equation*} (2) If the object is made of a number of parts, each of whose moment of inertia is known, the total moment of inertia is the sum of the moments of inertia of the pieces. (3) The moment of inertia about any given axis is equal to the moment of inertia about a parallel axis through the CM plus the total mass times the square of the distance from the axis to the CM. (4) If the object is a plane figure, the moment of inertia about an axis perpendicular to the plane and passing through the origin is equal to the sum of the moments of inertia about any two other mutually perpendicular axes lying in the plane and passing through the origin. The moments of inertia of a number of elementary shapes having uniform mass densities are given in Table 19–1, and the moments of inertia of some other objects, which may be deduced from Table 19–1, using the above properties, are given in Table 19–2. |
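As a numerical check of Eqs. (19.5), (19.6), and (19.7) together, one can simply sum over small mass elements of a rod; this Python sketch (the element count is arbitrary) does so:

```python
# Moment of inertia of a uniform rod by direct summation:
# about one end (19.5), about the center (19.6), and the
# parallel-axis relation (19.7) connecting the two.
M, L = 1.0, 1.0
n = 100_000
dm = M / n

I_end = sum(dm * ((i + 0.5) * L / n) ** 2 for i in range(n))
I_center = sum(dm * ((i + 0.5) * L / n - L / 2) ** 2 for i in range(n))

print(abs(I_end - M * L * L / 3) < 1e-6)      # True: I = ML^2/3
print(abs(I_center - M * L * L / 12) < 1e-6)  # True: I = ML^2/12
# Parallel-axis theorem: I_end = I_center + M (L/2)^2
print(abs(I_end - (I_center + M * (L / 2) ** 2)) < 1e-9)  # True
```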
|
1 | 19 | Center of Mass; Moment of Inertia | 4 | Rotational kinetic energy | Now let us go on to discuss dynamics further. In the analogy between linear motion and angular motion that we discussed in Chapter 18, we used the work theorem, but we did not talk about kinetic energy. What is the kinetic energy of a rigid body, rotating about a certain axis with an angular velocity $\omega$? We can immediately guess the correct answer by using our analogies. The moment of inertia corresponds to the mass, angular velocity corresponds to velocity, and so the kinetic energy ought to be $\tfrac{1}{2}I\omega^2$, and indeed it is, as will now be demonstrated. Suppose the object is rotating about some axis so that each point has a velocity whose magnitude is $\omega r_i$, where $r_i$ is the radius from the particular point to the axis. Then if $m_i$ is the mass of that point, the total kinetic energy of the whole thing is just the sum of the kinetic energies of all of the little pieces: \begin{equation*} T=\tfrac{1}{2}\sum m_iv_i^2= \tfrac{1}{2}\sum m_i(r_i\omega)^2. \end{equation*} Now $\omega^2$ is a constant, the same for all points. Thus \begin{equation} \label{Eq:I:19:8} T=\tfrac{1}{2}\omega^2\sum m_ir_i^2=\tfrac{1}{2}I\omega^2. \end{equation} At the end of Chapter 18 we pointed out that there are some interesting phenomena associated with an object which is not rigid, but which changes from one rigid condition with a definite moment of inertia, to another rigid condition. Namely, in our example of the turntable, we had a certain moment of inertia $I_1$ with our arms stretched out, and a certain angular velocity $\omega_1$. When we pulled our arms in, we had a different moment of inertia, $I_2$, and a different angular velocity, $\omega_2$, but again we were “rigid.” The angular momentum remained constant, since there was no torque about the vertical axis of the turntable. This means that $I_1\omega_1 = I_2\omega_2$. Now what about the energy? 
That is an interesting question. With our arms pulled in, we turn faster, but our moment of inertia is less, and it looks as though the energies might be equal. But they are not, because what does balance is $I\omega$, not $I\omega^2$. So if we compare the kinetic energy before and after, the kinetic energy before is $\tfrac{1}{2}I_1\omega_1^2 = \tfrac{1}{2}L\omega_1$, where $L=$ $I_1\omega_1=$ $I_2\omega_2$ is the angular momentum. Afterward, by the same argument, we have $T = \tfrac{1}{2}L\omega_2$ and since $\omega_2 > \omega_1$ the kinetic energy of rotation is greater than it was before. So we had a certain energy when our arms were out, and when we pulled them in, we were turning faster and had more kinetic energy. What happened to the theorem of the conservation of energy? Somebody must have done some work. We did work! When did we do any work? When we move a weight horizontally, we do not do any work. If we hold a thing out and pull it in, we do not do any work. But that is when we are not rotating! When we are rotating, there is centrifugal force on the weights. They are trying to fly out, so when we are going around we have to pull the weights in against the centrifugal force. So, the work we do against the centrifugal force ought to agree with the difference in rotational energy, and of course it does. That is where the extra kinetic energy comes from. There is still another interesting feature which we can treat only descriptively, as a matter of general interest. This feature is a little more advanced, but is worth pointing out because it is quite curious and produces many interesting effects. Consider that turntable experiment again. Consider the body and the arms separately, from the point of view of the man who is rotating. After the weights are pulled in, the whole object is spinning faster, but observe, the central part of the body is not changed, yet it is turning faster after the event than before. 
So, if we were to draw a circle around the inner body, and consider only objects inside the circle, their angular momentum would change; they are going faster. Therefore there must be a torque exerted on the body while we pull in our arms. No torque can be exerted by the centrifugal force, because that is radial. So that means that among the forces that are developed in a rotating system, centrifugal force is not the entire story, there is another force. This other force is called Coriolis force, and it has the very strange property that when we move something in a rotating system, it seems to be pushed sidewise. Like the centrifugal force, it is an apparent force. But if we live in a system that is rotating, and move something radially, we find that we must also push it sidewise to move it radially. This sidewise push which we have to exert is what turned our body around. Now let us develop a formula to show how this Coriolis force really works. Suppose Moe is sitting on a carousel that appears to him to be stationary. But from the point of view of Joe, who is standing on the ground and who knows the right laws of mechanics, the carousel is going around. Suppose that we have drawn a radial line on the carousel, and that Moe is moving some mass radially along this line. We would like to demonstrate that a sidewise force is required to do that. We can do this by paying attention to the angular momentum of the mass. It is always going around with the same angular velocity $\omega$, so that the angular momentum is \begin{equation*} L=mv_{\text{tang}}r=m\omega r\cdot r=m\omega r^2. \end{equation*} So when the mass is close to the center, it has relatively little angular momentum, but if we move it to a new position farther out, if we increase $r$, $m$ has more angular momentum, so a torque must be exerted in order to move it along the radius. (To walk along the radius in a carousel, one has to lean over and push sidewise. Try it sometime.) 
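The energy bookkeeping described above can be verified numerically: with the angular momentum $L$ held constant, the work done pulling a mass inward against the centrifugal force $m\omega^2r$ should equal the gain in kinetic energy $L^2/2mr^2$. A Python sketch, with all the particular numbers chosen arbitrarily:

```python
# Pull a mass m from r1 in to r2 on a frictionless turntable, with
# angular momentum L = m*omega*r^2 conserved (no torque about the axis).
# The work against the centrifugal force m*omega(r)^2*r should equal
# the increase in rotational kinetic energy L^2/(2 m r^2).
m, r1, r2 = 2.0, 1.0, 0.4
L = 3.0                                  # conserved angular momentum

n = 100_000
dr = (r1 - r2) / n
work = 0.0
for i in range(n):
    r = r1 - (i + 0.5) * dr
    omega = L / (m * r * r)              # omega grows as the mass comes in
    work += m * omega * omega * r * dr   # force m*omega^2*r through dr

dKE = L * L / (2 * m) * (1 / r2**2 - 1 / r1**2)
print(abs(work - dKE) / dKE < 1e-6)  # True
```

So the extra kinetic energy is accounted for, element by element, by the work done against the centrifugal force, just as the argument claims.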
The torque that is required is the rate of change of $L$ with time as $m$ moves along the radius. If $m$ moves only along the radius, omega stays constant, so that the torque is \begin{equation*} \tau=F_cr=\ddt{L}{t}=\ddt{(m\omega r^2)}{t}=2m\omega r\,\ddt{r}{t}, \end{equation*} where $F_c$ is the Coriolis force. What we really want to know is what sidewise force has to be exerted by Moe in order to move $m$ out at speed $v_r = dr/dt$. This is $F_c =$ $\tau/r =$ $2m\omega v_r$. Now that we have a formula for the Coriolis force, let us look at the situation a little more carefully, to see whether we can understand the origin of this force from a more elementary point of view. We note that the Coriolis force is the same at every radius, and is evidently present even at the origin! But it is especially easy to understand it at the origin, just by looking at what happens from the inertial system of Joe, who is standing on the ground. Figure 19–4 shows three successive views of $m$ just as it passes the origin at $t = 0$. Because of the rotation of the carousel, we see that $m$ does not move in a straight line, but in a curved path tangent to a diameter of the carousel where $r= 0$. In order for $m$ to go in a curve, there must be a force to accelerate it in absolute space. This is the Coriolis force. This is not the only case in which the Coriolis force occurs. We can also show that if an object is moving with constant speed around the circumference of a circle, there is also a Coriolis force. Why? Moe sees a velocity $v_M$ around the circle. On the other hand, Joe sees $m$ going around the circle with the velocity $v_J = v_M + \omega r$, because $m$ is also carried by the carousel. Therefore we know what the force really is, namely, the total centripetal force due to the velocity $v_J$, or $mv_J^2/r$; that is the actual force. Now from Moe’s point of view, this centripetal force has three pieces. 
We may write it all out as follows: \begin{equation*} F_r=-\frac{mv_J^2}{r}=-\frac{mv_M^2}{r}- 2mv_M\omega-m\omega^2r. \end{equation*} Now, $F_r$ is the force that Moe would see. Let us try to understand it. Would Moe appreciate the first term? “Yes,” he would say, “even if I were not turning, there would be a centripetal force if I were to run around a circle with velocity $v_M$.” This is simply the centripetal force that Moe would expect, having nothing to do with rotation. In addition, Moe is quite aware that there is another centripetal force that would act even on objects which are standing still on his carousel. This is the third term. But there is another term in addition to these, namely the second term, which is again $2m\omega v$. The Coriolis force $F_c$ was tangential when the velocity was radial, and now it is radial when the velocity is tangential. In fact, one expression has a minus sign relative to the other. The force is always in the same direction, relative to the velocity, no matter in which direction the velocity is. The force is at right angles to the velocity, and of magnitude $2m\omega v$. |
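The algebra of this decomposition is easy to confirm; a short Python check (with arbitrary numbers):

```python
# Check the decomposition of the centripetal force seen by Moe:
# -m v_J^2/r = -m v_M^2/r - 2 m v_M omega - m omega^2 r,
# where v_J = v_M + omega*r.
m, r, omega, v_M = 1.5, 2.0, 0.7, 0.9
v_J = v_M + omega * r

lhs = -m * v_J**2 / r
rhs = -m * v_M**2 / r - 2 * m * v_M * omega - m * omega**2 * r
print(abs(lhs - rhs) < 1e-12)  # True
```

The middle term, $-2mv_M\omega$, is the Coriolis piece, radial here because the velocity is tangential.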
|
1 | 20 | Rotation in space | 1 | Torques in three dimensions | In this chapter we shall discuss one of the most remarkable and amusing consequences of mechanics, the behavior of a rotating wheel. In order to do this we must first extend the mathematical formulation of rotational motion, the principles of angular momentum, torque, and so on, to three-dimensional space. We shall not use these equations in all their generality and study all their consequences, because this would take many years, and we must soon turn to other subjects. In an introductory course we can present only the fundamental laws and apply them to a very few situations of special interest. First, we notice that if we have a rotation in three dimensions, whether of a rigid body or any other system, what we deduced for two dimensions is still right. That is, it is still true that $xF_y - yF_x$ is the torque “in the $xy$-plane,” or the torque “around the $z$-axis.” It also turns out that this torque is still equal to the rate of change of $xp_y - yp_x$, for if we go back over the derivation of Eq. (18.15) from Newton’s laws we see that we did not have to assume that the motion was in a plane; when we differentiate $xp_y - yp_x$, we get $xF_y - yF_x$, so this theorem is still right. The quantity $xp_y - yp_x$, then, we call the angular momentum belonging to the $xy$-plane, or the angular momentum about the $z$-axis. This being true, we can use any other pair of axes and get another equation. For instance, we can use the $yz$-plane, and it is clear from symmetry that if we just substitute $y$ for $x$ and $z$ for $y$, we would find $yF_z - zF_y$ for the torque and $yp_z - zp_y$ would be the angular momentum associated with the $yz$-plane. Of course we could have another plane, the $zx$-plane, and for this we would find $zF_x - xF_z = d/dt\,(zp_x - xp_z)$. That these three equations can be deduced for the motion of a single particle is quite clear. 
Furthermore, if we added such things as $xp_y - yp_x$ together for many particles and called it the total angular momentum, we would have three kinds for the three planes $xy$, $yz$, and $zx$, and if we did the same with the forces, we would talk about the torque in the planes $xy$, $yz$, and $zx$ also. Thus we would have laws that the external torque associated with any plane is equal to the rate of change of the angular momentum associated with that plane. This is just a generalization of what we wrote in two dimensions. But now one may say, “Ah, but there are more planes; after all, can we not take some other plane at some angle, and calculate the torque on that plane from the forces? Since we would have to write another set of equations for every such plane, we would have a lot of equations!” Interestingly enough, it turns out that if we were to work out the combination $x'F_{y'} - y'F_{x'}$ for another plane, measuring the $x'$, $F_{y'}$, etc., in that plane, the result can be written as some combination of the three expressions for the $xy$-, $yz$- and $zx$-planes. There is nothing new. In other words, if we know what the three torques in the $xy$-, $yz$-, and $zx$-planes are, then the torque in any other plane, and correspondingly the angular momentum also, can be written as some combination of these: six percent of one and ninety-two percent of another, and so on. This property we shall now analyze. Suppose that in the $xyz$-axes, Joe has worked out all his torques and his angular momenta in his planes. But Moe has axes $x',y',z'$ in some other direction. To make it a little easier, we shall suppose that only the $x$- and $y$-axes have been turned. Moe’s $x'$ and $y'$ are new, but his $z'$ happens to be the same. That is, he has new planes, let us say, for $yz$ and $zx$. He therefore has new torques and angular momenta which he would work out. For example, his torque in the $x'y'$-plane would be equal to $x'F_{y'} - y'F_{x'}$ and so forth. 
What we must now do is to find the relationship between the new torques and the old torques, so we will be able to make a connection from one set of axes to the other. Someone may say, “That looks just like what we did with vectors.” And indeed, that is exactly what we are intending to do. Then he may say, “Well, isn’t torque just a vector?” It does turn out to be a vector, but we do not know that right away without making an analysis. So in the following steps we shall make the analysis. We shall not discuss every step in detail, since we only want to illustrate how it works. The torques calculated by Joe are \begin{equation} \begin{alignedat}{6} &\tau_{xy}~&&=x&&F_y&&-y&&F_x&&,\\[.5ex] &\tau_{yz}~&&=y&&F_z&&-z&&F_y&&,\\[.5ex] &\tau_{zx}~&&=z&&F_x&&-x&&F_z&&. \end{alignedat} \label{Eq:I:20:1} \end{equation} We digress at this point to note that in such cases as this one may get the wrong sign for some quantity if the coordinates are not handled in the right way. Why not write $\tau_{yz}= zF_y - yF_z$? The problem arises from the fact that a coordinate system may be either “right-handed” or “left-handed.” Having chosen (arbitrarily) a sign for, say, $\tau_{xy}$, then the correct expressions for the other two quantities may always be found by interchanging the letters $xyz$ in either the cyclic order $x\to y$, $y\to z$, $z\to x$, or the cyclic order $x\to z$, $z\to y$, $y\to x$. Moe now calculates the torques in his system: \begin{equation} \begin{alignedat}{6} &\tau_{x'y'}~&&=x'&&F_{y'}&&-y'&&F_{x'}&&,\\[.5ex] &\tau_{y'z'}~&&=y'&&F_{z'}&&-z'&&F_{y'}&&,\\[.5ex] &\tau_{z'x'}~&&=z'&&F_{x'}&&-x'&&F_{z'}&&. \end{alignedat} \label{Eq:I:20:2} \end{equation} Now we suppose that one coordinate system is rotated by a fixed angle $\theta$, such that the $z$- and $z'$-axes are the same. (This angle $\theta$ has nothing to do with rotating objects or what is going on inside the coordinate system. It is merely the relationship between the axes used by one man and the axes used by the other, and is supposedly constant.)
Thus the coordinates of the two systems are related by \begin{equation} \begin{alignedat}{4} &x'~&&=x&&\cos\theta+y&&\sin\theta,\\[.5ex] &y'~&&=y&&\cos\theta-x&&\sin\theta,\\[.5ex] &z'~&&=z&&. \end{alignedat} \label{Eq:I:20:3} \end{equation} Likewise, because force is a vector it transforms into the new system in the same way as do $x$, $y$, and $z$, since a thing is a vector if and only if the various components transform in the same way as $x$, $y$, and $z$: \begin{equation} \begin{alignedat}{4} &F_{x'}~&&=F_x&&\cos\theta+F_y&&\sin\theta,\\[.5ex] &F_{y'}~&&=F_y&&\cos\theta-F_x&&\sin\theta,\\[.5ex] &F_{z'}~&&=F_z&&. \end{alignedat} \label{Eq:I:20:4} \end{equation} Now we can find out how the torque transforms by merely substituting for $x'$, $y'$, and $z'$ the expressions (20.3), and for $F_{x'}$, $F_{y'}$, $F_{z'}$ those given by (20.4), all into (20.2). So, we have a rather long string of terms for $\tau_{x'y'}$ and (rather surprisingly at first) it turns out that it comes right down to $xF_y - yF_x$, which we recognize to be the torque in the $xy$-plane: \begin{align} \tau_{x'y'}&=\! \begin{aligned}[t] &(x\cos\theta+y\sin\theta)(F_y\cos\theta-F_x\sin\theta)\\ &-~(y\cos\theta-x\sin\theta)(F_x\cos\theta+F_y\sin\theta) \end{aligned}\notag\\[2ex] &=\! \begin{aligned}[t] &xF_y(\cos^2\theta+\sin^2\theta)-yF_x(\sin^2\theta+\cos^2\theta)\\ &+~xF_x(-\sin\theta\cos\theta+\sin\theta\cos\theta)\\ &+~yF_y(\sin\theta\cos\theta-\sin\theta\cos\theta) \end{aligned}\notag\\[2ex] \label{Eq:I:20:5} &=xF_y - yF_x = \tau_{xy}. \end{align}
That result is clear, for if we only turn our axes in the plane, the twist around $z$ in that plane is no different than it was before, because it is the same plane! What will be more interesting is the expression for $\tau_{y'z'}$, because that is a new plane. We now do exactly the same thing with the $y'z'$-plane, and it comes out as follows: \begin{align} \tau_{y'z'}&=\! \begin{aligned}[t] &(y\cos\theta-x\sin\theta)F_z\\ &-~z(F_y\cos\theta-F_x\sin\theta) \end{aligned}\notag\\[1ex] &=(yF_z - zF_y)\cos\theta+(zF_x - xF_z)\sin\theta\notag\\[1ex] \label{Eq:I:20:6} &=\tau_{yz}\cos\theta+\tau_{zx}\sin\theta. \end{align} Finally, we do it for $z'x'$: \begin{align} \tau_{z'x'}&=\! \begin{aligned}[t] &z(F_x\cos\theta+F_y\sin\theta)\\ &-(x\cos\theta+y\sin\theta)F_z \end{aligned}\notag\\[1ex] &=(zF_x - xF_z)\cos\theta-(yF_z - z F_y)\sin\theta\notag\\[1ex] \label{Eq:I:20:7} &=\tau_{zx}\cos\theta-\tau_{yz}\sin\theta. \end{align} We wanted to get a rule for finding torques in new axes in terms of torques in old axes, and now we have the rule. How can we ever remember that rule? If we look carefully at (20.5), (20.6), and (20.7), we see that there is a close relationship between these equations and the equations for $x$, $y$, and $z$. If, somehow, we could call $\tau_{xy}$ the $z$-component of something, let us call it the $z$-component of $\FLPtau$, then it would be all right; we would understand (20.5) as a vector transformation, since the $z$-component would be unchanged, as it should be.
Likewise, if we associate with the $yz$-plane the $x$-component of our newly invented vector, and with the $zx$-plane, the $y$-component, then these transformation expressions would read \begin{equation} \begin{alignedat}{4} &\tau_{z'}~&&=\tau_z&&,\\[.5ex] &\tau_{x'}~&&=\tau_x&&\cos\theta+\tau_y&&\sin\theta,\\[.5ex] &\tau_{y'}~&&=\tau_y&&\cos\theta-\tau_x&&\sin\theta, \end{alignedat} \label{Eq:I:20:8} \end{equation} which is just the rule for vectors! Therefore we have proved that we may identify the combination of $xF_y - yF_x$ with what we ordinarily call the $z$-component of a certain artificially invented vector. Although a torque is a twist on a plane, and it has no a priori vector character, mathematically it does behave like a vector. This vector is at right angles to the plane of the twist, and its length is proportional to the strength of the twist. The three components of such a quantity will transform like a real vector. So we represent torques by vectors; with each plane on which the torque is supposed to be acting, we associate a line at right angles, by a rule. But “at right angles” leaves the sign unspecified. To get the sign right, we must adopt a rule which will tell us that if the torque were in a certain sense on the $xy$-plane, then the axis that we want to associate with it is in the “up” $z$-direction. That is, somebody has to define “right” and “left” for us. Supposing that the coordinate system is $x$, $y$, $z$ in a right-hand system, then the rule will be the following: if we think of the twist as if we were turning a screw having a right-hand thread, then the direction of the vector that we will associate with that twist is in the direction that the screw would advance. Why is torque a vector? It is a miracle of good luck that we can associate a single axis with a plane, and therefore that we can associate a vector with the torque; it is a special property of three-dimensional space. 
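The rules (20.3), (20.4) and the results (20.5)–(20.8) can be spot-checked numerically. The sketch below uses invented values for the point, the force, and the rotation angle; none of these numbers come from the text.

```python
import numpy as np

# Invented sample values: one point, one force, one rotation angle.
theta = 0.7
x, y, z = 1.2, -0.5, 2.0
Fx, Fy, Fz = 0.3, 1.1, -0.8

c, s = np.cos(theta), np.sin(theta)
# Primed coordinates and force components, Eqs. (20.3) and (20.4)
xp, yp, zp = x*c + y*s, y*c - x*s, z
Fxp, Fyp, Fzp = Fx*c + Fy*s, Fy*c - Fx*s, Fz

# Torques in the old and the new frames
t_xy, t_yz, t_zx = x*Fy - y*Fx, y*Fz - z*Fy, z*Fx - x*Fz
tp_xy = xp*Fyp - yp*Fxp
tp_yz = yp*Fzp - zp*Fyp
tp_zx = zp*Fxp - xp*Fzp

# Eqs. (20.5)-(20.7): the torques transform like vector components
assert np.isclose(tp_xy, t_xy)
assert np.isclose(tp_yz, t_yz*c + t_zx*s)
assert np.isclose(tp_zx, t_zx*c - t_yz*s)
```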
In two dimensions, the torque is an ordinary scalar, and there need be no direction associated with it. In three dimensions, it is a vector. If we had four dimensions, we would be in great difficulty, because (if we had time, for example, as the fourth dimension) we would not only have planes like $xy$, $yz$, and $zx$, we would also have $tx$-, $ty$-, and $tz$-planes. There would be six of them, and one cannot represent six quantities as one vector in four dimensions. We will be living in three dimensions for a long time, so it is well to notice that the foregoing mathematical treatment did not depend upon the fact that $x$ was position and $F$ was force; it only depended on the transformation laws for vectors. Therefore if, instead of $x$, we used the $x$-component of some other vector, it is not going to make any difference. In other words, if we were to calculate $a_xb_y - a_yb_x$, where $\FLPa$ and $\FLPb$ are vectors, and call it the $z$-component of some new quantity $c$, then these new quantities form a vector $\FLPc$. We need a mathematical notation for the relationship of the new vector, with its three components, to the vectors $\FLPa$ and $\FLPb$. The notation that has been devised for this is $\FLPc = \FLPa\times\FLPb$. We have then, in addition to the ordinary scalar product in the theory of vector analysis, a new kind of product, called the vector product. Thus, if $\FLPc = \FLPa\times\FLPb$, this is the same as writing \begin{equation} \begin{alignedat}{6} &c_x~&&=a_y&&b_z&&-a_z&&b_y&&,\\[.5ex] &c_y~&&=a_z&&b_x&&-a_x&&b_z&&,\\[.5ex] &c_z~&&=a_x&&b_y&&-a_y&&b_x&&. \end{alignedat} \label{Eq:I:20:9} \end{equation} If we reverse the order of $\FLPa$ and $\FLPb$, calling $\FLPa$, $\FLPb$ and $\FLPb$, $\FLPa$, we would have the sign of $\FLPc$ reversed, because $c_z$ would be $b_xa_y - b_ya_x$. Therefore the cross product is unlike ordinary multiplication, where $ab = ba$; for the cross product, $\FLPb\times\FLPa = -\FLPa\times\FLPb$. 
From this, we can prove at once that if $\FLPa = \FLPb$, the cross product is zero. Thus, $\FLPa\times\FLPa = 0$. The cross product is very important for representing the features of rotation, and it is important that we understand the geometrical relationship of the three vectors $\FLPa$, $\FLPb$, and $\FLPc$. Of course the relationship in components is given in Eq. (20.9) and from that one can determine what the relationship is in geometry. The answer is, first, that the vector $\FLPc$ is perpendicular to both $\FLPa$ and $\FLPb$. (Try to calculate $\FLPc\cdot\FLPa$, and see if it does not reduce to zero.) Second, the magnitude of $\FLPc$ turns out to be the magnitude of $\FLPa$ times the magnitude of $\FLPb$ times the sine of the angle between the two. In which direction does $\FLPc$ point? Imagine that we turn $\FLPa$ into $\FLPb$ through an angle less than $180^\circ$; a screw with a right-hand thread turning in this way will advance in the direction of $\FLPc$. The fact that we say a right-hand screw instead of a left-hand screw is a convention, and is a perpetual reminder that if $\FLPa$ and $\FLPb$ are “honest” vectors in the ordinary sense, the new kind of “vector” which we have created by $\FLPa\times\FLPb$ is artificial, or slightly different in its character from $\FLPa$ and $\FLPb$, because it was made up with a special rule. If $\FLPa$ and $\FLPb$ are called ordinary vectors, we have a special name for them, we call them polar vectors. Examples of such vectors are the coordinate $\FLPr$, force $\FLPF$, momentum $\FLPp$, velocity $\FLPv$, electric field $\FLPE$, etc.; these are ordinary polar vectors. Vectors which involve just one cross product in their definition are called axial vectors or pseudo vectors. Examples of pseudo vectors are, of course, torque $\FLPtau$ and the angular momentum $\FLPL$. It also turns out that the angular velocity $\FLPomega$ is a pseudo vector, as is the magnetic field $\FLPB$. 
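These geometric properties of $\FLPc=\FLPa\times\FLPb$ are easy to verify numerically; a minimal sketch with two invented vectors:

```python
import numpy as np

# Two arbitrary (made-up) vectors
a = np.array([1.0, 2.0, -0.5])
b = np.array([-0.3, 0.8, 1.5])
c = np.cross(a, b)   # components as in Eq. (20.9)

# c is perpendicular to both a and b
assert np.isclose(np.dot(c, a), 0.0)
assert np.isclose(np.dot(c, b), 0.0)

# |c| = |a| |b| sin(angle between a and b)
cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
sin_angle = np.sqrt(1.0 - cos_angle**2)
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * sin_angle)

# b x a = -(a x b), and a x a = 0
assert np.allclose(np.cross(b, a), -c)
assert np.allclose(np.cross(a, a), 0.0)
```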
In order to complete the mathematical properties of vectors, we should know all the rules for their multiplication, using dot and cross products. In our applications at the moment, we will need very little of this, but for the sake of completeness we shall write down all of the rules for vector multiplication so that we can use the results later. These are \begin{equation} \begin{alignedat}{2} &(\text{a})&\quad \FLPa\times(\FLPb+\FLPc)&\;=\FLPa\times\FLPb+\FLPa\times\FLPc,\\[.5ex] &(\text{b})&\quad (\alpha\FLPa)\times\FLPb&\;=\alpha(\FLPa\times\FLPb),\\[.5ex] &(\text{c})&\quad \FLPa\cdot(\FLPb\times\FLPc)&\;=(\FLPa\times\FLPb)\cdot\FLPc,\\[.5ex] &(\text{d})&\quad \FLPa\times(\FLPb\times\FLPc)&\;=\FLPb(\FLPa\cdot\FLPc)- \FLPc(\FLPa\cdot\FLPb),\\[.5ex] &(\text{e})&\;\quad \FLPa\times\FLPa&\;=0,\\[.5ex] &(\text{f})&\;\quad \FLPa\cdot(\FLPa\times\FLPb)&\;=0. \end{alignedat} \label{Eq:I:20:10} \end{equation} |
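Each rule in (20.10) can be spot-checked on arbitrary vectors; a sketch (the particular random vectors are, of course, immaterial):

```python
import numpy as np

# Three arbitrary vectors and an arbitrary scalar
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
alpha = 2.5

assert np.allclose(np.cross(a, b + c), np.cross(a, b) + np.cross(a, c))   # (a)
assert np.allclose(np.cross(alpha*a, b), alpha*np.cross(a, b))            # (b)
assert np.isclose(np.dot(a, np.cross(b, c)), np.dot(np.cross(a, b), c))   # (c)
assert np.allclose(np.cross(a, np.cross(b, c)),
                   b*np.dot(a, c) - c*np.dot(a, b))                       # (d)
assert np.allclose(np.cross(a, a), 0.0)                                   # (e)
assert np.isclose(np.dot(a, np.cross(a, b)), 0.0)                         # (f)
```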
20–2 The rotation equations using cross products

Now let us ask whether any equations in physics can be written using the cross product. The answer, of course, is that a great many equations can be so written. For instance, we see immediately that the torque is equal to the position vector cross the force: \begin{equation} \label{Eq:I:20:11} \FLPtau=\FLPr\times\FLPF. \end{equation} This is a vector summary of the three equations $\tau_x = yF_z - zF_y$, etc. By the same token, the angular momentum vector, if there is only one particle present, is the distance from the origin multiplied by the vector momentum: \begin{equation} \label{Eq:I:20:12} \FLPL=\FLPr\times\FLPp. \end{equation} For three-dimensional space rotation, the dynamical law analogous to the law $\FLPF = d\FLPp/dt$ of Newton, is that the torque vector is the rate of change with time of the angular momentum vector: \begin{equation} \label{Eq:I:20:13} \FLPtau=d\FLPL/dt. \end{equation} If we sum (20.13) over many particles, the external torque on a system is the rate of change of the total angular momentum: \begin{equation} \label{Eq:I:20:14} \FLPtau_{\text{ext}}=d\FLPL_{\text{tot}}/dt. \end{equation} Another theorem: If the total external torque is zero, then the total vector angular momentum of the system is a constant. This is called the law of conservation of angular momentum. If there is no torque on a given system, its angular momentum cannot change. What about angular velocity? Is it a vector? We have already discussed turning a solid object about a fixed axis, but for a moment suppose that we are turning it simultaneously about two axes. It might be turning about an axis inside a box, while the box is turning about some other axis. The net result of such combined motions is that the object simply turns about some new axis! The wonderful thing about this new axis is that it can be figured out this way.
If the rate of turning in the $xy$-plane is written as a vector in the $z$-direction whose length is equal to the rate of rotation in the plane, and if another vector is drawn in the $y$-direction, say, which is the rate of rotation in the $zx$-plane, then if we add these together as a vector, the magnitude of the result tells us how fast the object is turning, and the direction tells us in what plane, by the rule of the parallelogram. That is to say, simply, angular velocity is a vector, where we draw the magnitudes of the rotations in the three planes as projections at right angles to those planes.1 As a simple application of the use of the angular velocity vector, we may evaluate the power being expended by the torque acting on a rigid body. The power, of course, is the rate of change of work with time; in three dimensions, the power turns out to be $P = \FLPtau\cdot\FLPomega$. All the formulas that we wrote for plane rotation can be generalized to three dimensions. For example, if a rigid body is turning about a certain axis with angular velocity $\FLPomega$, we might ask, “What is the velocity of a point at a certain radial position $\FLPr$?” We shall leave it as a problem for the student to show that the velocity of a particle in a rigid body is given by $\FLPv= \FLPomega\times\FLPr$, where $\FLPomega$ is the angular velocity and $\FLPr$ is the position. Also, as another example of cross products, we had a formula for Coriolis force, which can also be written using cross products: $\FLPF_c = 2m\FLPv\times\FLPomega$. That is, if a particle is moving with velocity $\FLPv$ in a coordinate system which is, in fact, rotating with angular velocity $\FLPomega$, and we want to think in terms of the rotating coordinate system, then we have to add the pseudo force $\FLPF_c$. |
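All of these cross-product formulas can be exercised together in a few lines; the following sketch uses invented values for one particle:

```python
import numpy as np

# Invented sample values for a single particle
r = np.array([1.0, 0.5, -0.2])      # position
F = np.array([0.0, 2.0, 1.0])       # force
p = np.array([0.3, -0.1, 0.4])      # momentum
omega = np.array([0.0, 0.0, 3.0])   # angular velocity, about z

tau = np.cross(r, F)     # torque, Eq. (20.11)
L = np.cross(r, p)       # angular momentum, Eq. (20.12)
v = np.cross(omega, r)   # velocity of a point of a rigid body
P = np.dot(tau, omega)   # power expended by the torque

# Component checks: tau_z = x F_y - y F_x, L_x = y p_z - z p_y
assert np.isclose(tau[2], r[0]*F[1] - r[1]*F[0])
assert np.isclose(L[0], r[1]*p[2] - r[2]*p[1])
# v = omega x r is perpendicular to both omega and r
assert np.isclose(np.dot(v, omega), 0.0) and np.isclose(np.dot(v, r), 0.0)
# here only the z-components contribute to the power
assert np.isclose(P, tau[2]*omega[2])
```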
20–3 The gyroscope

Let us now return to the law of conservation of angular momentum. This law may be demonstrated with a rapidly spinning wheel, or gyroscope, as follows (see Fig. 20–1). If we sit on a swivel chair and hold the spinning wheel with its axis horizontal, the wheel has an angular momentum about the horizontal axis. Angular momentum around a vertical axis cannot change because of the (frictionless) pivot of the chair, so if we turn the axis of the wheel into the vertical, then the wheel would have angular momentum about the vertical axis, because it is now spinning about this axis. But the system (wheel, ourself, and chair) cannot have a vertical component, so we and the chair have to turn in the direction opposite to the spin of the wheel, to balance it. First let us analyze in more detail the thing we have just described. What is surprising, and what we must understand, is the origin of the forces which turn us and the chair around as we turn the axis of the gyroscope toward the vertical. Figure 20–2 shows the wheel spinning rapidly about the $y$-axis. Therefore its angular velocity is about that axis and, it turns out, its angular momentum is likewise in that direction. Now suppose that we wish to rotate the wheel about the $x$-axis at a small angular velocity $\Omega$; what forces are required? After a short time $\Delta t$, the axis has turned to a new position, tilted at an angle $\Delta\theta$ with the horizontal. Since the major part of the angular momentum is due to the spin on the axis (very little is contributed by the slow turning), we see that the angular momentum vector has changed. What is the change in angular momentum? The angular momentum does not change in magnitude, but it does change in direction by an amount $\Delta\theta$.
The magnitude of the vector $\Delta\FLPL$ is thus $\Delta L = L_0\,\Delta\theta$, so that the torque, which is the time rate of change of the angular momentum, is $\tau= \Delta L/\Delta t = L_0\,\Delta\theta/\Delta t = L_0\Omega$. Taking the directions of the various quantities into account, we see that \begin{equation} \label{Eq:I:20:15} \FLPtau=\FLPOmega\times\FLPL_0. \end{equation} Thus, if $\FLPOmega$ and $\FLPL_0$ are both horizontal, as shown in the figure, $\FLPtau$ is vertical. To produce such a torque, horizontal forces $\FLPF$ and $-\FLPF$ must be applied at the ends of the axle. How are these forces applied? By our hands, as we try to rotate the axis of the wheel into the vertical direction. But Newton’s Third Law demands that equal and opposite forces (and equal and opposite torques) act on us. This causes us to rotate in the opposite sense about the vertical axis $z$. This result can be generalized for a rapidly spinning top. In the familiar case of a spinning top, gravity acting on its center of mass furnishes a torque about the point of contact with the floor (see Fig. 20–3). This torque is in the horizontal direction, and causes the top to precess with its axis moving in a circular cone about the vertical. If $\FLPOmega$ is the (vertical) angular velocity of precession, we again find that \begin{equation*} \FLPtau=d\FLPL/dt=\FLPOmega\times\FLPL_0. \end{equation*} Thus, when we apply a torque to a rapidly spinning top, the direction of the precessional motion is in the direction of the torque, or at right angles to the forces producing the torque. We may now claim to understand the precession of gyroscopes, and indeed we do, mathematically. However, this is a mathematical thing which, in a sense, appears as a “miracle.” It will turn out, as we go to more and more advanced physics, that many simple things can be deduced mathematically more rapidly than they can be really understood in a fundamental or simple sense. 
This is a strange characteristic, and as we get into more and more advanced work there are circumstances in which mathematics will produce results which no one has really been able to understand in any direct fashion. An example is the Dirac equation, which appears in a very simple and beautiful form, but whose consequences are hard to understand. In our particular case, the precession of a top looks like some kind of a miracle involving right angles and circles, and twists and right-hand screws. What we should try to do is to understand it in a more physical way. How can we explain the torque in terms of the real forces and the accelerations? We note that when the wheel is precessing, the particles that are going around the wheel are not really moving in a plane because the wheel is precessing (see Fig. 20–4). As we explained previously (Fig. 19–4), the particles which are crossing through the precession axis are moving in curved paths, and this requires application of a lateral force. This is supplied by our pushing on the axle, which then communicates the force to the rim through the spokes. “Wait,” someone says, “what about the particles that are going back on the other side?” It does not take long to decide that there must be a force in the opposite direction on that side. The net force that we have to apply is therefore zero. The forces balance out, but one of them must be applied at one side of the wheel, and the other must be applied at the other side of the wheel. We could apply these forces directly, but because the wheel is solid we are allowed to do it by pushing on the axle, since forces can be carried up through the spokes. What we have so far proved is that if the wheel is precessing, it can balance the torque due to gravity or some other applied torque. But all we have shown is that this is a solution of an equation. That is, if the torque is given, and if we get the spinning started right, then the wheel will precess smoothly and uniformly. 
But we have not proved (and it is not true) that a uniform precession is the most general motion a spinning body can undergo as the result of a given torque. The general motion involves also a “wobbling” about the mean precession. This “wobbling” is called nutation. Some people like to say that when one exerts a torque on a gyroscope, it turns and it precesses, and that the torque produces the precession. It is very strange that when one suddenly lets go of a gyroscope, it does not fall under the action of gravity, but moves sidewise instead! Why is it that the downward force of the gravity, which we know and feel, makes it go sidewise? All the formulas in the world like (20.15) are not going to tell us, because (20.15) is a special equation, valid only after the gyroscope is precessing nicely. What really happens, in detail, is the following. If we were to hold the axis absolutely fixed, so that it cannot precess in any manner (but the top is spinning) then there is no torque acting, not even a torque from gravity, because it is balanced by our fingers. But if we suddenly let go, then there will instantaneously be a torque from gravity. Anyone in his right mind would think that the top would fall, and that is what it starts to do, as can be seen if the top is not spinning too fast. The gyro actually does fall, as we would expect. But as soon as it falls, it is then turning, and if this turning were to continue, a torque would be required. In the absence of a torque in this direction, the gyro begins to “fall” in the direction opposite that of the missing force. This gives the gyro a component of motion around the vertical axis, as it would have in steady precession. But the actual motion “overshoots” the steady precessional velocity, and the axis actually rises again to the level from which it started. The path followed by the end of the axle is a cycloid (the path followed by a pebble that is stuck in the tread of an automobile tire). 
Ordinarily, this motion is too quick for the eye to follow, and it damps out quickly because of the friction in the gimbal bearings, leaving only the steady precessional drift (Fig. 20–5). The slower the wheel spins, the more obvious the nutation is. When the motion settles down, the axis of the gyro is a little bit lower than it was at the start. Why? (These are the more complicated details, but we bring them in because we do not want the reader to get the idea that the gyroscope is an absolute miracle. It is a wonderful thing, but it is not a miracle.) If we were holding the axis absolutely horizontally, and suddenly let go, then the simple precession equation would tell us that it precesses, that it goes around in a horizontal plane. But that is impossible! Although we neglected it before, it is true that the wheel has some moment of inertia about the precession axis, and if it is moving about that axis, even slowly, it has a weak angular momentum about the axis. Where did it come from? If the pivots are perfect, there is no torque about the vertical axis. How then does it get to precess if there is no change in the angular momentum? The answer is that the cycloidal motion of the end of the axis damps down to the average, steady motion of the center of the equivalent rolling circle. That is, it settles down a little bit low. Because it is low, the spin angular momentum now has a small vertical component, which is exactly what is needed for the precession. So you see it has to go down a little, in order to go around. It has to yield a little bit to the gravity; by turning its axis down a little bit, it maintains the rotation about the vertical axis. That, then, is the way a gyroscope works. |
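The steady-precession relation $\FLPtau=\FLPOmega\times\FLPL_0$ can also be tried with numbers; the following sketch uses an invented small top (the mass, lever arm, spin rate, and moment of inertia are all made up):

```python
import numpy as np

# All numbers invented: a small, fast-spinning top pivoted at one end.
M, g, R = 0.5, 9.8, 0.05        # mass (kg), gravity (m/s^2), pivot-to-CM distance (m)
I, omega_spin = 2e-4, 200.0     # moment of inertia about the spin axis (kg m^2), spin (rad/s)

L0 = I * omega_spin             # magnitude of the spin angular momentum
tau = M * g * R                 # magnitude of the gravitational torque about the pivot
Omega = tau / L0                # steady precession rate, from tau = Omega * L0

# Vector form, Eq. (20.15): spin axis horizontal along x, precession about vertical z
L_vec = np.array([L0, 0.0, 0.0])
Omega_vec = np.array([0.0, 0.0, Omega])
tau_vec = np.cross(Omega_vec, L_vec)   # comes out horizontal, along y

assert np.isclose(np.linalg.norm(tau_vec), tau)
assert np.isclose(tau_vec[1], tau)     # perpendicular to both L0 and the vertical
```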
20–4 Angular momentum of a solid body

Before we leave the subject of rotations in three dimensions, we shall discuss, at least qualitatively, a few effects that occur in three-dimensional rotations that are not self-evident. The main effect is that, in general, the angular momentum of a rigid body is not necessarily in the same direction as the angular velocity. Consider a wheel that is fastened onto a shaft in a lopsided fashion, but with the axis through the center of gravity, to be sure (Fig. 20–6). When we spin the wheel around the axis, anybody knows that there will be shaking at the bearings because of the lopsided way we have it mounted. Qualitatively, we know that in the rotating system there is centrifugal force acting on the wheel, trying to throw its mass as far as possible from the axis. This tends to line up the plane of the wheel so that it is perpendicular to the axis. To resist this tendency, a torque is exerted by the bearings. If there is a torque exerted by the bearings, there must be a rate of change of angular momentum. How can there be a rate of change of angular momentum when we are simply turning the wheel about the axis? Suppose we break the angular velocity $\FLPomega$ into components $\FLPomega_1$ and $\FLPomega_2$ perpendicular and parallel to the plane of the wheel. What is the angular momentum? The moments of inertia about these two axes are different, so the angular momentum components, which (in these particular, special axes only) are equal to the moments of inertia times the corresponding angular velocity components, are in a different ratio than are the angular velocity components. Therefore the angular momentum vector is in a direction in space not along the axis. When we turn the object, we have to turn the angular momentum vector in space, so we must exert torques on the shaft.
Although it is much too complicated to prove here, there is a very important and interesting property of the moment of inertia which is easy to describe and to use, and which is the basis of our above analysis. This property is the following: Any rigid body, even an irregular one like a potato, possesses three mutually perpendicular axes through the CM, such that the moment of inertia about one of these axes has the greatest possible value for any axis through the CM, the moment of inertia about another of the axes has the minimum possible value, and the moment of inertia about the third axis is intermediate between these two (or equal to one of them). These axes are called the principal axes of the body, and they have the important property that if the body is rotating about one of them, its angular momentum is in the same direction as the angular velocity. For a body having axes of symmetry, the principal axes are along the symmetry axes. If we take the $x$-, $y$-, and $z$-axes along the principal axes, and call the corresponding principal moments of inertia $A$, $B$, and $C$, we may easily evaluate the angular momentum and the kinetic energy of rotation of the body for any angular velocity $\FLPomega$. If we resolve $\FLPomega$ into components $\omega_x$, $\omega_y$, and $\omega_z$ along the $x$-, $y$-, $z$-axes, and use unit vectors $\FLPi$, $\FLPj$, $\FLPk$, also along $x$, $y$, $z$, we may write the angular momentum as \begin{equation} \label{Eq:I:20:16} \FLPL=A\omega_x\FLPi+B\omega_y\FLPj+C\omega_z\FLPk. \end{equation} The kinetic energy of rotation is \begin{align} \label{Eq:I:20:17} \text{KE} &=\tfrac{1}{2}(A\omega_x^2+B\omega_y^2+C\omega_z^2)\\[1ex] &=\tfrac{1}{2}\FLPL\cdot\FLPomega.\ \end{align} |
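Equations (20.16) and (20.17) are easy to check numerically for an invented set of principal moments; a sketch:

```python
import numpy as np

# Invented principal moments of inertia and angular velocity components
A, B, C = 0.1, 0.2, 0.3                # kg m^2, about the principal axes
omega = np.array([1.0, -2.0, 0.5])     # rad/s

L = np.array([A*omega[0], B*omega[1], C*omega[2]])         # Eq. (20.16)
KE = 0.5*(A*omega[0]**2 + B*omega[1]**2 + C*omega[2]**2)   # Eq. (20.17)

# The two forms of (20.17) agree
assert np.isclose(KE, 0.5*np.dot(L, omega))
# Since A, B, C differ, L is not parallel to omega
assert np.linalg.norm(np.cross(L, omega)) > 0.0
```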
Chapter 21. The Harmonic Oscillator

21–1 Linear differential equations

In the study of physics, usually the course is divided into a series of subjects, such as mechanics, electricity, optics, etc., and one studies one subject after the other. For example, this course has so far dealt mostly with mechanics. But a strange thing occurs again and again: the equations which appear in different fields of physics, and even in other sciences, are often almost exactly the same, so that many phenomena have analogs in these different fields. To take the simplest example, the propagation of sound waves is in many ways analogous to the propagation of light waves. If we study acoustics in great detail we discover that much of the work is the same as it would be if we were studying optics in great detail. So the study of a phenomenon in one field may permit an extension of our knowledge in another field. It is best to realize from the first that such extensions are possible, for otherwise one might not understand the reason for spending a great deal of time and energy on what appears to be only a small part of mechanics. The harmonic oscillator, which we are about to study, has close analogs in many other fields; although we start with a mechanical example of a weight on a spring, or a pendulum with a small swing, or certain other mechanical devices, we are really studying a certain differential equation. This equation appears again and again in physics and in other sciences, and in fact it is a part of so many phenomena that its close study is well worth our while.
Some of the phenomena involving this equation are the oscillations of a mass on a spring; the oscillations of charge flowing back and forth in an electrical circuit; the vibrations of a tuning fork which is generating sound waves; the analogous vibrations of the electrons in an atom, which generate light waves; the equations for the operation of a servosystem, such as a thermostat trying to adjust a temperature; complicated interactions in chemical reactions; the growth of a colony of bacteria in interaction with the food supply and the poisons the bacteria produce; foxes eating rabbits eating grass, and so on; all these phenomena follow equations which are very similar to one another, and this is the reason why we study the mechanical oscillator in such detail. The equations are called linear differential equations with constant coefficients. A linear differential equation with constant coefficients is a differential equation consisting of a sum of several terms, each term being a derivative of the dependent variable with respect to the independent variable, and multiplied by some constant. Thus \begin{equation} \label{Eq:I:21:1} a_n\,d^nx/dt^n+a_{n-1}\,d^{n-1}x/dt^{n-1}+\dotsb +a_1\,dx/dt+a_0x=f(t) \end{equation}
is called a linear differential equation of order $n$ with constant coefficients (each $a_i$ is constant).
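As a small numerical illustration of Eq. (21.1), we can check one standard fact about such equations (stated here without derivation, since it is not part of this section): $x = e^{rt}$ satisfies the homogeneous equation when $r$ is a root of the polynomial with the same coefficients. The second-order coefficients below are invented:

```python
import numpy as np

# Invented constant coefficients for a second-order homogeneous equation:
# a2 x'' + a1 x' + a0 x = 0
a2, a1, a0 = 1.0, 3.0, 2.0
r = np.roots([a2, a1, a0])[0]     # a root of a2 r^2 + a1 r + a0 = 0

# Check x(t) = e^{r t} against the equation, using centered differences
t, h = 0.7, 1e-4
x = lambda s: np.exp(r*s)
dx = (x(t+h) - x(t-h)) / (2*h)            # first derivative
d2x = (x(t+h) - 2*x(t) + x(t-h)) / h**2   # second derivative

residual = a2*d2x + a1*dx + a0*x(t)
assert abs(residual) < 1e-5
```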
21–2 The harmonic oscillator

Perhaps the simplest mechanical system whose motion follows a linear differential equation with constant coefficients is a mass on a spring: first the spring stretches to balance the gravity; once it is balanced, we then discuss the vertical displacement of the mass from its equilibrium position (Fig. 21–1). We shall call this upward displacement $x$, and we shall also suppose that the spring is perfectly linear, in which case the force pulling back when the spring is stretched is precisely proportional to the amount of stretch. That is, the force is $-kx$ (with a minus sign to remind us that it pulls back). Thus the mass times the acceleration must equal $-kx$: \begin{equation} \label{Eq:I:21:2} m\,d^2x/dt^2=-kx. \end{equation} For simplicity, suppose it happens (or we change our unit of time measurement) that the ratio $k/m = 1$. We shall first study the equation \begin{equation} \label{Eq:I:21:3} d^2x/dt^2=-x. \end{equation} Later we shall come back to Eq. (21.2) with the $k$ and $m$ explicitly present. We have already analyzed Eq. (21.3) in detail numerically; when we first introduced the subject of mechanics we solved this equation (see Eq. 9.12) to find the motion. By numerical integration we found a curve (Fig. 9–4) which showed that if $m$ was initially displaced, but at rest, it would come down and go through zero; we did not then follow it any farther, but of course we know that it just keeps going up and down—it oscillates. When we calculated the motion numerically, we found that it went through the equilibrium point at $t = 1.570$. The length of the whole cycle is four times this long, or $t_0 = 6.28$ “sec.” This was found numerically, before we knew much calculus. We assume that in the meantime the Mathematics Department has brought forth a function which, when differentiated twice, is equal to itself with a minus sign.
(There are, of course, ways of getting at this function in a direct fashion, but they are more complicated than already knowing what the answer is.) The function is $x= \cos t$. If we differentiate this we find $dx/dt = -\sin t$ and $d^2x/dt^2 =$ $-\cos t=$ $-x$. The function $x = \cos t$ starts, at $t = 0$, with $x= 1$, and no initial velocity; that was the situation with which we started when we did our numerical work. Now that we know that $x = \cos t$, we can calculate a precise value for the time at which it should pass $x = 0$. The answer is $t=\pi/2$, or $1.57080$. We were wrong in the last figure because of the errors of numerical analysis, but it was very close! Now to go further with the original problem, we restore the time units to real seconds. What is the solution then? First of all, we might think that we can get the constants $k$ and $m$ in by multiplying $\cos t$ by something. So let us try the equation $x = A \cos t$; then we find $dx/dt = -A \sin t$, and $d^2x/dt^2 =$ $-A \cos t=$ $-x$. Thus we discover to our horror that we did not succeed in solving Eq. (21.2), but we got Eq. (21.3) again! That fact illustrates one of the most important properties of linear differential equations: if we multiply a solution of the equation by any constant, it is again a solution. The mathematical reason for this is clear. If $x$ is a solution, and we multiply both sides of the equation, say by $A$, we see that all derivatives are also multiplied by $A$, and therefore $Ax$ is just as good a solution of the original equation as $x$ was. The physics of it is the following. If we have a weight on a spring, and pull it down twice as far, the force is twice as much, the resulting acceleration is twice as great, the velocity it acquires in a given time is twice as great, the distance covered in a given time is twice as great; but it has to cover twice as great a distance in order to get back to the origin because it is pulled down twice as far. 
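Incidentally, the zero crossing just quoted ($t = 1.570$ numerically, $\pi/2$ exactly) is easy to recheck by redoing the numerical integration with a smaller step; a sketch in the spirit of Chapter 9:

```python
import math

# Integrate d2x/dt2 = -x from x = 1, v = 0 with a small step and find
# the first zero crossing of x; it should land close to pi/2 = 1.57080.
dt = 0.0001
x, v, t = 1.0, 0.0, 0.0
while x > 0.0:
    v -= x * dt    # update the velocity from the acceleration -x
    x += v * dt    # then update the position from the new velocity
    t += dt

assert abs(t - math.pi/2) < 0.001
```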
So it takes the same time to get back to the origin, irrespective of the initial displacement. In other words, with a linear equation, the motion has the same time pattern, no matter how “strong” it is. That was the wrong thing to do—it only taught us that we can multiply the solution by anything, and it satisfies the same equation, but not a different equation. After a little cut and try to get to an equation with a different constant multiplying $x$, we find that we must alter the scale of time. In other words, Eq. (21.2) has a solution of the form \begin{equation} \label{Eq:I:21:4} x=\cos\omega_0t. \end{equation} (It is important to realize that in the present case, $\omega_0$ is not an angular velocity of a spinning body, but we run out of letters if we are not allowed to use the same letter for more than one thing.) The reason we put a subscript “$0$” on $\omega$ is that we are going to have more omegas before long; let us remember that $\omega_0$ refers to the natural motion of this oscillator. Now we try Eq. (21.4) and this time we are more successful, because $dx/dt =-\omega_0\sin \omega_0t$ and $d^2x/dt^2 =$ $-\omega_0^2\cos\omega_0t =$ $-\omega_0^2x$. So at last we have solved the equation that we really wanted to solve. The equation $d^2x/dt^2 =-\omega_0^2x$ is the same as Eq. (21.2) if $\omega_0^2 = k/m$. The next thing we must investigate is the physical significance of $\omega_0$. We know that the cosine function repeats itself when the angle it refers to is $2\pi$. So $x= \cos\omega_0t$ will repeat its motion, it will go through a complete cycle, when the “angle” changes by $2\pi$. The quantity $\omega_0t$ is often called the phase of the motion. In order to change $\omega_0t$ by $2\pi$, the time must change by an amount $t_0$, called the period of one complete oscillation; of course $t_0$ must be such that $\omega_0t_0 = 2\pi$. 
That is, $\omega_0t_0 $ must account for one cycle of the angle, and then everything will repeat itself—if we increase $t$ by $t_0$, we add $2\pi$ to the phase. Thus \begin{equation} \label{Eq:I:21:5} t_0=2\pi/\omega_0=2\pi\sqrt{m/k}. \end{equation} Thus if we had a heavier mass, it would take longer to oscillate back and forth on a spring. That is because it has more inertia, and so, while the forces are the same, it takes longer to get the mass moving. Or, if the spring is stronger, it will move more quickly, and that is right: the period is less if the spring is stronger. Note that the period of oscillation of a mass on a spring does not depend in any way on how it has been started, how far down we pull it. The period is determined, but the amplitude of the oscillation is not determined by the equation of motion (21.2). The amplitude is determined, in fact, by how we let go of it, by what we call the initial conditions or starting conditions. Actually, we have not quite found the most general possible solution of Eq. (21.2). There are other solutions. It should be clear why: because all of the cases covered by $x = a \cos \omega_0t$ start with an initial displacement and no initial velocity. But it is possible, for instance, for the mass to start at $x= 0$, and we may then give it an impulsive kick, so that it has some speed at $t = 0$. Such a motion is not represented by a cosine—it is represented by a sine. To put it another way, if $x = \cos \omega_0t$ is a solution, then is it not obvious that if we were to happen to walk into the room at some time (which we would call “$t = 0$”) and saw the mass as it was passing $x= 0$, it would keep on going just the same? Therefore, $x = \cos \omega_0t$ cannot be the most general solution; it must be possible to shift the beginning of time, so to speak. As an example, we could write the solution this way: $x= a \cos \omega_0(t - t_1)$, where $t_1$ is some constant. 
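Equation (21.5) can be exercised directly: quadrupling the mass doubles the period, and quadrupling the stiffness halves it. The particular numbers below are illustrative only.

```python
import math

# Period of a mass on a spring, t0 = 2*pi*sqrt(m/k)  (Eq. 21.5).
def period(m, k):
    return 2 * math.pi * math.sqrt(m / k)

print(period(1.0, 1.0))   # 2*pi seconds for m = k = 1
print(period(4.0, 1.0))   # heavier mass: four times the mass, twice the period
print(period(1.0, 4.0))   # stiffer spring: four times the stiffness, half the period
```

Note that the amplitude never enters: the period depends only on $m$ and $k$, just as the text says.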
This also corresponds to shifting the origin of time to some new instant. Furthermore, we may expand \begin{equation*} \cos\,(\omega_0t+\Delta)=\cos\omega_0t\cos\Delta- \sin\omega_0t\sin\Delta, \end{equation*} and write \begin{equation*} x = A \cos \omega_0t + B \sin \omega_0t, \end{equation*} where $A = a \cos\Delta$ and $B = -a \sin\Delta$. Any one of these forms is a possible way to write the complete, general solution of (21.2): that is, every solution of the differential equation $d^2x/dt^2 =-\omega_0^2x$ that exists in the world can be written as \begin{alignat}{2} &(\text{a})\quad x&=\;&a\cos\omega_0(t-t_1),\notag\\[1ex] \kern{-2em}\text{or}\notag\\ \label{Eq:I:21:6} &(\text{b})\quad x&=\;&a\cos\,(\omega_0t+\Delta)\\[1ex] \kern{-2em}\text{or}\notag\\ &(\text{c})\quad x&=\;&A\cos\omega_0t+B\sin\omega_0t.\notag \end{alignat} Some of the quantities in (21.6) have names: $\omega_0$ is called the angular frequency; it is the number of radians by which the phase changes in a second. That is determined by the differential equation. The other constants are not determined by the equation, but by how the motion is started. Of these constants, $a$ measures the maximum displacement attained by the mass, and is called the amplitude of oscillation. The constant $\Delta$ is sometimes called the phase of the oscillation, but that is a confusion, because other people call $\omega_0t + \Delta$ the phase, and say the phase changes with time. We might say that $\Delta$ is a phase shift from some defined zero. Let us put it differently. Different $\Delta$’s correspond to motions in different phases. That is true, but whether we want to call $\Delta$ the phase, or not, is another question.
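That the three forms in (21.6) describe one and the same motion can be checked numerically. A sketch with arbitrary sample values of $a$, $\Delta$, and $\omega_0$; the conversions $A = a\cos\Delta$, $B = -a\sin\Delta$, and $t_1 = -\Delta/\omega_0$ come from the text.

```python
import math

# The three equivalent forms of the general solution, Eq. (21.6).
omega0, a, Delta = 1.7, 2.5, 0.6        # sample values (assumed)
A = a * math.cos(Delta)
B = -a * math.sin(Delta)
t1 = -Delta / omega0                     # phase shift expressed as a time shift

for t in [0.0, 0.3, 1.1, 4.2]:
    form_a = a * math.cos(omega0 * (t - t1))
    form_b = a * math.cos(omega0 * t + Delta)
    form_c = A * math.cos(omega0 * t) + B * math.sin(omega0 * t)
    print(form_a, form_b, form_c)        # identical, up to rounding error
```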
21–3 Harmonic motion and circular motion

The fact that cosines are involved in the solution of Eq. (21.2) suggests that there might be some relationship to circles. This is artificial, of course, because there is no circle actually involved in the linear motion—it just goes up and down. We may point out that we have, in fact, already solved that differential equation when we were studying the mechanics of circular motion. If a particle moves in a circle with a constant speed $v$, the radius vector from the center of the circle to the particle turns through an angle whose size is proportional to the time. If we call this angle $\theta= vt/R$ (Fig. 21–2) then $d\theta/dt = \omega_0 = v/R$. We know that there is an acceleration $a = v^2/R = \omega_0^2R$ toward the center. Now we also know that the position $x$, at a given moment, is the radius of the circle times $\cos\theta$, and that $y$ is the radius times $\sin\theta$: \begin{equation*} x=R\cos\theta,\quad y=R\sin\theta. \end{equation*} Now what about the acceleration? What is the $x$-component of acceleration, $d^2x/dt^2$? We have already worked that out geometrically; it is the magnitude of the acceleration times the cosine of the projection angle, with a minus sign because it is toward the center. \begin{equation} \label{Eq:I:21:7} a_x=-a\cos\theta=-\omega_0^2R\cos\theta=-\omega_0^2x. \end{equation} In other words, when a particle is moving in a circle, the horizontal component of its motion has an acceleration which is proportional to the horizontal displacement from the center. Of course we also have the solution for motion in a circle: $x = R \cos \omega_0t$. Equation (21.7) does not depend upon the radius of the circle, so for a circle of any radius, one finds the same equation for a given $\omega_0$.
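Equation (21.7) can be checked numerically: take $x(t) = R\cos\omega_0 t$ for a point on a circle and compare a finite-difference acceleration against $-\omega_0^2 x$. The values of $R$ and $\omega_0$ below are arbitrary sample choices.

```python
import math

# For uniform circular motion, x = R*cos(omega0*t); its acceleration
# should be a_x = -omega0**2 * x  (Eq. 21.7), independent of R.
R, omega0 = 2.0, 1.3                 # sample radius and angular frequency (assumed)
x = lambda t: R * math.cos(omega0 * t)

h = 1e-5                             # small interval for the central difference
for t in [0.0, 0.7, 2.1]:
    ax = (x(t + h) - 2 * x(t) + x(t - h)) / h**2   # approximate x''(t)
    print(ax, -omega0**2 * x(t))     # the two agree to rounding error
```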
Thus, for several reasons, we expect that the displacement of a mass on a spring will turn out to be proportional to $\cos\omega_0t$, and will, in fact, be exactly the same motion as we would see if we looked at the $x$-component of the position of an object rotating in a circle with angular velocity $\omega_0$. As a check on this, one can devise an experiment to show that the up-and-down motion of a mass on a spring is the same as that of a point going around in a circle. In Fig. 21–3 an arc light projected on a screen casts shadows of a crank pin on a shaft and of a vertically oscillating mass, side by side. If we let go of the mass at the right time from the right place, and if the shaft speed is carefully adjusted so that the frequencies match, each should follow the other exactly. One can also check the numerical solution we obtained earlier with the cosine function, and see whether that agrees very well. Here we may point out that because uniform motion in a circle is so closely related mathematically to oscillatory up-and-down motion, we can analyze oscillatory motion in a simpler way if we imagine it to be a projection of something going in a circle. In other words, although the distance $y$ means nothing in the oscillator problem, we may still artificially supplement Eq. (21.2) with another equation using $y$, and put the two together. If we do this, we will be able to analyze our one-dimensional oscillator with circular motions, which is a lot easier than having to solve a differential equation. The trick in doing this is to use complex numbers, a procedure we shall introduce in the next chapter.
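The "supplement with $y$" idea above can be previewed in a few lines. One choice of companion equation (an assumption here, anticipating the complex-number treatment of the next chapter) is the pair $dx/dt=-\omega_0 y$, $dy/dt=\omega_0 x$, which keeps the point $(x,y)$ moving uniformly around a circle; its $x$-coordinate then traces the oscillator's $\cos\omega_0 t$.

```python
import math

# Step the pair (x, y) around the circle with small Euler steps and
# check that x alone reproduces the oscillator solution cos(omega0*t).
omega0, dt = 1.0, 1e-4
x, y, t = 1.0, 0.0, 0.0          # start at x = 1 with no velocity
while t < 2.0:
    x, y = x - omega0 * y * dt, y + omega0 * x * dt   # rotate a little
    t += dt
print(x, math.cos(omega0 * t))   # nearly equal, up to the stepping error
```

Stepping two first-order equations this way is often easier in practice than attacking the second-order equation head on, which is the simplification the text is pointing at.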