21. The Harmonic Oscillator

21-4 Initial conditions
Now let us consider what determines the constants $A$ and $B$, or $a$ and $\Delta$. Of course these are determined by how we start the motion. If we start the motion with just a small displacement, that is one type of oscillation; if we start with an initial displacement and then push up when we let go, we get still a different motion. The constants $A$ and $B$, or $a$ and $\Delta$, or any other way of putting it, are determined, of course, by the way the motion started, not by any other features of the situation. These are called the initial conditions. We would like to connect the initial conditions with the constants. Although this can be done using any one of the forms (21.6), it turns out to be easiest if we use Eq. (21.6c). Suppose that at $t= 0$ we have started with an initial displacement $x_0$ and a certain velocity $v_0$. This is the most general way we can start the motion. (We cannot specify the acceleration with which it started, true, because that is determined by the spring, once we specify $x_0$.) Now let us calculate $A$ and $B$. We start with the equation for $x$, \begin{equation*} x = A \cos \omega_0t + B \sin \omega_0t. \end{equation*} Since we shall later need the velocity also, we differentiate $x$ and obtain \begin{equation*} v=-\omega_0 A \sin \omega_0t + \omega_0 B \cos \omega_0t. \end{equation*} These expressions are valid for all $t$, but we have special knowledge about $x$ and $v$ at $t= 0$. So if we put $t= 0$ into these equations, on the left we get $x_0$ and $v_0$, because that is what $x$ and $v$ are at $t = 0$; also, we know that the cosine of zero is unity, and the sine of zero is zero. Therefore we get \begin{equation*} x_0=A\cdot 1 + B\cdot 0=A \end{equation*} and \begin{equation*} v_0=-\omega_0 A\cdot 0 + \omega_0 B\cdot 1 = \omega_0 B. \end{equation*} So for this particular case we find that \begin{equation*} A=x_0,\quad B=v_0/\omega_0. \end{equation*} From these values of $A$ and $B$, we can get $a$ and $\Delta$ if we wish. 
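The connection between the initial conditions and the constants is easy to check numerically. Here is a minimal Python sketch (the values of $x_0$, $v_0$, and $\omega_0$ are illustrative, not from the text):

```python
import math

# x(t) = A cos(w0 t) + B sin(w0 t), with A = x0 and B = v0 / w0
# as derived above.  The numbers below are made up for illustration.
def sho_solution(x0, v0, w0):
    A = x0
    B = v0 / w0
    x = lambda t: A * math.cos(w0 * t) + B * math.sin(w0 * t)
    v = lambda t: -w0 * A * math.sin(w0 * t) + w0 * B * math.cos(w0 * t)
    return x, v

x, v = sho_solution(x0=1.0, v0=2.0, w0=3.0)
print(x(0.0), v(0.0))  # recovers the initial displacement and velocity
```

Evaluating $x$ and $v$ at $t=0$ returns exactly the initial conditions we started from, which is the whole point of the derivation.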
That is the end of our solution, but there is one physically interesting thing to check, and that is the conservation of energy. Since there are no frictional losses, energy ought to be conserved. Let us use the formula \begin{equation*} x =a\cos\,(\omega_0t+\Delta); \end{equation*} then \begin{equation*} v =-\omega_0 a\sin\,(\omega_0t+\Delta). \end{equation*} Now let us find out what the kinetic energy $T$ is, and what the potential energy $U$ is. The potential energy at any moment is $\tfrac{1}{2}kx^2$, where $x$ is the displacement and $k$ is the constant of the spring. If we substitute for $x$, using our expression above, we get \begin{equation*} U=\tfrac{1}{2}kx^2=\tfrac{1}{2}ka^2\cos^2\,(\omega_0t+\Delta). \end{equation*} Of course the potential energy is not constant; the potential never becomes negative, naturally—there is always some energy in the spring, but the amount of energy fluctuates with $x$. The kinetic energy, on the other hand, is $\tfrac{1}{2}mv^2$, and by substituting for $v$ we get \begin{equation*} T=\tfrac{1}{2}mv^2=\tfrac{1}{2}m\omega_0^2a^2\sin^2\,(\omega_0t+\Delta). \end{equation*} Now the kinetic energy is zero when $x$ is at the maximum, because then there is no velocity; on the other hand, it is maximal when $x$ is passing through zero, because then it is moving fastest. This variation of the kinetic energy is just the opposite of that of the potential energy. But the total energy ought to be a constant. If we note that $k = m\omega_0^2$, we see that \begin{equation*} T+U=\tfrac{1}{2}m\omega_0^2a^2[\cos^2\,(\omega_0t+\Delta)+ \sin^2\,(\omega_0t+\Delta)]= \tfrac{1}{2}m\omega_0^2a^2. \end{equation*} The energy is dependent on the square of the amplitude; if we have twice the amplitude, we get an oscillation which has four times the energy. 
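The conservation check can also be done numerically: the sum $T+U$ comes out the same at every instant. A sketch in Python, with illustrative values for $m$, $\omega_0$, $a$, and $\Delta$:

```python
import math

# Verify T + U = (1/2) m w0^2 a^2 at several instants for
# x = a cos(w0 t + d).  All parameter values are illustrative.
m, w0, a, d = 2.0, 3.0, 1.5, 0.7
k = m * w0**2  # spring constant, k = m w0^2

def energy(t):
    x = a * math.cos(w0 * t + d)
    v = -w0 * a * math.sin(w0 * t + d)
    return 0.5 * m * v**2 + 0.5 * k * x**2

total = 0.5 * m * w0**2 * a**2
for t in [0.0, 0.3, 1.1, 2.9]:
    assert abs(energy(t) - total) < 1e-9  # constant in time
print(total)
```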
The average potential energy is half the maximum and, therefore, half the total, and the average kinetic energy is likewise half the total energy.
21-5 Forced oscillations
Next we shall discuss the forced harmonic oscillator, i.e., one in which there is an external driving force acting. The equation then is the following: \begin{equation} \label{Eq:I:21:8} m\,d^2x/dt^2=-kx+F(t). \end{equation} We would like to find out what happens in these circumstances. The external driving force can have various kinds of functional dependence on the time; the first one that we shall analyze is very simple—we shall suppose that the force is oscillating: \begin{equation} \label{Eq:I:21:9} F(t)=F_0\cos\omega t. \end{equation} Notice, however, that this $\omega$ is not necessarily $\omega_0$: we have $\omega$ under our control; the forcing may be done at different frequencies. So we try to solve Eq. (21.8) with the special force (21.9). What is the solution of (21.8)? One special solution (we shall discuss the more general cases later) is \begin{equation} \label{Eq:I:21:10} x=C\cos\omega t, \end{equation} where the constant $C$ is to be determined. In other words, we might suppose that if we kept pushing back and forth, the mass would follow back and forth in step with the force. We can try it anyway. So we put (21.10) and (21.9) into (21.8), and get \begin{equation} \label{Eq:I:21:11} -m\omega^2C\cos\omega t=-m\omega_0^2C\cos\omega t+F_0\cos\omega t. \end{equation} We have also put in $k = m\omega_0^2$, so that we will understand the equation better at the end. Now because the cosine appears everywhere, we can divide it out, and that shows that (21.10) is, in fact, a solution, provided we pick $C$ just right. The answer is that $C$ must be \begin{equation} \label{Eq:I:21:12} C=\frac{F_0}{m(\omega_0^2-\omega^2)}. \end{equation} That is, the mass oscillates at the same frequency as the force, but with an amplitude which depends on the frequency of the force, and also upon the frequency of the natural motion of the oscillator. It means, first, that if $\omega$ is very small compared with $\omega_0$, then the displacement and the force are in the same direction. 
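Equation (21.12) is easy to explore numerically. A minimal Python sketch (the values of $F_0$, $m$, $\omega_0$, and $\omega$ are illustrative):

```python
# Steady-state amplitude of the driven oscillator, Eq. (21.12):
# C = F0 / (m (w0^2 - w^2)).
def amplitude(F0, m, w0, w):
    return F0 / (m * (w0**2 - w**2))

# Below resonance the mass moves with the force (C > 0);
# above resonance it moves opposite to the force (C < 0).
print(amplitude(1.0, 1.0, 2.0, 1.0))  # w < w0: positive
print(amplitude(1.0, 1.0, 2.0, 3.0))  # w > w0: negative
```

Pushing $\omega$ toward $\omega_0$ makes the denominator shrink and the amplitude blow up, which is the resonance discussed next.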
On the other hand, if we shake it back and forth very fast, then (21.12) tells us that $C$ is negative if $\omega$ is above the natural frequency $\omega_0$ of the harmonic oscillator. (We will call $\omega_0$ the natural frequency of the harmonic oscillator, and $\omega$ the applied frequency.) At very high frequency the denominator may become very large, and there is then not much amplitude. Of course the solution we have found is the solution only if things are started just right, for otherwise there is a part which usually dies out after a while. This other part is called the transient response to $F(t)$, while (21.10) and (21.12) are called the steady-state response. According to our formula (21.12), a very remarkable thing should also occur: if $\omega$ is almost exactly the same as $\omega_0$, then $C$ should approach infinity. So if we adjust the frequency of the force to be “in time” with the natural frequency, then we should get an enormous displacement. This is well known to anybody who has pushed a child on a swing. It does not work very well to close our eyes and push at a certain speed at random. If we happen to get the right timing, then the swing goes very high, but if we have the wrong timing, then sometimes we may be pushing when we should be pulling, and so on, and it does not work. If we make $\omega$ exactly equal to $\omega_0$, we find that it should oscillate at an infinite amplitude, which is, of course, impossible. The reason it does not is that something goes wrong with the equation, there are some other frictional terms, and other forces, which are not in (21.8) but which occur in the real world. So the amplitude does not reach infinity for some reason; it may be that the spring breaks!
22. Algebra

22-1 Addition and multiplication
In our study of oscillating systems we shall have occasion to use one of the most remarkable, almost astounding, formulas in all of mathematics. From the physicist’s point of view we could bring forth this formula in two minutes or so, and be done with it. But science is as much for intellectual enjoyment as for practical utility, so instead of just spending a few minutes on this amazing jewel, we shall surround the jewel by its proper setting in the grand design of that branch of mathematics which is called elementary algebra. Now you may ask, “What is mathematics doing in a physics lecture?” We have several possible excuses: first, of course, mathematics is an important tool, but that would only excuse us for giving the formula in two minutes. On the other hand, in theoretical physics we discover that all our laws can be written in mathematical form; and that this has a certain simplicity and beauty about it. So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them. Another reason for looking more carefully at algebra now, even though most of us studied algebra in high school, is that that was the first time we studied it; all the equations were unfamiliar, and it was hard work, just as physics is now. Every so often it is a great pleasure to look back to see what territory has been covered, and what the great map or plan of the whole thing is. Perhaps some day somebody in the Mathematics Department will present a lecture on mechanics in such a way as to show what it was we were trying to learn in the physics course! 
The subject of algebra will not be developed from the point of view of a mathematician, exactly, because the mathematicians are mainly interested in how various mathematical facts are demonstrated, and how many assumptions are absolutely required, and what is not required. They are not so interested in the result of what they prove. For example, we may find the Pythagorean theorem quite interesting, that the sum of the squares of the sides of a right triangle is equal to the square of the hypotenuse; that is an interesting fact, a curiously simple thing, which may be appreciated without discussing the question of how to prove it, or what axioms are required. So, in the same spirit, we shall describe qualitatively, if we may put it that way, the system of elementary algebra. We say elementary algebra because there is a branch of mathematics called modern algebra in which some of the rules such as $ab = ba$, are abandoned, and it is still called algebra, but we shall not discuss that. To discuss this subject we start in the middle. We suppose that we already know what integers are, what zero is, and what it means to increase a number by one unit. You may say, “That is not in the middle!” But it is the middle from a mathematical standpoint, because we could go even further back and describe the theory of sets in order to derive some of these properties of integers. But we are not going in that direction, the direction of mathematical philosophy and mathematical logic, but rather in the other direction, from the assumption that we know what integers are and we know how to count. If we start with a certain number $a$, an integer, and we count successively one unit $b$ times, the number we arrive at we call $a+b$, and that defines addition of integers. Once we have defined addition, then we can consider this: if we start with nothing and add $a$ to it, $b$ times in succession, we call the result multiplication of integers; we call it $b$ times $a$. 
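These definitions can be transcribed directly into code. A small sketch, for non-negative integers only (the function names are mine): addition is counting by one unit, multiplication is repeated addition, and a power is repeated multiplication.

```python
# Addition of integers: start with a and count one unit, b times.
def add(a, b):
    for _ in range(b):
        a = a + 1
    return a

# Multiplication: start with nothing and add a, b times in succession.
def mul(a, b):
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

# Raising to a power: start with 1 and multiply by a, b times.
def power(a, b):
    result = 1
    for _ in range(b):
        result = mul(result, a)
    return result

print(add(3, 4), mul(3, 4), power(2, 3))  # 7 12 8
```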
Now we can also have a succession of multiplications: if we start with $1$ and multiply by $a$, $b$ times in succession, we call that raising to a power: $a^b$. Now as a consequence of these definitions it can be easily shown that all of the following relationships are true: \begin{equation} \begin{alignedat}{4} &(\text{a})&\quad &a+b=b+a&\quad\quad &(\text{b})&\quad &a+(b+c)=(a+b)+c\\ &(\text{c})&\quad &ab=ba&\quad\quad &(\text{d})&\quad &a(b+c)=ab+ac\\ &(\text{e})&\quad &(ab)c=a(bc)&\quad\quad &(\text{f})&\quad &(ab)^c=a^cb^c\\ &(\text{g})&\quad &a^ba^c=a^{(b+c)}&\quad\quad &(\text{h})&\quad &(a^b)^c=a^{(bc)}\\ &(\text{i})&\quad &a+0=a&\quad\quad &(\text{j})&\quad &a\cdot 1=a\\ &(\text{k})&\quad &a^1=a \end{alignedat} \label{Eq:I:22:1} \end{equation} These results are well known and we shall not belabor the point; we merely list them. Of course, $1$ and $0$ have special properties; for example, $a + 0$ is $a$, $a$ times $1= a$, and $a$ to the first power is $a$. In this discussion we must also assume a few other properties like continuity and ordering, which are very hard to define; we will let the rigorous theory do it. Furthermore, it is definitely true that we have written down too many “rules”; some of them may be deducible from the others, but we shall not worry about such matters.
22-2 The inverse operations
In addition to the direct operations of addition, multiplication, and raising to a power, we have also the inverse operations, which are defined as follows. Let us assume that $a$ and $c$ are given, and that we wish to find what values of $b$ satisfy such equations as $a + b = c$, $ab = c$, $b^a = c$. If $a + b= c$, $b$ is defined as $c - a$, which is called subtraction. The operation called division is also clear: if $ab = c$, then $b = c/a$ defines division—a solution of the equation $ab = c$ “backwards.” Now if we have a power $b^a = c$ and we ask ourselves, “What is $b$?,” it is called the $a$th root of $c$: $b = \sqrt[a]{c}$. For instance, if we ask ourselves the following question, “What integer, raised to the third power, equals $8$?,” then the answer is called the cube root of $8$; it is $2$. Because $b^a$ and $a^b$ are not equal, there are two inverse problems associated with powers, and the other inverse problem would be, “To what power must we raise $2$ to get $8$?” This is called taking the logarithm. If $a^b = c$, we write $b = \log_ac$. The fact that it has a cumbersome notation relative to the others does not mean that it is any less elementary, at least as applied to integers, than the other processes. Although logarithms come late in an algebra class, in practice they are, of course, just as simple as roots; they are just a different kind of solution of an algebraic equation. 
The direct and inverse operations are summarized as follows: \begin{equation} \begin{alignedat}{5} &(\text{a})&&\quad \text{addition}&&\quad &&(\text{a}')&&\quad \text{subtraction}\\ & &&\quad a+b=c&&\quad && &&\quad b=c-a\\ &(\text{b})&&\quad \text{multiplication}&&\quad &&(\text{b}')&&\quad \text{division}\\ & &&\quad ab=c&&\quad && &&\quad b=c/a\\ &(\text{c})&&\quad \text{power}&&\quad &&(\text{c}')&&\quad \text{root}\\ & &&\quad b^a=c&&\quad && &&\quad b=\sqrt[a]{c}\\ &(\text{d})&&\quad \text{power}&&\quad &&(\text{d}')&&\quad \text{logarithm}\\ & &&\quad a^b=c&&\quad && &&\quad b=\log_ac\\ \end{alignedat} \label{Eq:I:22:2} \end{equation} Now here is the idea. These relationships, or rules, are correct for integers, since they follow from the definitions of addition, multiplication, and raising to a power. We are going to discuss whether or not we can broaden the class of objects which $a$, $b$, and $c$ represent so that they will obey these same rules, although the processes for $a + b$, and so on, will not be definable in terms of the direct action of adding $1$, for instance, or successive multiplications by integers.
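The four inverse operations of table (22.2) can be checked in a few lines of Python, for a case where the answers come out exact (the values of $a$ and $b$ are illustrative):

```python
import math

# Each inverse operation recovers b from the result of the
# corresponding direct operation, as in table (22.2).
a, b = 2, 3
c_add, c_mul = a + b, a * b
c_root, c_log = b**a, a**b  # the two distinct power problems

assert c_add - a == b                    # (a') subtraction
assert c_mul / a == b                    # (b') division
assert round(c_root ** (1 / a)) == b     # (c') a-th root
assert round(math.log(c_log, a)) == b    # (d') logarithm, base a
print("all four inverses recover b =", b)
```

Note the two distinct power problems: the root undoes $b^a=c$, the logarithm undoes $a^b=c$.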
22-3 Abstraction and generalization
When we try to solve simple algebraic equations using all these definitions, we soon discover some insoluble problems, such as the following. Suppose that we try to solve the equation $b = 3 - 5$. That means, according to our definition of subtraction, that we must find a number which, when added to $5$, gives $3$. And of course there is no such number, because we consider only positive integers; this is an insoluble problem. However, the plan, the great idea, is this: abstraction and generalization. From the whole structure of algebra, rules plus integers, we abstract the original definitions of addition and multiplication, but we leave the rules (22.1) and (22.2), and assume these to be true in general on a wider class of numbers, even though they are originally derived on a smaller class. Thus, rather than using integers symbolically to define the rules, we use the rules as the definition of the symbols, which then represent a more general kind of number. As an example, by working with the rules alone we can show that $3 - 5 = 0 - 2$. In fact we can show that one can make all subtractions, provided we define a whole set of new numbers: $0 - 1$, $0 - 2$, $0 - 3$, $0 - 4$, and so on, called the negative integers. Then we may use all the other rules, like $a(b + c) = ab + ac$ and so forth, to find what the rules are for multiplying negative numbers, and we will discover, in fact, that all of the rules can be maintained with negative as well as positive integers. So we have increased the range of objects over which the rules work, but the meaning of the symbols is different. One cannot say, for instance, that $-2$ times $5$ really means to add $5$ together successively $-2$ times. That means nothing. But nevertheless everything will work out all right according to the rules. An interesting problem comes up in taking powers. Suppose that we wish to discover what $a^{(3-5)}$ means. We know only that $3 - 5$ is a solution of the problem, $(3 - 5) + 5 = 3$. 
Knowing that, we know that $a^{(3-5)}a^5 = a^3$. Therefore $a^{(3-5)} = a^3/a^5$, by the definition of division. With a little more work, this can be reduced to $1/a^2$. So we find that the negative powers are the reciprocals of the positive powers, but $1/a^2$ is a meaningless symbol, because if $a$ is a positive or negative integer, the square of it can be greater than $1$, and we do not yet know what we mean by $1$ divided by a number greater than $1$! Onward! The great plan is to continue the process of generalization; whenever we find another problem that we cannot solve we extend our realm of numbers. Consider division: we cannot find a number which is an integer, even a negative integer, which is equal to the result of dividing $3$ by $5$. But if we suppose that all fractional numbers also satisfy the rules, then we can talk about multiplying and adding fractions, and everything works as well as it did before. Take another example of powers: what is $a^{3/5}$? We know only that $(3/5)5 = 3$, since that was the definition of $3/5$. So we know also that $(a^{(3/5)})^5 =$ $a^{(3/5)(5)}=$ $a^3$, because this is one of the rules. Then by the definition of roots we find that $a^{(3/5)} = \sqrt[5]{a^3}$. In this way, then, we can define what we mean by putting fractions in the various symbols, by using the rules themselves to help us determine the definition—it is not arbitrary. It is a remarkable fact that all the rules still work for positive and negative integers, as well as for fractions! We go on in the process of generalization. Are there any other equations we cannot solve? Yes, there are. For example, it is impossible to solve this equation: $b =$ $2^{1/2} =$ $\sqrt{2}$. It is impossible to find a number which is rational (a fraction) whose square is equal to $2$. It is very easy for us in modern days to answer this question. 
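Both generalizations derived in this passage can be confirmed numerically; a quick sketch (the value of $a$ is illustrative):

```python
# Negative powers are reciprocals: a^(3-5) = a^3 / a^5 = 1/a^2.
# Fractional powers are roots:     a^(3/5) = fifth root of a^3.
a = 2.0
assert abs(a**(3 - 5) - 1 / a**2) < 1e-12
assert abs(a**(3 / 5) - (a**3) ** (1 / 5)) < 1e-12
print(a**(3 - 5), a**(3 / 5))  # 0.25 and the fifth root of 8
```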
We know the decimal system, and so we have no difficulty in appreciating the meaning of an unending decimal as a type of approximation to the square root of $2$. Historically, this idea presented great difficulty to the Greeks. To really define precisely what is meant here requires that we add some substance of continuity and ordering, and it is, in fact, quite the most difficult step in the processes of generalization just at this point. It was made, formally and rigorously, by Dedekind. However, without worrying about the mathematical rigor of the thing, it is quite easy to understand that what we mean is that we are going to find a whole sequence of approximate fractions, perfect fractions (because any decimal, when stopped somewhere, is of course rational), which just keeps on going, getting closer and closer to the desired result. That is good enough for what we wish to discuss, and it permits us to involve ourselves in irrational numbers, and to calculate things like the square root of $2$ to any accuracy that we desire, with enough work.
22-4 Approximating irrational numbers
The next problem comes with what happens with the irrational powers. Suppose that we want to define, for instance, $10^{\sqrt{2}}$. In principle, the answer is simple enough. If we approximate the square root of $2$ to a certain number of decimal places, then the power is rational, and we can take the approximate root, using the above method, and get an approximation to $10^{\sqrt{2}}$. Then we may run it up a few more decimal places (it is again rational), take the appropriate root, this time a much higher root because there is a much bigger denominator in the fraction, and get a better approximation. Of course we are going to get some enormously high roots involved here, and the work is quite difficult. How can we cope with this problem? In the computations of square roots, cube roots, and other small roots, there is an arithmetical process available by which we can get one decimal place after another. But the amount of labor needed to calculate irrational powers and the logarithms that go with them (the inverse problem) is so great that there is no simple arithmetical process we can use. Therefore tables have been built up which permit us to calculate these powers, and these are called the tables of logarithms, or the tables of powers, depending on which way the table is set up. It is merely a question of saving time; if we must raise some number to an irrational power, we can look it up rather than having to compute it. Of course, such a computation is just a technical problem, but it is an interesting one, and of great historical value. In the first place, not only do we have the problem of solving $x=10^{\sqrt{2}}$, but we also have the problem of solving $10^x = 2$, or $x = \log_{10} 2$. This is not a problem where we have to define a new kind of number for the result, it is merely a computational problem. The answer is simply an irrational number, an unending decimal, not a new kind of a number. 
Let us now discuss the problem of calculating solutions of such equations. The general idea is really very simple. If we could calculate $10^1$, and $10^{4/10}$, and $10^{1/100}$, and $10^{4/1000}$ and so on, and multiply them all together, we would get $10^{1.414\dots}$ or $10^{\sqrt{2}}$, and that is the general idea on which things work. But instead of calculating $10^{1/10}$ and so on, we shall calculate $10^{1/2}$, $10^{1/4}$, and so on. Before we start, we should explain why we make so much work with $10$, instead of some other number. Of course, we realize that logarithm tables are of great practical utility, quite aside from the mathematical problem of taking roots, since with any base at all, \begin{equation} \label{Eq:I:22:3} \log_b(ac)=\log_ba+\log_bc. \end{equation} We are all familiar with the fact that one can use this fact in a practical way to multiply numbers if we have a table of logarithms. The only question is, with what base $b$ shall we compute? It makes no difference what base is used; we can use the same principle all the time, and if we are using logarithms to any particular base, we can find logarithms to any other base merely by a change in scale, a multiplying factor. If we multiply Eq. (22.3) by $61$, it is just as true, and if we had a table of logs with a base $b$, and somebody else multiplied all of our table by $61$, there would be no essential difference. Suppose that we know the logarithms of all the numbers to the base $b$. In other words, we can solve the equation $b^a = c$ for any $c$ because we have a table. The problem is to find the logarithm of the same number $c$ to some other base, let us say the base $x$. We would like to solve $x^{a'} = c$. It is easy to do, because we can always write $x = b^t$, which defines $t$, knowing $x$ and $b$. As a matter of fact, $t = \log_b x$. Then if we put that in and solve for $a'$, we see that $(b^t)^{a'} = b^{a't} = c$. In other words, $ta'$ is the logarithm of $c$ in base $b$. 
Thus $a' = a/t$; logs to base $x$ are just $1/t$, which is a constant, times the logs to the base $b$. Therefore any log table is equivalent to any other log table if we multiply by a constant, and the constant is $1/\log_b x$. This permits us to choose a particular base, and for convenience we take the base $10$. (The question may arise as to whether there is any natural base, any base in which things are somehow simpler, and we shall try to find an answer to that later. At the moment we shall just use the base $10$.) Now let us see how to calculate logarithms. We begin by computing successive square roots of $10$, by cut and try. The results are shown in Table 22–1. The powers of $10$ are given in the first column, and the result, $10^s$, is given in the third column. Thus $10^1 = 10$. The one-half power of $10$ we can easily work out, because that is the square root of $10$, and there is a known, simple process for taking square roots of any number. Using this process, we find the first square root to be $3.16228$. What good is that? It already tells us something, it tells us how to take $10^{0.5}$, so we now know at least one logarithm: if we happen to need the logarithm of $3.16228$, we know the answer is close to $0.50000$. But we must do a little bit better than that; we clearly need more information. So we take the square root again, and find $10^{1/4}$, which is $1.77828$. Now we have the logarithm of more numbers than we had before: $1.250$ is the logarithm of $17.78$ and, incidentally, if it happens that somebody asks for $10^{0.75}$, we can get it, because that is $10^{(0.5+0.25)}$; it is therefore the product of the second and third numbers. If we can get enough numbers in column $s$ to be able to make up almost any number, then by multiplying the proper things in column 3, we can get $10$ to any power; that is the plan. So we evaluate ten successive square roots of $10$, and that is the main work which is involved in the calculations. 
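The first rows of a table like Table 22–1 can be regenerated with nothing but repeated square roots; a sketch in Python:

```python
import math

# Successive square roots of 10: s = 1, 1/2, 1/4, ... and 10^s,
# each row obtained from the last by one square root.
s, value = 1.0, 10.0
for _ in range(10):
    print(f"s = {s:<12g} 10^s = {value:.5f}")
    s /= 2
    value = math.sqrt(value)  # 10^(s/2) = sqrt(10^s)
```

The second row reproduces $10^{1/2}=3.16228$ and the third $10^{1/4}=1.77828$, as in the text.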
Why don’t we keep on going for more and more accuracy? Because we begin to notice something. When we raise $10$ to a very small power, we get $1$ plus a small amount. The reason for this is clear, because we are going to have to take the $1000$th power of $10^{1/1000}$ to get back to $10$, so we had better not start with too big a number; it has to be close to $1$. What we notice is that the small numbers that are added to $1$ begin to look as though we are merely dividing by $2$ each time; we see $1815$ becomes $903$, then $450$, $225$; so it is clear that, to an excellent approximation, if we take another root, we shall get $1.00112$ something, and rather than actually take all the square roots, we guess at the ultimate limit. When we take a small fraction $\Delta/1024$ as $\Delta$ approaches zero, what will the answer be? Of course it will be some number close to $1+0.0022511\,\Delta$. Not exactly $1+0.0022511\,\Delta$, however—we can get a better value by the following trick: we subtract the $1$, and then divide by the power $s$. This ought to correct all the excesses to the same value. We see that they are very closely equal. At the top of the table they are not equal, but as they come down, they get closer and closer to a constant value. What is the value? Again we look to see how the series is going, how it has changed with $s$. It changed by $211$, by $104$, by $53$, by $26$. These changes are obviously half of each other, very closely, as we go down. Therefore, if we kept going, the changes would be $13$, $7$, $3$, $2$ and $1$, more or less, or a total of $26$. Thus we have only $26$ more to go, and so we find that the true number is $2.3025$. (Actually, we shall later see that the exact number should be $2.3026$, but to keep it realistic, we shall not alter anything in the arithmetic.) From this table we can now calculate any power of $10$, by compounding the power out of $1024$ths. 
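The trick described here, subtracting the $1$ and dividing by the power $s$, can be checked in a line of Python; the constant the table is converging toward is $\ln 10 = 2.3026\dots$ (the text's $2.3025$ with the correction of $26$ applied):

```python
import math

# The "excess" (10^s - 1)/s settles down to a constant as s shrinks.
s = 1.0 / 1024
ratio = (10**s - 1) / s
print(ratio)  # already close to 2.3026
assert abs(ratio - math.log(10)) < 0.01  # the limit is ln(10)
```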
Let us now actually calculate a logarithm, because the process we shall use is where logarithm tables actually come from. The procedure is shown in Table 22–2, and the numerical values are shown in Table 22–1 (columns 2 and 3). Suppose we want the logarithm of $2$. That is, we want to know to what power we must raise $10$ to get $2$. Can we raise $10$ to the $1/2$ power? No; that is too big. In other words, we can see that the answer is going to be bigger than $1/4$, and less than $1/2$. Let us take the factor $10^{1/4}$ out; we divide $2$ by $1.778\dots$, and get $1.124\dots$, and so on, and now we know that we have taken away $0.250000$ from the logarithm. The number $1.124\dots$, is now the number whose logarithm we need. When we are finished we shall add back the $1/4$, or $256/1024$. Now we look in the table for the next number just below $1.124\dots$, and that is $1.074607$. We therefore divide by $1.074607$ and get $1.046598$. From that we discover that $2$ can be made up of a product of numbers that are in Table 22–1, as follows: \begin{equation*} 2 = (1.77828)(1.074607)(1.036633)(1.0090350)(1.000573). \end{equation*} There was one factor $(1.000573)$ left over, naturally, which is beyond the range of our table. To get the logarithm of this factor, we use our result that $10^{\Delta/1024} \approx 1+ 2.3025 \Delta/1024$. We find $\Delta= 0.254$. Therefore our answer is $10$ to the following power: $(256 + 32 + 16 + 4 + 0.254)/1024$. Adding those together, we get $308.254/1024$. Dividing, we get $0.30103$, so we know that $\log_{10} 2 = 0.30103$, which happens to be right to $5$ figures! This is how logarithms were originally computed by Mr. Briggs of Halifax, in 1620. He said, “I computed successively $54$ square roots of $10$.” We know he really computed only the first $27$, because the rest of them can be obtained by this trick with $\Delta$. 
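The same factor-peeling procedure is easy to automate. A sketch in Python (for brevity the table entries $10^{1/2^k}$ are computed directly with `10**s` rather than by successive square roots):

```python
import math

# Compute log10(2) Briggs-style: repeatedly divide out the largest
# table entry 10^(1/2^k) that still fits, accumulating the exponent.
x, log2 = 2.0, 0.0
s = 0.5
for _ in range(20):
    factor = 10**s          # stand-in for a row of the table
    while x >= factor:      # peel this factor out as often as it fits
        x /= factor
        log2 += s
    s /= 2                  # move to the next, smaller table entry
print(log2)  # ~0.30103, right to 5 figures as in the text
```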
His work involved calculating the square root of $10$ twenty-seven times, which is not much more than the ten times we did; however, it was more work because he calculated to sixteen decimal places, and then reduced his answer to fourteen when he published it, so that there were no rounding errors. He made tables of logarithms to fourteen decimal places by this method, which is quite tedious. But all logarithm tables for three hundred years were borrowed from Mr. Briggs’ tables by reducing the number of decimal places. Only in modern times, with the WPA and computing machines, have new tables been independently computed. There are much more efficient methods of computing logarithms today, using certain series expansions. In the above process, we discovered something rather interesting, and that is that for very small powers $\epsilon$ we can calculate $10^\epsilon$ easily; we have discovered that $10^\epsilon = 1+ 2.3025\epsilon$, by sheer numerical analysis. Of course this also means that $10^{n/2.3025} = 1+ n$ if $n$ is very small. Now logarithms to any other base are merely multiples of logarithms to the base $10$. The base $10$ was used only because we have $10$ fingers, and the arithmetic of it is easy, but if we ask for a mathematically natural base, one that has nothing to do with the number of fingers on human beings, we might try to change our scale of logarithms in some convenient and natural manner, and the method which people have chosen is to redefine the logarithms by multiplying all the logarithms to the base $10$ by $2.3025\dots$ This then corresponds to using some other base, and this is called the natural base, or base $e$. Note that $\log_e (1 + n) \approx n$, or $e^n \approx 1+ n$ as $n\to0$. It is easy enough to find out what $e$ is: $e = 10^{1/2.3025\dots}$ or $10^{0.434310\dots}$, an irrational power. 
Our table of the successive square roots of $10$ can be used to compute, not just logarithms, but also $10$ to any power, so let us use it to calculate this natural base $e$. For convenience we transform $0.434310\dots$ into $444.73/1024$. Now, $444.73$ is $256 + 128 + 32 + 16 + 8 + 4 + 0.73$. Therefore $e$, since its exponent is this sum, will be a product of the numbers \begin{align*} (1.&77828)\!(1.33352)\!(1.074607)\!(1.036633)\;\times\\ &(1.018152)\!(1.009035)\!(1.001643)= 2.7184. \end{align*} (The only problem is the last one, which is $0.73$, and which is not in the table, but we know that if $\Delta$ is small enough, the answer is $1 + 0.0022486\,\Delta$.) When we multiply all these together, we get $2.7184$ (it should be $2.7183$, but it is good enough). The use of such tables, then, is the way in which irrational powers and the logarithms of irrational numbers are all calculated. That takes care of the irrationals.
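The same table computes $10$ to any fractional power, which is how $e$ was just obtained. Here is a sketch of that calculation; the coefficient $0.0022486$ is the table's own small-$\Delta$ factor, approximately $\ln 10/1024$.

```python
import math

# The same table of successive square roots of 10:
roots = []
x = 10.0
for _ in range(10):
    x = math.sqrt(x)
    roots.append(x)                    # roots[k] = 10 ** (1 / 2**(k+1))

def pow10(frac):
    """10**frac for 0 <= frac < 1, decomposing frac into 1/1024 units."""
    units = frac * 1024
    result = 1.0
    for k, r in enumerate(roots):
        step = 1024 / 2 ** (k + 1)     # 512, 256, ..., 1 units
        if units >= step:
            units -= step
            result *= r
    # leftover Delta < 1 unit: 10**(Delta/1024) is about 1 + 0.0022486*Delta
    return result * (1 + 0.0022486 * units)

# e = 10**(0.434310...), decomposed as 444.73/1024 in the text
print(pow10(444.73 / 1024))   # about 2.7184, as in the text
```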
22–5 Complex numbers
Now it turns out that after all that work we still cannot solve every equation! For instance, what is the square root of $-1$? Suppose we have to solve $x^2 =-1$. The square of no rational, of no irrational, of nothing that we have discovered so far, is equal to $-1$. So we again have to generalize our numbers to a still wider class. Let us suppose that a specific solution of $x^2 =-1$ is called something; we shall call it $i$. By definition, $i$ has the property that its square is $-1$. That is about all we are going to say about it; of course, there is more than one root of the equation $x^2 =-1$. Someone could write $i$, but another could say, “No, I prefer $-i$. My $i$ is minus your $i$.” It is just as good a solution, and since the only definition that $i$ has is that $i^2=-1$, it must be true that any equation we can write is equally true if the sign of $i$ is changed everywhere. This is called taking the complex conjugate. Now we are going to make up numbers by adding successive $i$’s, and multiplying $i$’s by numbers, and adding other numbers, and so on, according to all of our rules. In this way we find that numbers can all be written in the form $p + iq$, where $p$ and $q$ are what we call real numbers, i.e., the numbers we have been defining up until now. The number $i$ is called the unit imaginary number. Any real multiple of $i$ is called pure imaginary. The most general number, $a$, is of the form $p+iq$ and is called a complex number. Things do not get any worse if, for instance, we multiply two such numbers, let us say $(r + is)(p + iq)$. Then, using the rules, we get \begin{align} (r + is)(p + iq) &= rp + r(iq) + (is)p + (is)(iq)\notag\\[1ex] &= rp + i(rq) + i(sp) + (ii)(sq)\notag\\[1ex] \label{Eq:I:22:4} &= (rp - sq) + i(rq + sp), \end{align} since $ii =$ $i^2 =$ $-1$. Therefore all the numbers that now belong in the rules (22.1) have this mathematical form. Now you say, “This can go on forever!
We have defined powers of imaginaries and all the rest, and when we are all finished, somebody else will come along with another equation which cannot be solved, like $x^6 + 3x^2 =-2$. Then we have to generalize all over again!” But it turns out that with this one more invention, just the square root of $-1$, every algebraic equation can be solved! This is a fantastic fact, which we must leave to the Mathematics Department to prove. The proofs are very beautiful and very interesting, but certainly not self-evident. In fact, the most obvious supposition is that we are going to have to invent again and again and again. But the greatest miracle of all is that we do not. This is the last invention. After this invention of complex numbers, we find that the rules still work with complex numbers, and we are finished inventing new things. We can find the complex power of any complex number, we can solve any equation that is written algebraically, in terms of a finite number of those symbols. We do not find any new numbers. The square root of $i$, for instance, has a definite result, it is not something new; and $i^i$ is something. We will discuss that now. We have already discussed multiplication, and addition is also easy; if we add two complex numbers, $(p + iq) + (r + is)$, the answer is $(p + r) + i(q + s)$. Now we can add and multiply complex numbers. But the real problem, of course, is to compute complex powers of complex numbers. It turns out that the problem is actually no more difficult than computing complex powers of real numbers. So let us concentrate now on the problem of calculating $10$ to a complex power, not just an irrational power, but $10^{(r+is)}$. Of course, we must at all times use our rules (22.1) and (22.2). Thus \begin{equation} \label{Eq:I:22:5} 10^{(r+is)}=10^r10^{is}. \end{equation} But $10^r$ we already know how to compute, and we can always multiply anything by anything else; therefore the problem is to compute only $10^{is}$. 
Let us call it some complex number, $x + iy$. Problem: given $s$, find $x$, find $y$. Now if \begin{equation*} 10^{is}=x+iy, \end{equation*} then the complex conjugate of this equation must also be true, so that \begin{equation*} 10^{-is}=x-iy. \end{equation*} (Thus we see that we can deduce a number of things without actually computing anything, by using our rules.) We deduce another interesting thing by multiplying these together: \begin{equation} \label{Eq:I:22:6} 10^{is}10^{-is}=10^0=1=(x+iy)(x-iy)=x^2+y^2. \end{equation} Thus if we find $x$, we have $y$ also. Now the problem is how to compute $10$ to an imaginary power. What guide is there? We may work over our rules until we can go no further, but here is a reasonable guide: if we can compute it for any particular $s$, we can get it for all the rest. If we know $10^{is}$ for any one $s$ and then we want it for twice that $s$, we can square the number, and so on. But how can we find $10^{is}$ for even one special value of $s$? To do so we shall make one additional assumption, which is not quite in the category of all the other rules, but which leads to reasonable results and permits us to make progress: when the power is small, we shall suppose that the “law” $10^\epsilon = 1+ 2.3025\epsilon$ is right, as $\epsilon$ gets very small, not only for real $\epsilon$, but for complex $\epsilon$ as well. Therefore, we begin with the supposition that this law is true in general, and that tells us that $10^{is} = 1+ 2.3025\cdot is$, for $s\to0$. So we assume that if $s$ is very small, say one part in $1024$, we have a rather good approximation to $10^{is}$. Now we make a table by which we can compute all the imaginary powers of $10$, that is, compute $x$ and $y$. It is done as follows. The first power we start with is the $1/1024$ power, which we presume is very nearly $1+ 2.3025i/1024$.
Thus we start with \begin{equation} \label{Eq:I:22:7} 10^{i/1024}=1.00000+0.0022486i, \end{equation} and if we keep multiplying the number by itself, we can get to a higher imaginary power. In fact, we may just reverse the procedure we used in making our logarithm table, and calculate the square, $4$th power, $8$th power, etc., of (22.7), and thus build up the values shown in Table 22–3. We notice an interesting thing, that the $x$ numbers are positive at first, but then swing negative. We shall look into that a little bit more in a moment. But first we may be curious to find for what number $s$ the real part of $10^{is}$ is zero. The $y$-value would be $1$, and so we would have $10^{is} = 1i$, or $is = \log_{10} i$. As an example of how to use this table, just as we calculated $\log_{10} 2$ before, let us now use Table 22–3 to find $\log_{10} i$. Which of the numbers in Table 22–3 do we have to multiply together to get a pure imaginary result? After a little trial and error, we discover that to reduce $x$ the most, it is best to multiply “$512$” by “$128$.” This gives $0.13056 + 0.99159i$. Then we discover that we should multiply this by a number whose imaginary part is about equal to the size of the real part we are trying to remove. Thus we choose “$64$” whose $y$-value is $0.14349$, since that is closest to $0.13056$. This then gives $-0.01308 + 1.00008i$. Now we have overshot, and must divide by $0.99996 + 0.00900i$. How do we do that? By changing the sign of $i$ and multiplying by $0.99996 - 0.00900i$ (which works if $x^2 + y^2 = 1$). Continuing in this way, we find that the entire power to which $10$ must be raised to give $i$ is $i(512 + 128 + 64 - 4 - 2 + 0.20)/1024$, or $698.20i/1024$. If we raise $10$ to that power, we can get $i$. Therefore $\log_{10} i = 0.68184i$.
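This hunt for $\log_{10} i$ is easy to replay with modern complex arithmetic. The sketch below (a reconstruction, not the text's table itself) starts from Eq. (22.7), builds Table 22–3 by successive squaring, and multiplies together the entries found by trial and error in the text:

```python
import cmath, math

# Starting point, Eq. (22.7): 10**(i/1024) is about 1 + 0.0022486i
z = 1.0 + 0.0022486j

# Build Table 22-3 by successive squaring: table[k] = 10**(i * 2**k / 1024)
table = [z]
for _ in range(10):
    table.append(table[-1] ** 2)

# The text's trial-and-error combination: exponents 512 + 128 + 64 - 4 - 2,
# in units of 1/1024 (the final +0.20 closes the remaining small gap)
result = table[9] * table[7] * table[6] / (table[2] * table[1])
print(result)                  # very nearly i

# the text's log10(i) = 698.20/1024 = 0.68184, vs the exact (pi/2)/ln(10)
print(698.20 / 1024, math.pi / 2 / math.log(10))
```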
22–6 Imaginary exponents
To further investigate the subject of taking complex imaginary powers, let us look at the powers of $10$ taking successive powers, not doubling the power each time, in order to follow Table 22–3 further and to see what happens to those minus signs. This is shown in Table 22–4, in which we take $10^{i/8}$, and just keep multiplying it. We see that $x$ decreases, passes through zero, swings almost to $-1$ (if we could get in between $p = 10$ and $p = 11$ it would obviously swing to $-1$), and swings back. The $y$-value is going back and forth too. In Fig. 22–1 the dots represent the numbers that appear in Table 22–4, and the lines are just drawn to help you visually. So we see that the numbers $x$ and $y$ oscillate; $10^{is}$ repeats itself, it is a periodic thing, and as such, it is easy enough to explain, because if a certain power is $i$, then the fourth power of that would be $i^2$ squared. It would be $+1$ again, and therefore, since $10^{0.68i}$ is equal to $i$, by taking the fourth power we discover that $10^{2.72i}$ is equal to $+1$. Therefore, if we wanted $10^{3.00i}$, for instance, we could write it as $10^{2.72i}$ times $10^{0.28i}$. In other words, it has a period, it repeats. Of course, we recognize what the curves look like! They look like the sine and cosine, and we shall call them, for a while, the algebraic sine and algebraic cosine. However, instead of using the base $10$, we shall put them into our natural base, which only changes the horizontal scale; so we denote $2.3025s$ by $t$, and write $10^{is} = e^{it}$, where $t$ is a real number. Now $e^{it} = x + iy$, and we shall write this as the algebraic cosine of $t$ plus $i$ times the algebraic sine of $t$. Thus \begin{equation} \label{Eq:I:22:8} e^{it}=\operatorname{\underline{\cos}}t+ i\operatorname{\underline{\sin}}t. \end{equation} What are the properties of $\operatorname{\underline{\cos}} t$ and $\operatorname{\underline{\sin}} t$? 
First, we know, for instance, that $x^2 + y^2$ must be $1$; we have proved that before, and it is just as true for base $e$ as for base $10$. Therefore $\operatorname{\underline{\cos}}^2 t+ \operatorname{\underline{\sin}}^2 t= 1$. We also know that, for small $t$, $e^{it} = 1+it$, and therefore $\operatorname{\underline{\cos}} t$ is nearly $1$, and $\operatorname{\underline{\sin}} t$ is nearly $t$, and so it goes, that all of the various properties of these remarkable functions, which come from taking imaginary powers, are the same as the sine and cosine of trigonometry. Is the period the same? Let us find out. $e$ to what power is equal to $i$? What is the logarithm of $i$ to the base $e$? We worked it out before; in the base $10$ it was $0.68184i$, but when we change our logarithmic scale to $e$, we have to multiply by $2.3025$, and if we do that it comes out $1.570$. So this will be called “algebraic $\pi/2$.” But, we see, it differs from the regular $\pi/2$ by only one unit in the last decimal place, and that, of course, is the result of errors in our arithmetic! So we have created two new functions in a purely algebraic manner, the cosine and the sine, which belong to algebra, and only to algebra. We wake up at the end to discover the very functions that are natural to geometry. So there is a connection, ultimately, between algebra and geometry. We summarize with this, the most remarkable formula in mathematics: \begin{equation} \label{Eq:I:22:9} e^{i\theta}=\cos\theta+i\sin\theta. \end{equation} This is our jewel. We may relate the geometry to the algebra by representing complex numbers in a plane; the horizontal position of a point is $x$, the vertical position of a point is $y$ (Fig. 22–2). We represent every complex number, $x+iy$.
Then if the radial distance to this point is called $r$ and the angle is called $\theta$, the algebraic law is that $x+iy$ is written in the form $re^{i\theta}$, where the geometrical relationships between $x$, $y$, $r$, and $\theta$ are as shown. This, then, is the unification of algebra and geometry. When we began this chapter, armed only with the basic notions of integers and counting, we had little idea of the power of the processes of abstraction and generalization. Using the set of algebraic “laws,” or properties of numbers, Eq. (22.1), and the definitions of inverse operations (22.2), we have been able here, ourselves, to manufacture not only numbers but useful things like tables of logarithms, powers, and trigonometric functions (for these are what the imaginary powers of real numbers are), all merely by extracting ten successive square roots of ten!
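These numerical claims are easy to verify directly today, using the standard library's complex exponential rather than our hand-built table of square roots:

```python
import cmath, math

# Eq. (22.9): e**(i*theta) = cos(theta) + i*sin(theta)
for theta in (0.1, 1.0, 1.570, math.pi):
    z = cmath.exp(1j * theta)
    assert abs(z.real - math.cos(theta)) < 1e-12
    assert abs(z.imag - math.sin(theta)) < 1e-12
    assert abs(abs(z) - 1.0) < 1e-12    # cos^2 + sin^2 = 1

# for small t, e**(it) is about 1 + it
t = 1e-4
assert abs(cmath.exp(1j * t) - (1 + 1j * t)) < 1e-8

# the "algebraic pi/2", 2.3025 * 0.68184, against the geometric pi/2
print(2.3025 * 0.68184, math.pi / 2)    # about 1.5699 vs 1.5708
```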
23–1 Complex numbers and harmonic motion
In the present chapter we shall continue our discussion of the harmonic oscillator and, in particular, the forced harmonic oscillator, using a new technique in the analysis. In the preceding chapter we introduced the idea of complex numbers, which have real and imaginary parts and which can be represented on a diagram in which the ordinate represents the imaginary part and the abscissa represents the real part. If $a$ is a complex number, we may write it as $a = a_r + ia_i$, where the subscript $r$ means the real part of $a$, and the subscript $i$ means the imaginary part of $a$. Referring to Fig. 23–1, we see that we may also write a complex number $a = x + iy$ in the form $x + iy = re^{i\theta}$, where $r^2 = x^2 + y^2 = (x + iy)(x - iy) = aa\cconj$. (The complex conjugate of $a$, written $a\cconj$, is obtained by reversing the sign of $i$ in $a$.) So we shall represent a complex number in either of two forms, a real plus an imaginary part, or a magnitude $r$ and a phase angle $\theta$, so-called. Given $r$ and $\theta$, $x$ and $y$ are clearly $r\cos\theta$ and $r\sin\theta$ and, in reverse, given a complex number $x + iy$, $r = \sqrt{x^2 + y^2}$ and $\tan\theta= y/x$, the ratio of the imaginary to the real part. We are going to apply complex numbers to our analysis of physical phenomena by the following trick. We have examples of things that oscillate; the oscillation may have a driving force which is a certain constant times $\cos\omega t$. Now such a force, $F = F_0\cos\omega t$, can be written as the real part of a complex number $F = F_0e^{i\omega t}$ because $e^{i\omega t} = \cos\omega t + i\sin\omega t$. The reason we do this is that it is easier to work with an exponential function than with a cosine. So the whole trick is to represent our oscillatory functions as the real parts of certain complex functions. 
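In code, the two representations and the conversions between them look like this (Python's `cmath` module happens to provide both directly; this is just the $x=r\cos\theta$, $y=r\sin\theta$ relationship):

```python
import cmath, math

a = 3.0 + 4.0j                     # a = x + iy

# magnitude and phase: r = sqrt(x^2 + y^2), tan(theta) = y/x
r = abs(a)
theta = cmath.phase(a)
assert abs(r - math.sqrt(a.real**2 + a.imag**2)) < 1e-12
assert abs(math.tan(theta) - a.imag / a.real) < 1e-12

# and back again: x + iy = r * e**(i*theta)
assert abs(r * cmath.exp(1j * theta) - a) < 1e-12
print(r, theta)                    # 5.0 and about 0.9273 radians
```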
The complex number $F$ that we have so defined is not a real physical force, because no force in physics is really complex; actual forces have no imaginary part, only a real part. We shall, however, speak of the “force” $F_0e^{i\omega t}$, but of course the actual force is the real part of that expression. Let us take another example. Suppose we want to represent a force which is a cosine wave that is out of phase, delayed by a phase $\Delta$. This, of course, would be the real part of $F_0e^{i(\omega t-\Delta)}$, but exponentials being what they are, we may write $e^{i(\omega t-\Delta)}= e^{i\omega t}e^{-i\Delta}$. Thus we see that the algebra of exponentials is much easier than that of sines and cosines; this is the reason we choose to use complex numbers. We shall often write \begin{equation} \label{Eq:I:23:1} F=F_0e^{-i\Delta}e^{i\omega t}=\hat{F}e^{i\omega t}. \end{equation} We write a little caret ($\hat{\enspace}$) over the $F$ to remind ourselves that this quantity is a complex number: here the number is \begin{equation*} \hat{F}=F_0e^{-i\Delta}. \end{equation*} Now let us solve an equation, using complex numbers, to see whether we can work out a problem for some real case. For example, let us try to solve \begin{equation} \label{Eq:I:23:2} \frac{d^2x}{dt^2}+\frac{kx}{m}=\frac{F}{m}=\frac{F_0}{m}\cos\omega t, \end{equation} where $F$ is the force which drives the oscillator and $x$ is the displacement. Now, absurd though it may seem, let us suppose that $x$ and $F$ are actually complex numbers, for a mathematical purpose only. That is to say, $x$ has a real part and an imaginary part times $i$, and $F$ has a real part and an imaginary part times $i$.
Now if we had a solution of (23.2) with complex numbers, and substituted the complex numbers in the equation, we would get \begin{equation*} \frac{d^2(x_r+ix_i)}{dt^2}+\frac{k(x_r+ix_i)}{m}= \frac{F_r+iF_i}{m} \end{equation*} or \begin{equation*} \frac{d^2x_r}{dt^2}+\frac{kx_r}{m}+i\biggl( \frac{d^2x_i}{dt^2}+\frac{kx_i}{m}\biggr)= \frac{F_r}{m}+\frac{iF_i}{m}. \end{equation*} Now, since if two complex numbers are equal, their real parts must be equal and their imaginary parts must be equal, we deduce that the real part of $x$ satisfies the equation with the real part of the force. We must emphasize, however, that this separation into a real part and an imaginary part is not valid in general, but is valid only for equations which are linear, that is, for equations in which $x$ appears in every term only in the first power or the zeroth power. For instance, if there were in the equation a term $\lambda x^2$, then when we substitute $x_r + ix_i$, we would get $\lambda(x_r + ix_i)^2$, but when separated into real and imaginary parts this would yield $\lambda(x_r^2 - x_i^2)$ as the real part and $2i\lambda x_rx_i$ as the imaginary part. So we see that the real part of the equation would not involve just $\lambda x_r^2$, but also $-\lambda x_i^2$. In this case we get a different equation than the one we wanted to solve, with $x_i$, the completely artificial thing we introduced in our analysis, mixed in. Let us now try our new method for the problem of the forced oscillator, that we already know how to solve. We want to solve Eq. (23.2) as before, but we say that we are going to try to solve \begin{equation} \label{Eq:I:23:3} \frac{d^2x}{dt^2}+\frac{kx}{m}=\frac{\hat{F}e^{i\omega t}}{m}, \end{equation} where $\hat{F}e^{i\omega t}$ is a complex number. Of course $x$ will also be complex, but remember the rule: take the real part to find out what is really going on. So we try to solve (23.3) for the forced solution; we shall discuss other solutions later. 
The forced solution has the same frequency as the applied force, and has some amplitude of oscillation and some phase, and so it can be represented also by some complex number $\hat{x}$ whose magnitude represents the swing of $x$ and whose phase represents the time delay in the same way as for the force. Now a wonderful feature of an exponential function $x=\hat{x}e^{i\omega t}$ is that $dx/dt = i\omega x$. When we differentiate an exponential function, we bring down the exponent as a simple multiplier. The second derivative does the same thing, it brings down another $i\omega$, and so it is very simple to write immediately, by inspection, what the equation is for $x$: every time we see a differentiation, we simply multiply by $i\omega$. (Differentiation is now as easy as multiplication! This idea of using exponentials in linear differential equations is almost as great as the invention of logarithms, in which multiplication is replaced by addition. Here differentiation is replaced by multiplication.) Thus our equation becomes \begin{equation} \label{Eq:I:23:4} (i\omega)^2\hat{x}+(k\hat{x}/m)=\hat{F}/m. \end{equation} (We have cancelled the common factor $e^{i\omega t}$.) See how simple it is! Differential equations are immediately converted, by sight, into mere algebraic equations; we virtually have the solution by sight, that \begin{equation*} \hat{x}=\frac{\hat{F}/m}{(k/m)-\omega^2}, \end{equation*} since $(i\omega)^2 =-\omega^2$. This may be slightly simplified by substituting $k/m = \omega_0^2$, which gives \begin{equation} \label{Eq:I:23:5} \hat{x}=\hat{F}/m(\omega_0^2-\omega^2). \end{equation} This, of course, is the solution we had before; for since $m(\omega_0^2 - \omega^2)$ is a real number, the phase angles of $\hat{F}$ and of $\hat{x}$ are the same (or perhaps $180^\circ$ apart, if $\omega^2 > \omega_0^2$), as advertised previously. 
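As a concrete check of this "differentiation becomes multiplication by $i\omega$" recipe, here is a small sketch with invented values of $m$, $k$, $F_0$, and $\omega$; it forms $\hat{x}$ from Eq. (23.5) and verifies that the real part satisfies the original equation of motion (23.2):

```python
import math

# Eq. (23.2): d2x/dt2 + (k/m) x = (F0/m) cos(wt).  Try x = xh * e^(iwt);
# each d/dt becomes a factor iw, so (iw)^2 xh + (k/m) xh = F0/m, giving
# Eq. (23.5): xh = F0 / (m * (w0^2 - w^2)).  Values below are invented.
m, k = 2.0, 50.0                  # assumed mass and spring constant
w0 = math.sqrt(k / m)             # natural frequency, w0^2 = k/m
F0, w = 10.0, 3.0                 # assumed drive amplitude and frequency

xh = F0 / (m * (w0**2 - w**2))    # here xh = 0.3125

# the physical motion is the real part: x(t) = xh * cos(wt);
# check that it satisfies the original equation of motion
t = 0.7
x = xh * math.cos(w * t)
accel = -w**2 * xh * math.cos(w * t)      # d2x/dt2
assert abs(accel + (k / m) * x - (F0 / m) * math.cos(w * t)) < 1e-12
```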
The magnitude of $\hat{x}$, which measures how far it oscillates, is related to the size of the $\hat{F}$ by the factor $1/m(\omega_0^2-\omega^2)$, and this factor becomes enormous when $\omega$ is nearly equal to $\omega_0$. So we get a very strong response when we apply the right frequency $\omega$ (if we hold a pendulum on the end of a string and shake it at just the right frequency, we can make it swing very high).
23–2 The forced oscillator with damping
That, then, is how we analyze oscillatory motion with the more elegant mathematical technique. But the elegance of the technique is not at all exhibited in a problem that can be solved easily by other methods. It is only exhibited when one applies it to more difficult problems. Let us therefore solve another, more difficult problem, which furthermore adds a relatively realistic feature to the previous one. Equation (23.5) tells us that if the frequency $\omega$ were exactly equal to $\omega_0$, we would have an infinite response. Actually, of course, no such infinite response occurs, because some other things, like friction, which we have so far ignored, limit the response. Let us therefore add to Eq. (23.2) a friction term. Ordinarily such a problem is very difficult because of the character and complexity of the frictional term. There are, however, many circumstances in which the frictional force is proportional to the speed with which the object moves. An example of such friction is the friction for slow motion of an object in oil or a thick liquid. There is no force when it is just standing still, but the faster it moves the faster the oil has to go past the object, and the greater is the resistance. So we shall assume that there is, in addition to the terms in (23.2), another term, a resistance force proportional to the velocity: $F_f =-c\,dx/dt$. It will be convenient, in our mathematical analysis, to write the constant $c$ as $m$ times $\gamma$ to simplify the equation a little. This is just the same trick we use with $k$ when we replace it by $m\omega_0^2$, just to simplify the algebra. Thus our equation will be \begin{equation} \label{Eq:I:23:6} m(d^2x/dt^2)+c(dx/dt)+kx=F \end{equation} or, writing $c = m\gamma$ and $k = m\omega_0^2$ and dividing out the mass $m$, \begin{equation*} \label{Eq:I:23:6a} (d^2x/dt^2)+\gamma(dx/dt)+\omega_0^2x=F/m. \tag{23.6a} \end{equation*} Now we have the equation in the most convenient form to solve.
If $\gamma$ is very small, that represents very little friction; if $\gamma$ is very large, there is a tremendous amount of friction. How do we solve this new linear differential equation? Suppose that the driving force is equal to $F_0\cos\,(\omega t+\Delta)$; we could put this into (23.6a) and try to solve it, but we shall instead solve it by our new method. Thus we write $F$ as the real part of $\hat{F}e^{i\omega t}$ and $x$ as the real part of $\hat{x}e^{i\omega t}$, and substitute these into Eq. (23.6a). It is not even necessary to do the actual substituting, for we can see by inspection that the equation would become \begin{equation} \label{Eq:I:23:7} [(i\omega)^2\hat{x}+\gamma(i\omega)\hat{x}+\omega_0^2\hat{x}] e^{i\omega t}=(\hat{F}/m)e^{i\omega t}. \end{equation} [As a matter of fact, if we tried to solve Eq. (23.6a) by our old straightforward way, we would really appreciate the magic of the “complex” method.] If we divide by $e^{i\omega t}$ on both sides, then we can obtain the response $\hat{x}$ to the given force $\hat{F}$; it is \begin{equation} \label{Eq:I:23:8} \hat{x}=\hat{F}/m(\omega_0^2-\omega^2+i\gamma\omega). \end{equation} Thus again $\hat{x}$ is given by $\hat{F}$ times a certain factor. There is no technical name for this factor, no particular letter for it, but we may call it $R$ for discussion purposes: \begin{equation} R=\frac{1}{m(\omega_0^2-\omega^2+i\gamma\omega)}\notag \end{equation} and \begin{equation} \label{Eq:I:23:9} \hat{x}=\hat{F}R. \end{equation} (Although the letters $\gamma$ and $\omega_0$ are in very common use, this $R$ has no particular name.) This factor $R$ can either be written as $p+iq$, or as a certain magnitude $\rho$ times $e^{i\theta}$. If it is written as a certain magnitude times $e^{i\theta}$, let us see what it means. Now $\hat{F}= F_0e^{i\Delta}$, and the actual force $F$ is the real part of $F_0e^{i\Delta}e^{i\omega t}$, that is, $F_0 \cos\,(\omega t + \Delta)$. Next, Eq. 
(23.9) tells us that $\hat{x}$ is equal to $\hat{F}R$. So, writing $R = \rho e^{i\theta}$ as another name for $R$, we get \begin{equation*} \hat{x}=R\hat{F}=\rho e^{i\theta}F_0e^{i\Delta}=\rho F_0e^{i(\theta+\Delta)}. \end{equation*} Finally, going even further back, we see that the physical $x$, which is the real part of the complex $\hat{x}e^{i\omega t}$, is equal to the real part of $\rho F_0e^{i(\theta+\Delta)}e^{i\omega t}$. But $\rho$ and $F_0$ are real, and the real part of $e^{i(\theta+\Delta+\omega t)}$ is simply $\cos\,(\omega t + \Delta + \theta)$. Thus \begin{equation} \label{Eq:I:23:10} x=\rho F_0\cos\,(\omega t+\Delta+\theta). \end{equation} This tells us that the amplitude of the response is the magnitude of the force $F$ multiplied by a certain magnification factor, $\rho$; this gives us the “amount” of oscillation. It also tells us, however, that $x$ is not oscillating in phase with the force, which has the phase $\Delta$, but is shifted by an extra amount $\theta$. Therefore $\rho$ and $\theta$ represent the size of the response and the phase shift of the response. Now let us work out what $\rho$ is. If we have a complex number, the square of the magnitude is equal to the number times its complex conjugate; thus \begin{equation} \begin{aligned} \rho^2&=\frac{1}{m^2(\omega_0^2-\omega^2+i\gamma\omega) (\omega_0^2-\omega^2-i\gamma\omega)}\\[1ex] &=\frac{1}{m^2[(\omega_0^2-\omega^2)^2+\gamma^2\omega^2]}. \end{aligned} \label{Eq:I:23:11} \end{equation} In addition, the phase angle $\theta$ is easy to find, for if we write \begin{equation} 1/R=1/\rho e^{i\theta}=(1/\rho)e^{-i\theta}= m(\omega_0^2-\omega^2+i\gamma\omega),\notag \end{equation} we see that \begin{equation} \label{Eq:I:23:12} \tan\theta=-\gamma\omega/(\omega_0^2-\omega^2). \end{equation} It is minus because $\tan (-\theta) =-\tan\theta$. A negative value for $\theta$ results for all $\omega$, and this corresponds to the displacement $x$ lagging the force $F$. 
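The magnitude and phase of $R$ can be checked numerically; the sketch below (parameter values invented for the check) forms $R$ directly and compares it against Eqs. (23.11) and (23.12):

```python
import cmath, math

# Eq. (23.9): xh = F * R with R = 1/(m*(w0^2 - w^2 + i*gamma*w)).
# Write R = rho * e^(i*theta); parameter values are invented.
m, w0, gamma = 1.0, 2.0, 0.1
w = 1.9                            # drive a little below resonance

R = 1 / (m * (w0**2 - w**2 + 1j * gamma * w))
rho, theta = abs(R), cmath.phase(R)

# Eq. (23.11): rho^2 = 1 / (m^2 * ((w0^2 - w^2)^2 + gamma^2 * w^2))
assert abs(rho**2 - 1 / (m**2 * ((w0**2 - w**2)**2 + gamma**2 * w**2))) < 1e-12
# Eq. (23.12): tan(theta) = -gamma*w / (w0^2 - w^2)
assert abs(math.tan(theta) + gamma * w / (w0**2 - w**2)) < 1e-12
assert theta < 0                   # the displacement lags the force
```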
Figure 23–2 shows how $\rho^2$ varies as a function of frequency ($\rho^2$ is physically more interesting than $\rho$, because $\rho^2$ is proportional to the square of the amplitude, or more or less to the energy that is developed in the oscillator by the force). We see that if $\gamma$ is very small, then $1/(\omega_0^2 - \omega^2)^2$ is the most important term, and the response tries to go up toward infinity when $\omega$ equals $\omega_0$. Now the “infinity” is not actually infinite because if $\omega=\omega_0$, then $1/\gamma^2\omega^2$ is still there. The phase shift varies as shown in Fig. 23–3. In certain circumstances we get a slightly different formula than (23.8), also called a “resonance” formula, and one might think that it represents a different phenomenon, but it does not. The reason is that if $\gamma$ is very small the most interesting part of the curve is near $\omega = \omega_0$, and we may replace (23.8) by an approximate formula which is very accurate if $\gamma$ is small and $\omega$ is near $\omega_0$. Since $\omega_0^2 - \omega^2 = (\omega_0-\omega)(\omega_0 + \omega)$, if $\omega$ is near $\omega_0$ this is nearly the same as $2\omega_0(\omega_0 - \omega)$ and $\gamma\omega$ is nearly the same as $\gamma\omega_0$. Using these in (23.8), we see that $\omega_0^2-\omega^2 + i\gamma\omega \approx 2\omega_0(\omega_0-\omega+i\gamma/2)$, so that \begin{equation} \begin{gathered} \hat{x}\approx \hat{F}/2m\omega_0(\omega_0-\omega+i\gamma/2)\\[.5ex] \text{ if }\gamma\ll\omega_0\text{ and } \omega\approx\omega_0. \end{gathered} \label{Eq:I:23:13} \end{equation} It is easy to find the corresponding formula for $\rho^2$. It is \begin{equation*} \rho^2\approx 1/4m^2\omega_0^2[(\omega_0-\omega)^2+\gamma^2/4]. \end{equation*} We shall leave it to the student to show the following: if we call the maximum height of the curve of $\rho^2$ vs. 
$\omega$ one unit, and we ask for the width $\Delta\omega$ of the curve, at one half the maximum height, the full width at half the maximum height of the curve is $\Delta\omega=\gamma$, supposing that $\gamma$ is small. The resonance is sharper and sharper as the frictional effects are made smaller and smaller. As another measure of the width, some people use a quantity $Q$ which is defined as $Q = \omega_0/\gamma$. The narrower the resonance, the higher the $Q$: $Q= 1000$ means a resonance whose width is only one-thousandth of the frequency scale. The $Q$ of the resonance curve shown in Fig. 23–2 is $5$. The importance of the resonance phenomenon is that it occurs in many other circumstances, and so the rest of this chapter will describe some of these other circumstances.
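The claim about the width is easy to check numerically with the full formula (23.11); the values below are invented, with $\gamma\ll\omega_0$ as the approximation requires:

```python
import math

# Full formula (23.11) for rho^2; check that the full width of the
# resonance curve at half its maximum height is Delta-omega = gamma,
# and compute Q = w0/gamma.  Values are invented, with gamma << w0.
m, w0, gamma = 1.0, 10.0, 0.2

def rho2(w):
    return 1 / (m**2 * ((w0**2 - w**2)**2 + gamma**2 * w**2))

peak = rho2(w0)
for w in (w0 - gamma / 2, w0 + gamma / 2):     # predicted half-power points
    assert abs(rho2(w) / peak - 0.5) < 0.02    # half the maximum, nearly

print("Q =", w0 / gamma)   # Q = 50: width is 1/50 of the resonant frequency
```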
23–3 Electrical resonance
The simplest and broadest technical applications of resonance are in electricity. In the electrical world there are a number of objects which can be connected to make electric circuits. These passive circuit elements, as they are often called, are of three main types, although each one has a little bit of the other two mixed in. Before describing them in greater detail, let us note that the whole idea of our mechanical oscillator being a mass on the end of a spring is only an approximation. All the mass is not actually at the “mass”; some of the mass is in the inertia of the spring. Similarly, all of the spring is not at the “spring”; the mass itself has a little elasticity, and although it may appear so, it is not absolutely rigid, and as it goes up and down, it flexes ever so slightly under the action of the spring pulling it. The same thing is true in electricity. There is an approximation in which we can lump things into “circuit elements” which are assumed to have pure, ideal characteristics. It is not the proper time to discuss that approximation here; we shall simply assume that it is true in the circumstances. The three main kinds of circuit elements are the following. The first is called a capacitor (Fig. 23–4); an example is two plane metallic plates spaced a very small distance apart by an insulating material. When the plates are charged there is a certain voltage difference, that is, a certain difference in potential, between them. The same difference of potential appears between the terminals $A$ and $B$, because if there were any difference along the connecting wire, electricity would flow right away. So there is a certain voltage difference $V$ between the plates if there is a certain electric charge $+q$ and $-q$ on them, respectively.
Between the plates there will be a certain electric field; we have even found a formula for it (Chapters 13 and 14): \begin{equation} \label{Eq:I:23:14} V=\sigma d/\epsO=qd/\epsO A, \end{equation} where $d$ is the spacing and $A$ is the area of the plates. Note that the potential difference is a linear function of the charge. If we do not have parallel plates, but insulated electrodes which are of any other shape, the difference in potential is still precisely proportional to the charge, but the constant of proportionality may not be so easy to compute. However, all we need to know is that the potential difference across a capacitor is proportional to the charge: $V = q/C$; the proportionality constant is $1/C$, where $C$ is the capacitance of the object. The second kind of circuit element is called a resistor; it offers resistance to the flow of electrical current. It turns out that metallic wires and many other substances resist the flow of electricity in this manner: if there is a voltage difference across a piece of some substance, there exists an electric current $I= dq/dt$ that is proportional to the electric voltage difference: \begin{equation} \label{Eq:I:23:15} V=RI=R\,dq/dt \end{equation} The proportionality coefficient is called the resistance $R$. This relationship may already be familiar to you; it is Ohm’s law. If we think of the charge $q$ on a capacitor as being analogous to the displacement $x$ of a mechanical system, we see that the current, $I = dq/dt$, is analogous to velocity, $1/C$ is analogous to a spring constant $k$, and $R$ is analogous to the resistive coefficient $c=m\gamma$ in Eq. (23.6). Now it is very interesting that there exists another circuit element which is the analog of mass! This is a coil which builds up a magnetic field within itself when there is a current in it. A changing magnetic field develops in the coil a voltage that is proportional to $dI/dt$ (this is how a transformer works, in fact). 
The magnetic field is proportional to a current, and the induced voltage (so-called) in such a coil is proportional to the rate of change of the current: \begin{equation} \label{Eq:I:23:16} V=L\,dI/dt=L\,d^2q/dt^2. \end{equation} The coefficient $L$ is the self-inductance, and is analogous to the mass in a mechanical oscillating circuit. Suppose we make a circuit in which we have connected the three circuit elements in series (Fig. 23–5); then the voltage across the whole thing from $1$ to $2$ is the work done in carrying a charge through, and it consists of the sum of several pieces: across the inductor, $V_L = L\,d^2q/dt^2$; across the resistance, $V_R = R\,dq/dt$; across the capacitor, $V_C = q/C$. The sum of these is equal to the applied voltage, $V$: \begin{equation} \label{Eq:I:23:17} L\,d^2q/dt^2+R\,dq/dt+q/C=V(t). \end{equation} Now we see that this equation is exactly the same as the mechanical equation (23.6), and of course it can be solved in exactly the same manner. We suppose that $V(t)$ is oscillatory: we are driving the circuit with a generator with a pure sine wave oscillation. Then we can write our $V(t)$ as a complex $\hat{V}$ with the understanding that it must be ultimately multiplied by $e^{i\omega t}$, and the real part taken in order to find the true $V$. Likewise, the charge $q$ can thus be analyzed, and then in exactly the same manner as in Eq. (23.8) we write the corresponding equation: the second derivative of $q$ is $(i\omega)^2q$; the first derivative is $(i\omega)q$. Thus Eq. (23.17) translates to \begin{equation*} \biggl[L(i\omega)^2+R(i\omega)+\frac{1}{C}\biggr]\hat{q}=\hat{V} \end{equation*} or \begin{equation*} \hat{q}=\frac{\hat{V}} {L(i\omega)^2+R(i\omega)+\dfrac{1}{C}} \end{equation*} which we can write in the form \begin{equation} \label{Eq:I:23:18} \hat{q}=\hat{V}/L(\omega_0^2-\omega^2+i\gamma\omega), \end{equation} where $\omega_0^2 = 1/LC$ and $\gamma= R/L$. 
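Equation (23.18) can be evaluated directly as a numerical sketch; the component values $L$, $R$, $C$ and the drive amplitude $V_0$ below are illustrative assumptions, not values from the text.

```python
# Sketch: the complex charge amplitude of Eq. (23.18) for a series RLC
# circuit.  All component values here are assumed, for illustration only.
L, R, C = 0.5, 10.0, 2e-6        # henries, ohms, farads (assumed)
V0 = 1.0                          # drive amplitude in volts (assumed)

omega0 = (1.0 / (L * C)) ** 0.5   # resonant frequency, omega0^2 = 1/LC
gamma = R / L                     # damping constant, gamma = R/L

def q_hat(omega):
    """Complex charge amplitude, Eq. (23.18)."""
    return V0 / (L * (omega0**2 - omega**2 + 1j * gamma * omega))

# At resonance only the resistive term survives in the denominator,
# so the magnitude of the response is V0/(R*omega0):
print(abs(q_hat(omega0)))
# Off resonance the response is smaller:
print(abs(q_hat(2 * omega0)) < abs(q_hat(omega0)))
```

Plotting $\abs{\hat{q}(\omega)}$ against $\omega$ would trace out the same resonance curve as in the mechanical case, since the denominator is identical in form.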
It is exactly the same denominator as we had in the mechanical case, with exactly the same resonance properties! The correspondence between the electrical and mechanical cases is outlined in Table 23–1. We must mention a small technical point. In the electrical literature, a different notation is used. (From one field to another, the subject is not really any different, but the way of writing the notations is often different.) First, $j$ is commonly used instead of $i$ in electrical engineering, to denote $\sqrt{-1}$. (After all, $i$ must be the current!) Also, the engineers would rather have a relationship between $\hat{V}$ and $\hat{I}$ than between $\hat{V}$ and $\hat{q}$, just because they are more used to it that way. Thus, since $I= dq/dt = i\omega q$, we can just substitute $\hat{I}/i\omega$ for $\hat{q}$ and get \begin{equation} \label{Eq:I:23:19} \hat{V}=(i\omega L+R+1/i\omega C)\hat{I}=\hat{Z}\hat{I}. \end{equation} Another way is to rewrite Eq. (23.17), so that it looks more familiar; one often sees it written this way: \begin{equation} \label{Eq:I:23:20} L\,dI/dt+RI+(1/C)\int^tI\,dt=V(t). \end{equation} At any rate, we find the relation (23.19) between voltage $\hat{V}$ and current $\hat{I}$, which is just the same as (23.18) except divided by $i\omega$. The quantity $R + i\omega L + 1/i\omega C$ is a complex number, and is used so much in electrical engineering that it has a name: it is called the complex impedance, $\hat{Z}$. Thus we can write $\hat{V}=\hat{Z}\hat{I}$. The reason that the engineers like to do this is that they learned something when they were young: $V = RI$ for resistances, when they only knew about resistances and dc. Now they have become more educated and have ac circuits, so they want the equation to look the same. Thus they write $\hat{V}=\hat{Z}\hat{I}$, the only difference being that the resistance is replaced by a more complicated thing, a complex quantity. 
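The impedance of the series circuit is just the sum of the three element impedances, as a small sketch makes concrete. The component values are illustrative assumptions.

```python
# Sketch: complex impedance of the series RLC circuit, Eq. (23.19):
#   Z = R + i*omega*L + 1/(i*omega*C).
# Component values are assumed for illustration.
L, R, C = 0.5, 10.0, 2e-6

def Z(omega):
    """Series impedance R + iwL + 1/(iwC)."""
    return R + 1j * omega * L + 1.0 / (1j * omega * C)

omega0 = (1.0 / (L * C)) ** 0.5
# At resonance the inductive and capacitive reactances cancel,
# leaving a purely resistive impedance Z = R:
print(Z(omega0))
```

At any other frequency the impedance picks up an imaginary part, which is why the current and voltage are generally out of phase.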
So they insist that they cannot use what everyone else in the world uses for imaginary numbers, they have to use a $j$ for that; it is a miracle that they did not insist also that the letter $Z$ be an $R$! (Then they get into trouble when they talk about current densities, for which they also use $j$. The difficulties of science are to a large extent the difficulties of notations, the units, and all the other artificialities which are invented by man, not by nature.)
23–4 Resonance in nature
Although we have discussed the electrical case in detail, we could also bring up case after case in many fields, and show exactly how the resonance equation is the same. There are many circumstances in nature in which something is “oscillating” and in which the resonance phenomenon occurs. We said that in an earlier chapter; let us now demonstrate it. If we walk around our study, pulling books off the shelves and simply looking through them to find an example of a curve that corresponds to Fig. 23–2 and comes from the same equation, what do we find? Just to demonstrate the wide range obtained by taking the smallest possible sample, it takes only five or six books to produce quite a series of phenomena which show resonances. The first two are from mechanics, the first on a large scale: the atmosphere of the whole earth. If the atmosphere, which we suppose surrounds the earth evenly on all sides, is pulled to one side by the moon or, rather, squashed prolate into a double tide, and if we could then let it go, it would go sloshing up and down; it is an oscillator. This oscillator is driven by the moon, which is effectively revolving about the earth; any one component of the force, say in the $x$-direction, has a cosine component, and so the response of the earth’s atmosphere to the tidal pull of the moon is that of an oscillator. The expected response of the atmosphere is shown in Fig. 23–6, curve $b$ (curve $a$ is another theoretical curve under discussion in the book from which this is taken out of context). Now one might think that we only have one point on this resonance curve, since we only have the one frequency, corresponding to the rotation of the earth under the moon, which occurs at a period of $12.42$ hours—$12$ hours for the earth (the tide is a double bump), plus a little more because the moon is going around. But from the size of the atmospheric tides, and from the phase, the amount of delay, we can get both $\rho$ and $\theta$. 
From those we can get $\omega_0$ and $\gamma$, and thus draw the entire curve! This is an example of very poor science. From two numbers we obtain two numbers, and from those two numbers we draw a beautiful curve, which of course goes through the very point that determined the curve! It is of no use unless we can measure something else, and in the case of geophysics that is often very difficult. But in this particular case there is another thing which we can show theoretically must have the same timing as the natural frequency $\omega_0$: that is, if someone disturbed the atmosphere, it would oscillate with the frequency $\omega_0$. Now there was such a sharp disturbance in 1883; the Krakatoa volcano exploded and half the island blew off, and it made such a terrific explosion in the atmosphere that the period of oscillation of the atmosphere could be measured. It came out to $10\tfrac{1}{2}$ hours. The $\omega_0$ obtained from Fig. 23–6 comes out $10$ hours and $20$ minutes, so there we have at least one check on the reality of our understanding of the atmospheric tides. Next we go to the small scale of mechanical oscillation. This time we take a sodium chloride crystal, which has sodium ions and chlorine ions next to each other, as we described in an early chapter. These ions are electrically charged, alternately plus and minus. Now there is an interesting oscillation possible. Suppose that we could drive all the plus charges to the right and all the negative charges to the left, and let go; they would then oscillate back and forth, the sodium lattice against the chlorine lattice. How can we ever drive such a thing? That is easy, for if we apply an electric field on the crystal, it will push the plus charge one way and the minus charge the other way! So, by having an external electric field we can perhaps get the crystal to oscillate. The frequency of the electric field needed is so high, however, that it corresponds to infrared radiation! 
So we try to find a resonance curve by measuring the absorption of infrared light by sodium chloride. Such a curve is shown in Fig. 23–7. The abscissa is not frequency, but is given in terms of wavelength, but that is just a technical matter, of course, since for a wave there is a definite relation between frequency and wavelength; so it is really a frequency scale, and a certain frequency corresponds to the resonant frequency. But what about the width? What determines the width? There are many cases in which the width that is seen on the curve is not really the natural width $\gamma$ that one would have theoretically. There are two reasons why there can be a wider curve than the theoretical curve. If the objects do not all have the same frequency, as might happen if the crystal were strained in certain regions, so that in those regions the oscillation frequency were slightly different than in other regions, then what we have is many resonance curves on top of each other; so we apparently get a wider curve. The other kind of width is simply this: perhaps we cannot measure the frequency precisely enough. If we open the slit of the spectrometer fairly wide, then although we thought we had only one frequency, we actually had a certain range $\Delta\omega$, and we may not have the resolving power needed to see a narrow curve. Offhand, we cannot say whether the width in Fig. 23–7 is natural, or whether it is due to inhomogeneities in the crystal or the finite width of the slit of the spectrometer. Now we turn to a more esoteric example, and that is the swinging of a magnet. If we have a magnet, with north and south poles, in a constant magnetic field, the N end of the magnet will be pulled one way and the S end the other way, and there will in general be a torque on it, so it will vibrate about its equilibrium position, like a compass needle. However, the magnets we are talking about are atoms. 
Because these atoms have an angular momentum, the torque does not produce a simple motion in the direction of the field, but instead, of course, a precession. Now, looked at from the side, any one component is “swinging,” and we can disturb or drive that swinging and measure an absorption. The curve in Fig. 23–8 represents a typical such resonance curve. What has been done here is slightly different technically. The frequency of the lateral field that is used to drive this swinging is always kept the same, while we would have expected that the investigators would vary that and plot the curve. They could have done it that way, but technically it was easier for them to leave the frequency $\omega$ fixed, and change the strength of the constant magnetic field, which corresponds to changing $\omega_0$ in our formula. They have plotted the resonance curve against $\omega_0$. Anyway, this is a typical resonance with a certain $\omega_0$ and $\gamma$. Now we go still further. Our next example has to do with atomic nuclei. The motions of protons and neutrons in nuclei are oscillatory in certain ways, and we can demonstrate this by the following experiment. We bombard a lithium atom with protons, and we discover that a certain reaction, producing $\gamma$-rays, actually has a very sharp maximum typical of resonance. We note in Fig. 23–9, however, one difference from other cases: the horizontal scale is not a frequency, it is an energy! The reason is that in quantum mechanics what we think of classically as the energy will turn out to be really related to a frequency of a wave amplitude. When we analyze something which in simple large-scale physics has to do with a frequency, we find that when we do quantum-mechanical experiments with atomic matter, we get the corresponding curve as a function of energy. In fact, this curve is a demonstration of this relationship, in a sense. It shows that frequency and energy have some deep interrelationship, which of course they do. 
Now we turn to another example which also involves a nuclear energy level, but now a much, much narrower one. The $\omega_0$ in Fig. 23–10 corresponds to an energy of $100{,}000$ electron volts, while the width $\gamma$ is approximately $10^{-5}$ electron volt; in other words, this has a $Q$ of $10^{10}$! When this curve was measured it was the largest $Q$ of any oscillator that had ever been measured. It was measured by Dr. Mössbauer, and it was the basis of his Nobel prize. The horizontal scale here is velocity, because the technique for obtaining the slightly different frequencies was to use the Doppler effect, by moving the source relative to the absorber. One can see how delicate the experiment is when we realize that the speed involved is a few centimeters per second! On the actual scale of the figure, zero frequency would correspond to a point about $10^{10}$ cm to the left—slightly off the paper! Finally, if we look in an issue of the Physical Review, say that of January 1, 1962, will we find a resonance curve? Every issue has a resonance curve, and Fig. 23–11 is the resonance curve for this one. This resonance curve turns out to be very interesting. It is the resonance found in a certain reaction among strange particles, a reaction in which a K$^-$ and a proton interact. The resonance is detected by seeing how many of some kinds of particles come out, and depending on what and how many come out, one gets different curves, but of the same shape and with the peak at the same energy. We thus determine that there is a resonance at a certain energy for the K$^-$ meson. That presumably means that there is some kind of a state, or condition, corresponding to this resonance, which can be attained by putting together a K$^-$ and a proton. This is a new particle, or resonance. Today we do not know whether to call a bump like this a “particle” or simply a resonance. 
When there is a very sharp resonance, it corresponds to a very definite energy, just as though there were a particle of that energy present in nature. When the resonance gets wider, then we do not know whether to say there is a particle which does not last very long, or simply a resonance in the reaction probability. In the second chapter, this point is made about the particles, but when the second chapter was written this resonance was not known, so our chart should now have still another particle in it!
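As an arithmetic footnote to the Mössbauer example above: since energy is proportional to frequency in quantum mechanics, the enormous $Q$ follows directly from the two numbers quoted in the text.

```python
# The Mossbauer resonance: Q = omega0/gamma, and since energy is
# proportional to frequency, this is the same as E0/(line width).
E0 = 1e5       # resonance energy, 100,000 electron volts (from the text)
width = 1e-5   # line width, about 1e-5 electron volt (from the text)

Q = E0 / width
print(f"Q = {Q:.3g}")   # about 1e10, the Q quoted in the text
```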
24 Transients

24–1 The energy of an oscillator
Although this chapter is entitled “transients,” certain parts of it are, in a way, part of the last chapter on forced oscillation. One of the features of a forced oscillation which we have not yet discussed is the energy in the oscillation. Let us now consider that energy. In a mechanical oscillator, how much kinetic energy is there? It is proportional to the square of the velocity. Now we come to an important point. Consider an arbitrary quantity $A$, which may be the velocity or something else that we want to discuss. When we write $A = \hat{A}e^{i\omega t}$, a complex number, the true and honest $A$, in the physical world, is only the real part; therefore if, for some reason, we want to use the square of $A$, it is not right to square the complex number and then take the real part, because the real part of the square of a complex number is not just the square of the real part, but also involves the imaginary part. So when we wish to find the energy we have to get away from the complex notation for a while to see what the inner workings are. Now the true physical $A$ is the real part of $A_0e^{i(\omega t+\Delta)}$, that is, $A = A_0 \cos\,(\omega t + \Delta)$, where $\hat{A}$, the complex number, is written as $A_0e^{i\Delta}$. Now the square of this real physical quantity is $A^2 = A_0^2 \cos^2\,(\omega t + \Delta)$. The square of the quantity, then, goes up and down from a maximum to zero, like the square of the cosine. The square of the cosine has a maximum of $1$ and a minimum of $0$, and its average value is $1/2$. In many circumstances we are not interested in the energy at any specific moment during the oscillation; for a large number of applications we merely want the average of $A^2$, the mean of the square of $A$ over a period of time large compared with the period of oscillation. 
In those circumstances, the average of the cosine squared may be used, so we have the following theorem: if $A$ is represented by a complex number, then the mean of $A^2$ is equal to $\tfrac{1}{2}A_0^2$. Now $A_0^2$ is the square of the magnitude of the complex $\hat{A}$. (This can be written in many ways—some people like to write $\abs{\hat{A}}^2$; others write $\hat{A}\hat{A}\cconj$, $\hat{A}$ times its complex conjugate.) We shall use this theorem several times. Now let us consider the energy in a forced oscillator. The equation for the forced oscillator is \begin{equation} \label{Eq:I:24:1} m\,d^2x/dt^2+\gamma m\,dx/dt+m\omega_0^2x=F(t). \end{equation} In our problem, of course, $F(t)$ is a cosine function of $t$. Now let us analyze the situation: how much work is done by the outside force $F$? The work done by the force per second, i.e., the power, is the force times the velocity. (We know that the differential work in a time $dt$ is $F\,dx$, and the power is $F\,dx/dt$.) Thus \begin{equation} \label{Eq:I:24:2} P=F\,\ddt{x}{t}=m\biggl[\biggl(\ddt{x}{t}\biggr)\biggl( \frac{d^2x}{dt^2}\biggr)+\omega_0^2x\biggl(\ddt{x}{t}\biggr) \biggr]+\gamma m\biggl(\ddt{x}{t}\biggr)^2. \end{equation} But the first two terms on the right can also be written as $d/dt[\tfrac{1}{2}m(dx/dt)^2 + \tfrac{1}{2}m\omega_0^2x^2]$, as is immediately verified by differentiating. That is to say, the term in brackets is a pure derivative of two terms that are easy to understand—one is the kinetic energy of motion, and the other is the potential energy of the spring. Let us call this quantity the stored energy, that is, the energy stored in the oscillation. 
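The averaging theorem stated above, $\avg{A^2} = \tfrac{1}{2}A_0^2$, is easy to verify numerically; the amplitude, phase, and frequency in this sketch are arbitrary test values.

```python
import math

# Check that the time average of A^2 = A0^2 cos^2(omega*t + delta) over
# one full period equals A0^2/2.  A0, delta, omega are arbitrary choices.
A0, delta, omega = 3.0, 0.7, 5.0
T = 2 * math.pi / omega          # one period of the oscillation
N = 100_000                      # number of sample points

avg = sum(
    (A0 * math.cos(omega * (k + 0.5) * T / N + delta)) ** 2 for k in range(N)
) / N
print(avg, 0.5 * A0**2)          # the two numbers agree closely
```

The phase $\Delta$ drops out of the average entirely, which is why only the magnitude of $\hat{A}$ matters in the theorem.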
Suppose that we want the average power over many cycles when the oscillator is being forced and has been running for a long time. In the long run, the stored energy does not change—its derivative gives zero average effect. In other words, if we average the power in the long run, all the energy ultimately ends up in the resistive term $\gamma m(dx/dt)^2$. There is some energy stored in the oscillation, but that does not change with time, if we average over many cycles. Therefore the mean power $\avg{P}$ is \begin{equation} \label{Eq:I:24:3} \avg{P} = \avg{\gamma m(dx/dt)^2}. \end{equation} Using our method of writing complex numbers, and our theorem that $\avg{A^2} = \tfrac{1}{2}A_0^2$, we may find this mean power. Thus if $x= \hat{x}e^{i\omega t}$, then $dx/dt = i\omega\hat{x}e^{i\omega t}$. Therefore, in these circumstances, the average power could be written as \begin{equation} \label{Eq:I:24:4} \avg{P} = \tfrac{1}{2}\gamma m\omega^2x_0^2. \end{equation} In the notation for electrical circuits, $dx/dt$ is replaced by the current $I$ ($I$ is $dq/dt$, where $q$ corresponds to $x$), and $m\gamma$ corresponds to the resistance $R$. Thus the rate of the energy loss—the power used up by the forcing function—is the resistance in the circuit times the average square of the current: \begin{equation} \label{Eq:I:24:5} \avg{P} = R\avg{I^2} = R\cdot\tfrac{1}{2}I_0^2. \end{equation} This energy, of course, goes into heating the resistor; it is sometimes called the heating loss or the Joule heating. Another interesting feature to discuss is how much energy is stored. That is not the same as the power, because although power was at first used to store up some energy, after that the system keeps on absorbing power, insofar as there are any heating (resistive) losses. At any moment there is a certain amount of stored energy, so we would like to calculate the mean stored energy $\avg{E}$ also. 
We have already calculated what the average of $(dx/dt)^2$ is, so we find \begin{equation} \begin{aligned} \avg{E} &= \tfrac{1}{2}m \avg{(dx/dt)^2} + \tfrac{1}{2}m\omega_0^2 \avg{x^2}\\[1ex] &=\tfrac{1}{2}m(\omega^2+\omega_0^2)\tfrac{1}{2}x_0^2. \end{aligned} \label{Eq:I:24:6} \end{equation} Now, when an oscillator is very efficient, and if $\omega$ is near $\omega_0$, so that $\abs{\hat{x}}$ is large, the stored energy is very high—we can get a large stored energy from a relatively small force. The force does a great deal of work in getting the oscillation going, but then to keep it steady, all it has to do is to fight the friction. The oscillator can have a great deal of energy if the friction is very low, and even though it is oscillating strongly, not much energy is being lost. The efficiency of an oscillator can be measured by how much energy is stored, compared with how much work the force does per oscillation. How does the stored energy compare with the amount of work that is done in one cycle? This is called the $Q$ of the system, and $Q$ is defined as $2\pi$ times the mean stored energy, divided by the work done per cycle. (If we were to say the work done per radian instead of per cycle, then the $2\pi$ disappears.) \begin{equation} \label{Eq:I:24:7} Q=2\pi\,\frac{\tfrac{1}{2}m(\omega^2+\omega_0^2)\cdot \avg{x^2}}{\gamma m\omega^2\avg{x^2}\cdot 2\pi/\omega}=\frac{\omega^2+\omega_0^2}{2\gamma\omega}. \end{equation} $Q$ is not a very useful number unless it is very large. When it is relatively large, it gives a measure of how good the oscillator is. People have tried to define $Q$ in the simplest and most useful way; various definitions differ a bit from one another, but if $Q$ is very large, all definitions are in agreement. The most generally accepted definition is Eq. (24.7), which depends on $\omega$. 
For a good oscillator, close to resonance, we can simplify (24.7) a little by setting $\omega=\omega_0$, and we then have $Q= \omega_0/\gamma$, which is the definition of $Q$ that we used before. What is $Q$ for an electrical circuit? To find out, we merely have to translate $L$ for $m$, $R$ for $m\gamma$, and $1/C$ for $m\omega_0^2$ (see Table 23–1). The $Q$ at resonance is $L\omega/R$, where $\omega$ is the resonance frequency. If we consider a circuit with a high $Q$, that means that the amount of energy stored in the oscillation is very large compared with the amount of work done per cycle by the machinery that drives the oscillations.
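The relation between the general definition (24.7) and the resonance approximation $Q=\omega_0/\gamma$ can be sketched numerically; $\omega_0$ and $\gamma$ below are illustrative values for a good (high-$Q$) oscillator.

```python
import math

# Q from Eq. (24.7) versus the simplified Q = omega0/gamma used at
# resonance.  omega0 >> gamma here; both values are assumed examples.
omega0, gamma = 1000.0, 1.0

def Q_general(omega):
    """Eq. (24.7): Q = (omega^2 + omega0^2) / (2*gamma*omega)."""
    return (omega**2 + omega0**2) / (2 * gamma * omega)

# At omega = omega0 the general formula reduces exactly to omega0/gamma:
print(Q_general(omega0), omega0 / gamma)
# Slightly off resonance the two definitions differ only a little:
print(Q_general(1.01 * omega0))
```

This is the sense in which the various definitions of $Q$ agree whenever $Q$ is very large.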
24–2 Damped oscillations
We now turn to our main topic of discussion: transients. By a transient is meant a solution of the differential equation when there is no force present, but when the system is not simply at rest. (Of course, if it is standing still at the origin with no force acting, that is a nice problem—it stays there!) Suppose the oscillation starts another way: say it was driven by a force for a while, and then we turn off the force. What happens then? Let us first get a rough idea of what will happen for a very high $Q$ system. So long as a force is acting, the stored energy stays the same, and there is a certain amount of work done to maintain it. Now suppose we turn off the force, so that no more work is being done; then the losses can no longer eat up the energy of the supply—there is no more driver. The losses will have to consume, so to speak, the energy that is stored. Let us suppose that $Q/2\pi = 1000$. Then the work done per cycle is $1/1000$ of the stored energy. Is it not reasonable, since it is oscillating with no driving force, that in one cycle the system will still lose a thousandth of its energy $E$, which ordinarily would have been supplied from the outside, and that it will continue oscillating, always losing $1/1000$ of its energy per cycle? So, as a guess, for a relatively high $Q$ system, we would suppose that the following equation might be roughly right (we will later do it exactly, and it will turn out that it was right!): \begin{equation} \label{Eq:I:24:8} dE/dt=-\omega E/Q. \end{equation} This is rough because it is true only for large $Q$. In each radian the system loses a fraction $1/Q$ of the stored energy $E$. Thus in a given amount of time $dt$ the energy will change by an amount $\omega E\,dt/Q$, since the number of radians associated with the time $dt$ is $\omega\,dt$. What is the frequency? 
Let us suppose that the system moves so nicely, with hardly any force, that if we let go it will oscillate at essentially the same frequency all by itself. So we will guess that $\omega$ is the resonant frequency $\omega_0$. Then we deduce from Eq. (24.8) that the stored energy will vary as \begin{equation} \label{Eq:I:24:9} E=E_0e^{-\omega_0t/Q}=E_0e^{-\gamma t}. \end{equation} This would be the measure of the energy at any moment. What would the formula be, roughly, for the amplitude of the oscillation as a function of the time? The same? No! The amount of energy in a spring, say, goes as the square of the displacement; the kinetic energy goes as the square of the velocity; so the total energy goes as the square of the displacement. Thus the displacement, the amplitude of oscillation, will decrease half as fast because of the square. In other words, we guess that the solution for the damped transient motion will be an oscillation of frequency close to the resonance frequency $\omega_0$, in which the amplitude of the sine-wave motion will diminish as $e^{-\gamma t/2}$: \begin{equation} \label{Eq:I:24:10} x=A_0e^{-\gamma t/2}\cos\omega_0 t. \end{equation} This equation and Fig. 24–1 give us an idea of what we should expect; now let us try to analyze the motion precisely by solving the differential equation of the motion itself. So, starting with Eq. (24.1), with no outside force, how do we solve it? Being physicists, we do not have to worry about the method as much as we do about what the solution is. Armed with our previous experience, let us try as a solution an exponential curve, $x = Ae^{i\alpha t}$. (Why do we try this? It is the easiest thing to differentiate!) We put this into (24.1) (with $F(t) = 0$), using the rule that each time we differentiate $x$ with respect to time, we multiply by $i\alpha$. So it is really quite simple to substitute. 
Thus our equation looks like this: \begin{equation} \label{Eq:I:24:11} (-\alpha^2+i\gamma\alpha+\omega_0^2)Ae^{i\alpha t}=0. \end{equation} The net result must be zero for all times, which is impossible unless (a) $A = 0$, which is no solution at all—it stands still, or (b) \begin{equation} \label{Eq:I:24:12} -\alpha^2+i\alpha\gamma+\omega_0^2=0. \end{equation} If we can solve this and find an $\alpha$, then we will have a solution in which $A$ need not be zero! \begin{equation} \label{Eq:I:24:13} \alpha=i\gamma/2\pm\sqrt{\omega_0^2-\gamma^2/4}. \end{equation} For a while we shall assume that $\gamma$ is fairly small compared with $\omega_0$, so that $\omega_0^2-\gamma^2/4$ is definitely positive, and there is nothing the matter with taking the square root. The only bothersome thing is that we get two solutions! Thus \begin{equation} \label{Eq:I:24:14} \alpha_1 =i\gamma/2+\sqrt{\omega_0^2-\gamma^2/4}= i\gamma/2+\omega_\gamma \end{equation} and \begin{equation} \label{Eq:I:24:15} \alpha_2 =i\gamma/2-\sqrt{\omega_0^2-\gamma^2/4}=i\gamma/2-\omega_\gamma. \end{equation} Let us consider the first one, supposing that we had not noticed that the square root has two possible values. Then we know that a solution for $x$ is $x_1= Ae^{i\alpha_1t}$, where $A$ is any constant whatever. Now, in substituting $\alpha_1$, because it is going to come so many times and it takes so long to write, we shall call $\sqrt{\omega_0^2-\gamma^2/4}=\omega_\gamma$. Thus $i\alpha_1=-\gamma/2 + i\omega_\gamma$, and we get $x = Ae^{(-\gamma/2+i\omega_\gamma)t}$, or what is the same, because of the wonderful properties of an exponential, \begin{equation} \label{Eq:I:24:16} x_1=Ae^{-\gamma t/2}e^{i\omega_\gamma t}. \end{equation} First, we recognize this as an oscillation, an oscillation at a frequency $\omega_\gamma$, which is not exactly the frequency $\omega_0$, but is rather close to $\omega_0$ if it is a good system. Second, the amplitude of the oscillation is decreasing exponentially! 
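The two roots (24.14) and (24.15) can be checked numerically against the characteristic equation (24.12); $\omega_0$ and $\gamma$ here are arbitrary underdamped test values.

```python
import math

# Verify that alpha = i*gamma/2 +/- sqrt(omega0^2 - gamma^2/4) satisfies
# Eq. (24.12): -alpha^2 + i*gamma*alpha + omega0^2 = 0.
# omega0 and gamma are illustrative, with gamma < 2*omega0 (underdamped).
omega0, gamma = 10.0, 1.0
omega_g = math.sqrt(omega0**2 - gamma**2 / 4)   # omega_gamma of the text

alpha1 = 1j * gamma / 2 + omega_g    # Eq. (24.14)
alpha2 = 1j * gamma / 2 - omega_g    # Eq. (24.15)

for a in (alpha1, alpha2):
    residual = -a**2 + 1j * gamma * a + omega0**2
    assert abs(residual) < 1e-9      # both roots satisfy Eq. (24.12)

# The transient oscillates at omega_gamma, slightly below omega0:
print(omega_g)
```

Substituting $\alpha_1$ into $e^{i\alpha_1 t}$ gives $e^{-\gamma t/2}e^{i\omega_\gamma t}$, which is exactly the decaying oscillation of Eq. (24.16).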
If we take, for instance, the real part of (24.16), we get \begin{equation} \label{Eq:I:24:17} x_1=Ae^{-\gamma t/2}\cos\omega_\gamma t. \end{equation} This is very much like our guessed-at solution (24.10), except that the frequency really is $\omega_\gamma$. This is the only error, so it is the same thing—we have the right idea. But everything is not all right! What is not all right is that there is another solution. The other solution is $\alpha_2$, and we see that the difference is only that the sign of $\omega_\gamma$ is reversed: \begin{equation} \label{Eq:I:24:18} x_2=Be^{-\gamma t/2}e^{-i\omega_\gamma t}. \end{equation} What does this mean? We shall soon prove that if $x_1$ and $x_2$ are each a possible solution of Eq. (24.1) with $F = 0$, then $x_1 + x_2$ is also a solution of the same equation! So the general solution $x$ is of the mathematical form \begin{equation} \label{Eq:I:24:19} x=e^{-\gamma t/2}(Ae^{i\omega_\gamma t}+Be^{-i\omega_\gamma t}). \end{equation} Now we may wonder why we bother to give this other solution, since we were so happy with the first one all by itself. What is the extra one for, because of course we know we should only take the real part? We know that we must take the real part, but how did the mathematics know that we only wanted the real part? When we had a nonzero driving force $F(t)$, we put in an artificial force to go with it, and the imaginary part of the equation, so to speak, was driven in a definite way. But when we put $F(t) \equiv 0$, our convention that $x$ should be only the real part of whatever we write down is purely our own, and the mathematical equations do not know it yet. The physical world has a real solution, but the answer that we were so happy with before is not real, it is complex. 
The equation does not know that we are arbitrarily going to take the real part, so it has to present us, so to speak, with a complex conjugate type of solution, so that by putting them together we can make a truly real solution; that is what $\alpha_2$ is doing for us. In order for $x$ to be real, $Be^{-i\omega_\gamma t}$ will have to be the complex conjugate of $Ae^{i\omega_\gamma t}$, so that the imaginary parts disappear. So it turns out that $B$ is the complex conjugate of $A$, and our real solution is \begin{equation} \label{Eq:I:24:20} x=e^{-\gamma t/2}(Ae^{i\omega_\gamma t}+A\cconj e^{-i\omega_\gamma t}). \end{equation} So our real solution is an oscillation with a phase shift and a damping—just as advertised.
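Looking back at the rough energy argument of Eqs. (24.8) and (24.9): for a high-$Q$ oscillator the stored energy drops by about $2\pi/Q$ per cycle. A quick numerical sketch, using the text's example size $Q/2\pi = 1000$:

```python
import math

# Energy decay E = E0*exp(-omega0*t/Q), Eq. (24.9), evaluated after one
# cycle T = 2*pi/omega0.  Q is chosen so that Q/(2*pi) = 1000, as in the
# text; omega0 is in arbitrary units since only omega0*T/Q matters.
Q = 2 * math.pi * 1000
omega0 = 1.0
T = 2 * math.pi / omega0         # one period

fraction_left = math.exp(-omega0 * T / Q)
loss_per_cycle = 1 - fraction_left
print(loss_per_cycle)            # about 1/1000, as the rough argument says
```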
24–3 Electrical transients
Now let us see if the above really works. We construct the electrical circuit shown in Fig. 24–2, in which we apply to an oscilloscope the voltage across the inductance $L$ after we suddenly turn on a voltage by closing the switch $S$. It is an oscillatory circuit, and it generates a transient of some kind. It corresponds to a circumstance in which we suddenly apply a force and the system starts to oscillate. It is the electrical analog of a damped mechanical oscillator, and we watch the oscillation on an oscilloscope, where we should see the curves that we were trying to analyze. (The horizontal motion of the oscilloscope is driven at a uniform speed, while the vertical motion is the voltage across the inductor. The rest of the circuit is only a technical detail. We would like to repeat the experiment many, many times, since the persistence of vision is not good enough to see only one trace on the screen. So we do the experiment again and again by closing the switch $60$ times a second; each time we close the switch, we also start the oscilloscope horizontal sweep, and it draws the curve over and over.) In Figs. 24–3 to 24–6 we see examples of damped oscillations, actually photographed on an oscilloscope screen. Figure 24–3 shows a damped oscillation in a circuit which has a high $Q$, a small $\gamma$. It does not die out very fast; it oscillates many times on the way down. But let us see what happens as we decrease $Q$, so that the oscillation dies out more rapidly. We can decrease $Q$ by increasing the resistance $R$ in the circuit. When we increase the resistance in the circuit, it dies out faster (Fig. 24–4). Then if we increase the resistance in the circuit still more, it dies out faster still (Fig. 24–5). But when we put in more than a certain amount, we cannot see any oscillation at all! The question is, is this because our eyes are not good enough? If we increase the resistance still more, we get a curve like that of Fig. 
24–6, which does not appear to have any oscillations, except perhaps one. Now, how can we explain that by mathematics? The resistance is, of course, proportional to the $\gamma$ term in the mechanical device. Specifically, $\gamma$ is $R/L$. Now if we increase the $\gamma$ in the solutions (24.14) and (24.15) that we were so happy with before, chaos sets in when $\gamma/2$ exceeds $\omega_0$; we must write it a different way, as \begin{equation*} i\gamma/2+i\sqrt{\gamma^2/4-\omega_0^2}\quad \text{and}\quad i\gamma/2-i\sqrt{\gamma^2/4-\omega_0^2}. \end{equation*} Those are now the two solutions and, following the same line of mathematical reasoning as previously, we again find two solutions: $e^{i\alpha_1 t}$ and $e^{i\alpha_2 t}$. If we now substitute for $\alpha_1$, we get \begin{equation*} x=Ae^{-(\gamma/2+\sqrt{\gamma^2/4-\omega_0^2})t}, \end{equation*} a nice exponential decay with no oscillations. Likewise, the other solution is \begin{equation*} x=Be^{-(\gamma/2-\sqrt{\gamma^2/4-\omega_0^2})t}. \end{equation*} Note that the square root cannot exceed $\gamma/2$, because even if $\omega_0=0$, one term just equals the other. But $\omega_0^2$ is taken away from $\gamma^2/4$, so the square root is less than $\gamma/2$, and the term in parentheses is, therefore, always a positive number. Thank goodness! Why? Because if it were negative, we would find $e$ raised to a positive factor times $t$, which would mean it was exploding! In putting more and more resistance into the circuit, we know it is not going to explode—quite the contrary. So now we have two solutions, each one by itself a dying exponential, but one having a much faster “dying rate” than the other. The general solution is of course a combination of the two; the coefficients in the combination depending upon how the motion starts—what the initial conditions of the problem are. 
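As a quick numerical check (a Python sketch; the values $\gamma = 10$, $\omega_0 = 3$ are invented so that $\gamma/2$ exceeds $\omega_0$), both decay rates satisfy the characteristic condition $r^2 - \gamma r + \omega_0^2 = 0$, and both are positive, so each solution is a pure decay with no oscillation:

```python
import math

def decay_rates(gamma, omega0):
    """The two decay rates gamma/2 +/- sqrt(gamma^2/4 - omega0^2),
    valid in the overdamped case gamma/2 > omega0."""
    s = math.sqrt(gamma**2 / 4 - omega0**2)
    return gamma / 2 + s, gamma / 2 - s

gamma, omega0 = 10.0, 3.0          # illustrative values with gamma/2 > omega0
r_fast, r_slow = decay_rates(gamma, omega0)

# x = e^{-rt} solves x'' + gamma x' + omega0^2 x = 0 exactly when
# r^2 - gamma r + omega0^2 = 0; both rates satisfy it, and both are positive.
for r in (r_fast, r_slow):
    print(r, r**2 - gamma * r + omega0**2)
```

One rate (here $9$) is much faster than the other (here $1$): the fast dying exponential and the slow one.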
In the particular way this circuit happens to be starting, the $A$ is negative and the $B$ is positive, so we get the difference of two exponential curves. Now let us discuss how we can find the two coefficients $A$ and $B$ (or $A$ and $A\cconj$), if we know how the motion was started. Suppose that at $t = 0$ we know that $x = x_0$, and $dx/dt = v_0$. If we put $t= 0$, $x = x_0$, and $dx/dt = v_0$ into the expressions \begin{align*} x&=e^{-\gamma t/2}(Ae^{i\omega_\gamma t}+ A\cconj e^{-i\omega_\gamma t}),\\[1ex] dx/dt&=e^{-\gamma t/2}\bigl[ (-\gamma/2+i\omega_\gamma)Ae^{i\omega_\gamma t}+ (-\gamma/2-i\omega_\gamma)A\cconj e^{-i\omega_\gamma t}\,\bigr], \end{align*} we find, since $e^0 =$ $e^{i0} =$ $1$, \begin{align*} x_0&=A+A\cconj=2A_R,\\[1ex] v_0&=-(\gamma/2)(A+A\cconj)+i\omega_\gamma(A-A\cconj)\\[.5ex] &=-\gamma x_0/2+i\omega_\gamma(2iA_I), \end{align*} where $A = A_R + iA_I$, and $A\cconj = A_R - iA_I$. Thus we find \begin{equation} A_R =x_0/2\notag \end{equation} and \begin{equation} \label{Eq:I:24:21} A_I =-(v_0+\gamma x_0/2)/2\omega_\gamma. \end{equation} This completely determines $A$ and $A\cconj$, and therefore the complete curve of the transient solution, in terms of how it begins. Incidentally, we can write the solution another way if we note that \begin{equation*} e^{i\theta}+e^{-i\theta}=2\cos\theta \quad\text{and}\quad e^{i\theta}-e^{-i\theta}=2i\sin\theta. \end{equation*} We may then write the complete solution as \begin{equation} \label{Eq:I:24:22} x=e^{-\gamma t/2}\biggl[ x_0\cos\omega_\gamma t+ \frac{v_0+\gamma x_0/2}{\omega_\gamma}\sin\omega_\gamma t \biggr], \end{equation} where $\omega_\gamma=+\sqrt{\omega_0^2-\gamma^2/4}$.
This is the mathematical expression for the way an oscillation dies out. We shall not make direct use of it, but there are a number of points we should like to emphasize that are true in more general cases. First of all, the behavior of such a system with no external force is expressed by a sum, or superposition, of pure exponentials in time (which we wrote as $e^{i\alpha t}$). This is a good solution to try in such circumstances. The values of $\alpha$ may be complex in general, the imaginary parts representing damping. Finally, the intimate mathematical relation of the sinusoidal and exponential functions discussed in Chapter 22 often appears physically as a change from oscillatory to exponential behavior when some physical parameter (in this case resistance, $\gamma$) exceeds some critical value.
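Equation (24.22) is easy to spot-check. The Python sketch below (with invented values for $x_0$, $v_0$, $\gamma$, $\omega_0$) confirms that the formula starts at $x_0$ and, by a central difference, that its initial slope is $v_0$:

```python
import math

def transient(t, x0, v0, gamma, omega0):
    """x(t) from Eq. (24.22); assumes the oscillatory case gamma/2 < omega0."""
    wg = math.sqrt(omega0**2 - gamma**2 / 4)
    return math.exp(-gamma * t / 2) * (
        x0 * math.cos(wg * t)
        + (v0 + gamma * x0 / 2) / wg * math.sin(wg * t))

x0, v0, gamma, omega0 = 1.0, 2.0, 0.4, 3.0   # illustrative initial conditions
h = 1e-6
slope = (transient(h, x0, v0, gamma, omega0)
         - transient(-h, x0, v0, gamma, omega0)) / (2 * h)
print(transient(0.0, x0, v0, gamma, omega0), slope)  # x0, and (nearly) v0
```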
25 Linear Systems and Review

25–1 Linear differential equations
In this chapter we shall discuss certain aspects of oscillating systems that are found somewhat more generally than just in the particular systems we have been discussing. For our particular system, the differential equation that we have been solving is \begin{equation} \label{Eq:I:25:1} m\,\frac{d^2x}{dt^2}+\gamma m\,\ddt{x}{t}+m\omega_0^2x=F(t). \end{equation} Now this particular combination of “operations” on the variable $x$ has the interesting property that if we substitute $(x + y)$ for $x$, then we get the sum of the same operations on $x$ and $y$; or, if we multiply $x$ by $a$, then we get just $a$ times the same combination. This is easy to prove. Just as a “shorthand” notation, because we get tired of writing down all those letters in (25.1), we shall use the symbol $\uL(x)$ instead. When we see this, it means the left-hand side of (25.1), with $x$ substituted in. With this system of writing, $\uL(x + y)$ would mean the following: \begin{equation} \label{Eq:I:25:2} \uL(x+y)=m\,\frac{d^2(x+y)}{dt^2}+\gamma m\,\ddt{(x+y)}{t} +m\omega_0^2(x+y). \end{equation} (We underline the $\uL$ so as to remind ourselves that it is not an ordinary function.) We sometimes call this an operator notation, but it makes no difference what we call it, it is just “shorthand.” Our first statement was that \begin{equation} \label{Eq:I:25:3} \uL(x+y)=\uL(x)+\uL(y), \end{equation} which of course follows from the fact that $a(x + y) = ax + ay$, $d(x + y)/dt = dx/dt + dy/dt$, etc. Our second statement was, for constant $a$, \begin{equation} \label{Eq:I:25:4} \uL(ax)=a\uL(x). \end{equation} [Actually, (25.3) and (25.4) are very closely related, because if we put $x + x$ into (25.3), this is the same as setting $a = 2$ in (25.4), and so on.]
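The two statements (25.3) and (25.4) can be illustrated numerically. In the Python sketch below (the constants $m$, $\gamma$, $\omega_0$ are arbitrary), $\uL$ is built from central differences and applied to two sample functions:

```python
import math

m, gamma, omega0 = 1.0, 0.3, 2.0      # illustrative constants in L

def L(f, t, h=1e-3):
    """Numerical version of the left side of (25.1):
    m f'' + gamma m f' + m omega0^2 f, by central differences."""
    f1 = (f(t + h) - f(t - h)) / (2 * h)
    f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    return m * f2 + gamma * m * f1 + m * omega0**2 * f(t)

x, y, t = math.sin, math.exp, 0.7     # two sample functions and a sample time
print(abs(L(lambda s: x(s) + y(s), t) - (L(x, t) + L(y, t))) < 1e-6)  # (25.3)
print(abs(L(lambda s: 5.0 * x(s), t) - 5.0 * L(x, t)) < 1e-6)          # (25.4)
```

Both tests pass to rounding accuracy, since every operation in $\uL$ (differentiation, multiplication by constants, addition) is itself linear.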
In more complicated problems, there may be more derivatives, and more terms in $\uL$; the question of interest is whether the two equations (25.3) and (25.4) are maintained or not. If they are, we call such a problem a linear problem. In this chapter we shall discuss some of the properties that exist because the system is linear, to appreciate the generality of some of the results that we have obtained in our special analysis of a special equation. Now let us study some of the properties of linear differential equations, having illustrated them already with the specific equation (25.1) that we have studied so closely. The first property of interest is this: suppose that we have to solve the differential equation for a transient, the free oscillation with no driving force. That is, we want to solve \begin{equation} \label{Eq:I:25:5} \uL(x)=0. \end{equation} Suppose that, by some hook or crook, we have found a particular solution, which we shall call $x_1$. That is, we have an $x_1$ for which $\uL(x_1) = 0$. Now we notice that $ax_1$ is also a solution to the same equation; we can multiply this special solution by any constant whatever, and get a new solution. In other words, if we had a motion of a certain “size,” then a motion twice as “big” is again a solution. Proof: $\uL(ax_1) =$ $a\uL(x_1) =$ $a\cdot0 = 0$. Next, suppose that, by hook or by crook, we have not only found one solution $x_1$, but also another solution, $x_2$. (Remember that when we substituted $x = e^{i\alpha t}$ for finding the transients, we found two values for $\alpha$, that is, two solutions, $x_1$ and $x_2$.) Now let us show that the combination $(x_1 + x_2)$ is also a solution. In other words, if we put $x = x_1 + x_2$, $x$ is again a solution of the equation. Why? Because, if $\uL(x_1) = 0$ and $\uL(x_2) = 0$, then $\uL(x_1 + x_2) = \uL(x_1) + \uL(x_2) = 0 + 0 = 0$. So if we have found a number of solutions for the motion of a linear system we can add them together. 
Combining these two ideas, we see, of course, that we can also add six of one and two of the other: if $x_1$ is a solution, so is $\alpha x_1$. Therefore any sum of these two solutions, such as $(\alpha x_1 + \beta x_2)$, is also a solution. If we happen to be able to find three solutions, then we find that any combination of the three solutions is again a solution, and so on. It turns out that the number of what we call independent solutions1 that we have obtained for our oscillator problem is only two. The number of independent solutions that one finds in the general case depends upon what is called the number of degrees of freedom. We shall not discuss this in detail now, but if we have a second-order differential equation, there are only two independent solutions, and we have found both of them; so we have the most general solution. Now let us go on to another proposition, which applies to the situation in which the system is subjected to an outside force. Suppose the equation is \begin{equation} \label{Eq:I:25:6} \uL(x)=F(t), \end{equation} and suppose that we have found a special solution of it. Let us say that Joe’s solution is $x_J$, and that $\uL(x_J) = F(t)$. Suppose we want to find yet another solution; suppose we add to Joe’s solution one of those that was a solution of the free equation (25.5), say $x_1$. Then we see by (25.3) that \begin{equation} \label{Eq:I:25:7} \uL(x_J+x_1)=\uL(x_J)+\uL(x_1)=F(t)+0=F(t). \end{equation} Therefore, to the “forced” solution we can add any “free” solution, and we still have a solution. The free solution is called a transient solution. When we have no force acting, and suddenly turn one on, we do not immediately get the steady solution that we solved for with the sine wave solution, but for a while there is a transient which sooner or later dies out, if we wait long enough.
The “forced” solution does not die out, since it keeps on being driven by the force. Ultimately, for long periods of time, the solution is unique, but initially the motions are different for different circumstances, depending on how the system was started.
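Equation (25.7) can be checked concretely. The Python sketch below (illustrative parameters; the steady-state amplitude is the standard complex-response formula from the forced-oscillation analysis) adds one free damped oscillation to the forced solution and verifies, by finite differences, that the sum still satisfies $\uL(x) = F(t)$:

```python
import cmath
import math

m, gamma, omega0 = 1.0, 0.5, 2.0      # illustrative oscillator parameters
F0, omega = 1.5, 3.0                  # driving force F = F0 cos(omega t)
wg = math.sqrt(omega0**2 - gamma**2 / 4)

def x_forced(t):
    """A particular 'forced' solution: the steady-state response to F0 cos(omega t)."""
    amp = F0 / (m * (omega0**2 - omega**2 + 1j * gamma * omega))
    return (amp * cmath.exp(1j * omega * t)).real

def x_free(t):
    """One 'free' (transient) solution of L(x) = 0: a damped oscillation."""
    return math.exp(-gamma * t / 2) * math.cos(wg * t)

def L(f, t, h=1e-4):
    """m f'' + gamma m f' + m omega0^2 f, by central differences."""
    f1 = (f(t + h) - f(t - h)) / (2 * h)
    f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
    return m * f2 + gamma * m * f1 + m * omega0**2 * f(t)

t = 1.3
total = lambda s: x_forced(s) + x_free(s)
print(abs(L(total, t) - F0 * math.cos(omega * t)) < 1e-3)  # still a solution
```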
25–2 Superposition of solutions
Now we come to another interesting proposition. Suppose that we have a certain particular driving force $F_a$ (let us say an oscillatory one with a certain $\omega = \omega_a$, but our conclusions will be true for any functional form of $F_a$) and we have solved for the forced motion (with or without the transients; it makes no difference). Now suppose some other force is acting, let us say $F_b$, and we solve the same problem, but for this different force. Then suppose someone comes along and says, “I have a new problem for you to solve; I have the force $F_a + F_b$.” Can we do it? Of course we can do it, because the solution is the sum of the two solutions $x_a$ and $x_b$ for the forces taken separately—a most remarkable circumstance indeed. If we use (25.3), we see that \begin{equation} \label{Eq:I:25:8} \uL(x_a+x_b)=\uL(x_a)+\uL(x_b)=F_a(t)+F_b(t). \end{equation} This is an example of what is called the principle of superposition for linear systems, and it is very important. It means the following: if we have a complicated force which can be broken up in any convenient manner into a sum of separate pieces, each of which is in some way simple, in the sense that for each special piece into which we have divided the force we can solve the equation, then the answer is available for the whole force, because we may simply add the pieces of the solution back together, in the same manner as the total force is compounded out of pieces (Fig. 25–1). Let us give another example of the principle of superposition.
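First, though, (25.8) is easy to verify numerically. The Python sketch below (all parameters invented) integrates the oscillator from rest under $F_a$, under $F_b$, and under $F_a + F_b$, and checks that the third response is the sum of the first two:

```python
import math

def simulate(force, m=1.0, gamma=0.3, omega0=2.0, dt=0.001, T=5.0):
    """Integrate m x'' + gamma m x' + m omega0^2 x = force(t) from rest, by RK4.
    Returns the displacement at time T."""
    def acc(x, v, t):
        return force(t) / m - gamma * v - omega0**2 * x
    x, v, t = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        k1x, k1v = v, acc(x, v, t)
        k2x, k2v = v + dt/2 * k1v, acc(x + dt/2 * k1x, v + dt/2 * k1v, t + dt/2)
        k3x, k3v = v + dt/2 * k2v, acc(x + dt/2 * k2x, v + dt/2 * k2v, t + dt/2)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x, v + dt * k3v, t + dt)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return x

Fa = lambda t: math.cos(1.5 * t)          # one driving force
Fb = lambda t: math.sin(2.5 * t)          # another driving force
x_a, x_b = simulate(Fa), simulate(Fb)
x_ab = simulate(lambda t: Fa(t) + Fb(t))  # drive with the sum F_a + F_b
print(abs(x_ab - (x_a + x_b)) < 1e-9)     # the response to the sum is x_a + x_b
```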
In Chapter 12 we said that it was one of the great facts of the laws of electricity that if we have a certain distribution of charges $q_a$ and calculate the electric field $\FLPE_a$ arising from these charges at a certain place $P$, and if, on the other hand, we have another set of charges $q_b$ and we calculate the field $\FLPE_b$ due to these at the corresponding place, then if both charge distributions are present at the same time, the field $\FLPE$ at $P$ is the sum of $\FLPE_a$ due to one set plus $\FLPE_b$ due to the other. In other words, if we know the field due to a certain charge, then the field due to many charges is merely the vector sum of the fields of these charges taken individually. This is exactly analogous to the above proposition that if we know the result of two given forces taken at one time, then if the force is considered as a sum of them, the response is a sum of the corresponding individual responses. The reason why this is true in electricity is that the great laws of electricity, Maxwell’s equations, which determine the electric field, turn out to be differential equations which are linear, i.e., which have the property (25.3). What corresponds to the force is the charge generating the electric field, and the equation which determines the electric field in terms of the charge is linear. As another interesting example of this proposition, let us ask how it is possible to “tune in” to a particular radio station at the same time as all the radio stations are broadcasting. The radio station transmits, fundamentally, an oscillating electric field of very high frequency which acts on our radio antenna. It is true that the amplitude of the oscillation of the field is changed, modulated, to carry the signal of the voice, but that is very slow, and we are not going to worry about it. 
When one hears “This station is broadcasting at a frequency of $780$ kilocycles,” this indicates that $780{,}000$ oscillations per second is the frequency of the electric field of the station antenna, and this drives the electrons up and down at that frequency in our antenna. Now at the same time we may have another radio station in the same town radiating at a different frequency, say $550$ kilocycles per second; then the electrons in our antenna are also being driven by that frequency. Now the question is, how is it that we can separate the signals coming into the one radio at $780$ kilocycles from those coming in at $550$ kilocycles? We certainly do not hear both stations at the same time. By the principle of superposition, the response of the electric circuit in the radio, the first part of which is a linear circuit, to the forces that are acting due to the electric field $F_a + F_b$, is $x_a + x_b$. It therefore looks as though we will never disentangle them. In fact, the very proposition of superposition seems to insist that we cannot avoid having both of them in our system. But remember, for a resonant circuit, the response curve, the amount of $x$ per unit $F$, as a function of the frequency, looks like Fig. 25–3. If it were a very high $Q$ circuit, the response would show a very sharp maximum. Now suppose that the two stations are comparable in strength, that is, the two forces are of the same magnitude. The response that we get is the sum of $x_a$ and $x_b$. But, in Fig. 25–3, $x_a$ is tremendous, while $x_b$ is small. So, in spite of the fact that the two signals are equal in strength, when they go through the sharp resonant circuit of the radio tuned for $\omega_a$, the frequency of the transmission of one station, then the response to this station is much greater than to the other. Therefore the complete response, with both signals acting, is almost all made up of $\omega_a$, and we have selected the station we want. Now what about the tuning? 
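Before coming to the tuning, we can put rough numbers to this selection argument. The Python sketch below uses the response magnitude per unit force, $1/\lvert\omega_0^2-\omega^2+i\gamma\omega\rvert$, for a circuit tuned to $780$ kilocycles; the $Q$ of $100$ is an invented figure for illustration:

```python
import math

def response(omega, omega0, gamma):
    """Magnitude of x per unit F for the resonant circuit (mass/inductance set to 1)."""
    return 1.0 / abs(complex(omega0**2 - omega**2, gamma * omega))

f0, Q = 780e3, 100.0               # tuned to 780 kc; Q = 100 is an invented figure
omega0 = 2 * math.pi * f0
gamma = omega0 / Q
wa = 2 * math.pi * 780e3           # the station we want
wb = 2 * math.pi * 550e3           # the equally strong station we do not want
ratio = response(wa, omega0, gamma) / response(wb, omega0, gamma)
print(ratio)                       # about 50
```

With these numbers the station at $780$ kilocycles comes through about fifty times stronger than the one at $550$, even though the two driving forces are equal in strength.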
How do we tune it? We change $\omega_0$ by changing the $L$ or the $C$ of the circuit, because the frequency of the circuit has to do with the combination of $L$ and $C$. In particular, most radios are built so that one can change the capacitance. When we retune the radio, we can make a new setting of the dial, so that the natural frequency of the circuit is shifted, say, to $\omega_c$. In those circumstances we hear neither one station nor the other; we get silence, provided there is no other station at frequency $\omega_c$. If we keep on changing the capacitance until the resonance curve is at $\omega_b$, then of course we hear the other station. That is how radio tuning works; it is again the principle of superposition, combined with a resonant response.2 To conclude this discussion, let us describe qualitatively what happens if we proceed further in analyzing a linear problem with a given force, when the force is quite complicated. Out of the many possible procedures, there are two especially useful general ways that we can solve the problem. One is this: suppose that we can solve it for special known forces, such as sine waves of different frequencies. We know it is child’s play to solve it for sine waves. So we have the so-called “child’s play” cases. Now the question is whether our very complicated force can be represented as the sum of two or more “child’s play” forces. In Fig. 25–1 we already had a fairly complicated curve, and of course we can make it more complicated still if we add in more sine waves. So it is certainly possible to obtain very complicated curves. And, in fact, the reverse is also true: practically every curve can be obtained by adding together infinite numbers of sine waves of different wavelengths (or frequencies) for each one of which we know the answer. 
We just have to know how much of each sine wave to put in to make the given $F$, and then our answer, $x$, is the corresponding sum of the $F$ sine waves, each multiplied by its effective ratio of $x$ to $F$. This method of solution is called the method of Fourier transforms or Fourier analysis. We are not going to actually carry out such an analysis just now; we only wish to describe the idea involved. Another way in which our complicated problem can be solved is the following very interesting one. Suppose that, by some tremendous mental effort, it were possible to solve our problem for a special force, namely an impulse. The force is quickly turned on and then off; it is all over. Actually we need only solve for an impulse of some unit strength, any other strength can be gotten by multiplication by an appropriate factor. We know that the response $x$ for an impulse is a damped oscillation. Now what can we say about some other force, for instance a force like that of Fig. 25–4? Such a force can be likened to a succession of blows with a hammer. First there is no force, and all of a sudden there is a steady force—impulse, impulse, impulse, impulse, … and then it stops. In other words, we imagine the continuous force to be a series of impulses, very close together. Now, we know the result for an impulse, so the result for a whole series of impulses will be a whole series of damped oscillations: it will be the curve for the first impulse, and then (slightly later) we add to that the curve for the second impulse, and the curve for the third impulse, and so on. Thus we can represent, mathematically, the complete solution for arbitrary functions if we know the answer for an impulse. We get the answer for any other force simply by integrating. This method is called the Green’s function method. A Green’s function is a response to an impulse, and the method of analyzing any force by putting together the response of impulses is called the Green’s function method. 
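A minimal sketch of this idea in Python (with invented $m$, $\gamma$, $\omega_0$): the unit impulse response follows from Eq. (24.22) with $x_0 = 0$ and $v_0 = 1/m$, and a force suddenly turned on is treated as a close series of impulses. For a steady force $F_0 = 1$ switched on at $t = 0$, the sum of impulse responses should settle toward the static displacement $F_0/(m\omega_0^2)$:

```python
import math

m, gamma, omega0 = 1.0, 0.4, 2.0             # illustrative oscillator
wg = math.sqrt(omega0**2 - gamma**2 / 4)

def green(t):
    """Response to a unit impulse at t = 0: starts at x = 0 with v = 1/m."""
    if t <= 0.0:
        return 0.0
    return math.exp(-gamma * t / 2) * math.sin(wg * t) / (m * wg)

def force(t):
    """No force, then all of a sudden a steady force F0 = 1."""
    return 1.0 if t >= 0.0 else 0.0

def x_by_green(t, dt=0.001):
    """Add up the series of impulses: x(t) = sum of G(t - s) F(s) ds."""
    return sum(green(t - i * dt) * force(i * dt) * dt
               for i in range(int(t / dt)))

print(x_by_green(20.0))   # approaches F0/(m*omega0^2) = 0.25
```

Each little slice $F(s)\,ds$ of the force is a hammer blow, and the total displacement is the sum of the damped oscillations it leaves behind.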
The physical principles involved in both of these schemes are so simple, involving just the linear equation, that they can be readily understood, but the mathematical problems that are involved, the complicated integrations and so on, are a little too advanced for us to attack right now. You will most likely return to this some day when you have had more practice in mathematics. But the idea is very simple indeed. Finally, we make some remarks on why linear systems are so important. The answer is simple: because we can solve them! So most of the time we solve linear problems. Second (and most important), it turns out that the fundamental laws of physics are often linear. The Maxwell equations for the laws of electricity are linear, for example. The great laws of quantum mechanics turn out, so far as we know, to be linear equations. That is why we spend so much time on linear equations: because if we understand linear equations, we are ready, in principle, to understand a lot of things. We mention another situation where linear equations are found. When displacements are small, many functions can be approximated linearly. For example, if we have a simple pendulum, the correct equation for its motion is \begin{equation} \label{Eq:I:25:9} d^2\theta/dt^2=-(g/L)\sin\theta. \end{equation} This equation can be solved by elliptic functions, but the easiest way to solve it is numerically, as was shown in Chapter 9 on Newton’s Laws of Motion. A nonlinear equation cannot be solved, ordinarily, any other way but numerically. Now for small $\theta$, $\sin\theta$ is practically equal to $\theta$, and we have a linear equation. It turns out that there are many circumstances where small effects are linear: for the example here the swing of a pendulum through small arcs. As another example, if we pull a little bit on a spring, the force is proportional to the extension. If we pull hard, we break the spring, and the force is a completely different function of the distance! 
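The pendulum approximation is easy to quantify; a few lines of Python (sample angles chosen arbitrarily) show that the fractional error of replacing $\sin\theta$ by $\theta$ falls off as $\theta^2/6$:

```python
import math

for theta in (0.5, 0.1, 0.01):                  # angles in radians
    frac_err = (theta - math.sin(theta)) / theta
    print(theta, frac_err, theta**2 / 6)        # the error is roughly theta^2/6
```

At $\theta = 0.1$ radian (about $6$ degrees) the linear equation is already good to better than one part in five hundred.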
Linear equations are important. In fact they are so important that perhaps fifty percent of the time we are solving linear equations in physics and in engineering.
25–3 Oscillations in linear systems
Let us now review the things we have been talking about in the past few chapters. It is very easy for the physics of oscillators to become obscured by the mathematics. The physics is actually very simple, and if we may forget the mathematics for a moment we shall see that we can understand almost everything that happens in an oscillating system. First, if we have only the spring and the weight, it is easy to understand why the system oscillates—it is a consequence of inertia. We pull the mass down and the force pulls it back up; as it passes zero, which is the place it likes to be, it cannot just suddenly stop; because of its momentum it keeps on going and swings to the other side, and back and forth. So, if there were no friction, we would surely expect an oscillatory motion, and indeed we get one. But if there is even a little bit of friction, then on the return cycle, the swing will not be quite as high as it was the first time. Now what happens, cycle by cycle? That depends on the kind and amount of friction. Suppose that we could concoct a kind of friction force that always remains in the same proportion to the other forces, of inertia and in the spring, as the amplitude of oscillation varies. In other words, for smaller oscillations the friction should be weaker than for big oscillations. Ordinary friction does not have this property, so a special kind of friction must be carefully invented for the very purpose of creating a friction that is directly proportional to the velocity—so that for big oscillations it is stronger and for small oscillations it is weaker. If we happen to have that kind of friction, then at the end of each successive cycle the system is in the same condition as it was at the start, except a little bit smaller. All the forces are smaller in the same proportion: the spring force is reduced, the inertial effects are lower because the accelerations are now weaker, and the friction is less too, by our careful design. 
When we actually have that kind of friction, we find that each oscillation is exactly the same as the first one, except reduced in amplitude. If the first cycle dropped the amplitude, say, to $90$ percent of what it was at the start, the next will drop it to $90$ percent of $90$ percent, and so on: the sizes of the oscillations are reduced by the same fraction of themselves in every cycle. An exponential function is a curve which does just that. It changes by the same factor in each equal interval of time. That is to say, if the amplitude of one cycle, relative to the preceding one, is called $a$, then the amplitude of the next is $a^2$, and of the next, $a^3$. So the amplitude is some constant raised to a power equal to the number of cycles traversed: \begin{equation} \label{Eq:I:25:10} A=A_0a^n. \end{equation} But of course $n\propto t$, so it is perfectly clear that the general solution will be some kind of an oscillation, sine or cosine $\omega t$, times an amplitude which goes as $b^t$ more or less. But $b$ can be written as $e^{-c}$, if $b$ is positive and less than $1$. So this is why the solution looks like $e^{-ct}\cos\omega_0 t$. It is very simple. What happens if the friction is not so artificial; for example, ordinary rubbing on a table, so that the friction force is a certain constant amount, independent of the size of the oscillation, which reverses its direction each half-cycle? Then the equation is no longer linear, it becomes hard to solve, and must be solved by the numerical method given in Chapter 9, or by considering each half-cycle separately. The numerical method is the most powerful method of all, and can solve any equation. It is only when we have a simple problem that we can use mathematical analysis. Mathematical analysis is not the grand thing it is said to be; it solves only the simplest possible equations. As soon as the equations get a little more complicated, just a shade—they cannot be solved analytically.
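Equation (25.10), and the rewriting of $b$ as $e^{-c}$, can be checked in a few lines of Python ($a = 0.9$ is just an example ratio):

```python
import math

A0, a = 1.0, 0.9                     # each cycle keeps 90 percent of the amplitude
amps = [A0 * a**n for n in range(5)]
print(amps)                          # the amplitude falls by the same factor every cycle

# The same decay written as an exponential: a**n equals e^{-c n} with c = -ln(a)
c = -math.log(a)
print(abs(a**7 - math.exp(-c * 7)) < 1e-12)
```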
But the numerical method, which was advertised at the beginning of the course, can take care of any equation of physical interest. Next, what about the resonance curve? Why is there a resonance? First, imagine for a moment that there is no friction, and we have something which could oscillate by itself. If we tapped the pendulum just right each time it went by, of course we could make it go like mad. But if we close our eyes and do not watch it, and tap at arbitrary equal intervals, what is going to happen? Sometimes we will find ourselves tapping when it is going the wrong way. When we happen to have the timing just right, of course, each tap is given at just the right time, and so it goes higher and higher and higher. So without friction we get a curve which looks like the solid curve in Fig. 25–5 for different frequencies. Qualitatively, we understand the resonance curve; in order to get the exact shape of the curve it is probably just as well to do the mathematics. The curve goes toward infinity as $\omega\to\omega_0$, where $\omega_0$ is the natural frequency of the oscillator. Now suppose there is a little bit of friction; then when the displacement of the oscillator is small, the friction does not affect it much; the resonance curve is the same, except when we are near resonance. Instead of becoming infinite near resonance, the curve is only going to get so high that the work done by our tapping each time is enough to compensate for the loss of energy by friction during the cycle. So the top of the curve is rounded off—it does not go to infinity. If there is more friction, the top of the curve is rounded off still more. Now someone might say, “I thought the widths of the curves depended on the friction.” That is because the curve is usually plotted so that the top of the curve is called one unit. 
However, the mathematical expression is even simpler to understand if we just plot all the curves on the same scale; then all that happens is that the friction cuts down the top! If there is less friction, we can go farther up into that little pinnacle before the friction cuts it off, so it looks relatively narrow. That is, the higher the peak of the curve, the narrower the width at half the maximum height. Finally, we take the case where there is an enormous amount of friction. It turns out that if there is too much friction, the system does not oscillate at all. The energy in the spring is barely able to move it against the frictional force, and so it slowly oozes down to the equilibrium point.
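This point can be made quantitative with a short Python sketch ($\omega_0 = 10$ and the two friction values are invented): plotted on the same scale, doubling the friction halves the peak and roughly doubles the width at half the maximum height:

```python
import math

def amplitude(omega, omega0, gamma):
    """Response per unit force (m = 1): 1/|omega0^2 - omega^2 + i*gamma*omega|."""
    return 1.0 / abs(complex(omega0**2 - omega**2, gamma * omega))

def half_max_width(gamma, omega0=10.0, dw=1e-4):
    """Width of the resonance curve at half its maximum height, by brute force."""
    peak = amplitude(omega0, omega0, gamma)      # top of the curve, about 1/(gamma*omega0)
    ws = [omega0 - 1.0 + i * dw for i in range(int(2.0 / dw))]
    above = [w for w in ws if amplitude(w, omega0, gamma) >= peak / 2]
    return above[-1] - above[0]

w1 = half_max_width(0.2)      # light friction: a tall, narrow pinnacle
w2 = half_max_width(0.4)      # double the friction: half the peak, twice the width
print(w2 / w1)                # close to 2
```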
25–4 Analogs in physics
The next aspect of this review is to note that masses and springs are not the only linear systems; there are others. In particular, there are electrical systems called linear circuits, in which we find a complete analog to mechanical systems. We did not learn exactly why each of the objects in an electrical circuit works in the way it does—that is not to be understood at the present moment; we may assert it as an experimentally verifiable fact that they behave as stated. For example, let us take the simplest possible circumstance. We have a piece of wire, which is just a resistance, and we have applied to it a difference in potential, $V$. Now the $V$ means this: if we carry a charge $q$ through the wire from one terminal to another terminal, the work done is $qV$. The higher the voltage difference, the more work was done when the charge, as we say, “falls” from the high potential end of the terminal to the low potential end. So charges release energy in going from one end to the other. Now the charges do not simply fly from one end straight to the other end; the atoms in the wire offer some resistance to the current, and this resistance obeys the following law for almost all ordinary substances: if there is a current $I$, that is, so and so many charges per second tumbling down, the number per second that comes tumbling through the wire is proportional to how hard we push them—in other words, proportional to how much voltage there is: \begin{equation} \label{Eq:I:25:11} V=IR=R(dq/dt). \end{equation} The coefficient $R$ is called the resistance, and the equation is called Ohm’s Law. The unit of resistance is the ohm; it is equal to one volt per ampere. In mechanical situations, to get such a frictional force in proportion to the velocity is difficult; in an electrical system it is very easy, and this law is extremely accurate for most metals. 
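In numbers (the voltage and resistance here are invented for illustration), Ohm's law and the $I^2R$ heating loss work out like this:

```python
V, R = 120.0, 240.0        # volts across a resistor of 240 ohms (illustrative)
I = V / R                  # Ohm's law, Eq. (25.11): 0.5 ampere
P = I**2 * R               # heating loss I^2 R, the work done per second
print(I, P, V * I)         # I^2 R and V I agree: 60 watts either way
```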
We are often interested in how much work is done per second, the power loss, or the energy liberated by the charges as they tumble down the wire. When we carry a charge $q$ through a voltage $V$, the work is $qV$, so the work done per second would be $V(dq/dt)$, which is the same as $VI$, or also $IR\cdot I= I^2 R$. This is called the heating loss—this is how much heat is generated in the resistance per second, by the conservation of energy. It is this heat that makes an ordinary incandescent light bulb work. Of course, there are other interesting properties of mechanical systems, such as the mass (inertia), and it turns out that there is an electrical analog to inertia also. It is possible to make something called an inductor, having a property called inductance, such that a current, once started through the inductance, does not want to stop. It requires a voltage in order to change the current! If the current is constant, there is no voltage across an inductance. dc circuits do not know anything about inductance; it is only when we change the current that the effects of inductance show up. The equation is \begin{equation} \label{Eq:I:25:12} V=L(dI/dt)=L(d^2q/dt^2), \end{equation} and the unit of inductance, called the henry, is such that one volt applied to an inductance of one henry produces a change of one ampere per second in the current. Equation (25.12) is the analog of Newton’s law for electricity, if we wish: $V$ corresponds to $F$, $L$ corresponds to $m$, and $I$ corresponds to velocity! All of the consequent equations for the two kinds of systems will have the same derivations because, in all the equations, we can change any letter to its corresponding analog letter and we get the same equation; everything we deduce will have a correspondence in the two systems. Now what electrical thing corresponds to the mechanical spring, in which there was a force proportional to the stretch? 
If we start with $F= kx$ and replace $F\to V$ and $x\to q$, we get $V = \alpha q$. It turns out that there is such a thing, in fact, it is the only one of the three circuit elements we can really understand, because we did study a pair of parallel plates, and we found that if there were equal and opposite charges of a certain amount on the two plates, the electric field between them would be proportional to the size of the charge. So the work done in moving a unit charge across the gap from one plate to the other is precisely proportional to the charge. This work is the definition of the voltage difference, and it is the line integral of the electric field from one plate to another. It turns out, for historical reasons, that the constant of proportionality is not called $C$, but $1/C$. It could have been called $C$, but it was not. So we have \begin{equation} \label{Eq:I:25:13} V=q/C. \end{equation} The unit of capacitance, $C$, is the farad; a charge of one coulomb on each plate of a one-farad capacitor yields a voltage difference of one volt. There are our analogies, and the equation corresponding to the oscillating circuit becomes the following, by direct substitution of $L$ for $m$, $q$ for $x$, etc.: \begin{alignat}{2} \label{Eq:I:25:14} m(d^2x/dt^2)&\,+\,\gamma m(dx/dt)+kx&&=F,\\[1.5ex] \label{Eq:I:25:15} L(d^2q/dt^2)&\,+\,R(dq/dt)+q/C&&=V. \end{alignat} Now everything we learned about (25.14) can be transformed to apply to (25.15). Every consequence is the same; so much the same that there is a brilliant thing we can do. Suppose we have a mechanical system which is quite complicated, not just one mass on a spring, but several masses on several springs, all hooked together. What do we do? Solve it? Perhaps; but look, we can make an electrical circuit which will have the same equations as the thing we are trying to analyze!
For instance, if we wanted to analyze a mass on a spring, why can we not build an electrical circuit in which we use an inductance proportional to the mass, a resistance proportional to the corresponding $m\gamma$, $1/C$ proportional to $k$, all in the same ratio? Then, of course, this electrical circuit will be the exact analog of our mechanical one, in the sense that whatever $q$ does, in response to $V$ ($V$ also is made to correspond to the forces that are acting), so the $x$ would do in response to the force! So if we have a complicated thing with a whole lot of interconnecting elements, we can interconnect a whole lot of resistances, inductances, and capacitances, to imitate the mechanically complicated system. What is the advantage to that? One problem is just as hard (or as easy) as the other, because they are exactly equivalent. The advantage is not that it is any easier to solve the mathematical equations after we discover that we have an electrical circuit (although that is the method used by electrical engineers!), but instead, the real reason for looking at the analog is that it is easier to make the electrical circuit, and to change something in the system. Suppose we have designed an automobile, and want to know how much it is going to shake when it goes over a certain kind of bumpy road. We build an electrical circuit with inductances to represent the inertia of the wheels, spring constants as capacitances to represent the springs of the wheels, and resistors to represent the shock absorbers, and so on for the other parts of the automobile. Then we need a bumpy road. All right, we apply a voltage from a generator, which represents such and such a kind of bump, and then look at how the left wheel jiggles by measuring the charge on some capacitor. Having measured it (it is easy to do), we find that it is bumping too much. Do we need more shock absorber, or less shock absorber? 
With a complicated thing like an automobile, do we actually change the shock absorber, and solve it all over again? No! We simply turn a dial; dial number ten is shock absorber number three, so we put in more shock absorber. The bumps are worse—all right, we try less. The bumps are still worse; we change the stiffness of the spring (dial $17$), and we adjust all these things electrically, with merely the turn of a knob. This is called an analog computer. It is a device which imitates the problem that we want to solve by making another problem, which has the same equation, but in another circumstance of nature, and which is easier to build, to measure, to adjust, and to destroy!
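The correspondence can also be demonstrated in simulation. The sketch below (all parameter values are invented for illustration) feeds one and the same integrator the mechanical constants and then their electrical analogs; the capacitor charge traces out exactly the mass's displacement:

```python
# Sketch of the "analog computer" idea, with invented parameter values.
# The mechanical equation  m x'' + (gamma m) x' + k x = F  and the
# electrical equation  L q'' + R q' + q/C = V  are the same equation
# under the substitutions m -> L, gamma*m -> R, k -> 1/C, F -> V, x -> q.

def simulate(inertia, damping, stiffness, drive, y0, v0, dt=1e-3, steps=5000):
    """Semi-implicit Euler for inertia*y'' + damping*y' + stiffness*y = drive."""
    y, v = y0, v0
    path = []
    for _ in range(steps):
        a = (drive - damping * v - stiffness * y) / inertia
        v += a * dt
        y += v * dt
        path.append(y)
    return path

m, gamma, k = 2.0, 0.25, 8.0        # assumed mechanical constants
L, R, C = m, gamma * m, 1.0 / k     # analog circuit: L = m, R = gamma*m, 1/C = k

x_path = simulate(m, gamma * m, k, 0.0, 1.0, 0.0)    # displacement x(t)
q_path = simulate(L, R, 1.0 / C, 0.0, 1.0, 0.0)      # capacitor charge q(t)

# "Whatever q does, x would do": the two trajectories coincide.
assert max(abs(x - q) for x, q in zip(x_path, q_path)) < 1e-12
```

Turning a "dial" is then just changing $R$ or $C$ and rerunning, which is the whole charm of the method.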
25 Linear Systems and Review
25–5 Series and parallel impedances
Finally, there is an important item which is not quite in the nature of review. This has to do with an electrical circuit in which there is more than one circuit element. For example, when we have an inductor, a resistor, and a capacitor connected as in Fig. 24–2, we note that all the charge went through every one of the three, so that the current in such a singly connected thing is the same at all points along the wire. Since the current is the same in each one, the voltage across $R$ is $IR$, the voltage across $L$ is $L(dI/dt)$, and so on. So, the total voltage drop is the sum of these, and this leads to Eq. (25.15). Using complex numbers, we found that we could solve the equation for the steady-state motion in response to a sinusoidal force. We thus found that $\hat{V}= \hat{Z}\hat{I}$. Now $\hat{Z}$ is called the impedance of this particular circuit. It tells us that if we apply a sinusoidal voltage, $\hat{V}$, we get a current $\hat{I}$. Now suppose we have a more complicated circuit which has two pieces, which by themselves have certain impedances, $\hat{Z}_1$ and $\hat{Z}_2$, and we put them in series (Fig. 25–6a) and apply a voltage. What happens? It is now a little more complicated, but if $\hat{I}$ is the current through $\hat{Z}_1$, the voltage difference across $\hat{Z}_1$ is $\hat{V}_1=\hat{I}\hat{Z}_1$; similarly, the voltage across $\hat{Z}_2$ is $\hat{V}_2=\hat{I}\hat{Z}_2$. The same current goes through both. Therefore the total voltage is the sum of the voltages across the two sections and is equal to $\hat{V}= \hat{V}_1 + \hat{V}_2 =(\hat{Z}_1 + \hat{Z}_2)\hat{I}$. This means that the voltage on the complete circuit can be written $\hat{V}=\hat{I}\hat{Z}_s$, where the $\hat{Z}_s$ of the combined system in series is the sum of the two $\hat{Z}$’s of the separate pieces: \begin{equation} \label{Eq:I:25:16} \hat{Z}_s=\hat{Z}_1 + \hat{Z}_2. \end{equation} This is not the only way things may be connected.
We may also connect them in another way, called a parallel connection (Fig. 25–6b). Now we see that a given voltage across the terminals, if the connecting wires are perfect conductors, is effectively applied to both of the impedances, and will cause currents in each independently. Therefore the current through $\hat{Z}_1$ is equal to $\hat{I}_1 = \hat{V}/\hat{Z}_1$. The current in $\hat{Z}_2$ is $\hat{I}_2 = \hat{V}/\hat{Z}_2$. It is the same voltage. Now the total current which is supplied to the terminals is the sum of the currents in the two sections: $\hat{I}= \hat{V}/\hat{Z}_1 + \hat{V}/\hat{Z}_2$. This can be written as \begin{equation} \hat{V}=\frac{\hat{I}}{(1/\hat{Z}_1)+(1/\hat{Z}_2)}= \hat{I}\hat{Z}_p.\notag \end{equation} Thus \begin{equation} \label{Eq:I:25:17} 1/\hat{Z}_p=1/\hat{Z}_1 + 1/\hat{Z}_2. \end{equation} More complicated circuits can sometimes be simplified by taking pieces of them, working out the succession of impedances of the pieces, and combining the circuit together step by step, using the above rules. If we have any kind of circuit with many impedances connected in all kinds of ways, and if we include the voltages in the form of little generators having no impedance (when we pass charge through it, the generator adds a voltage $V$), then the following principles apply: (1) At any junction, the sum of the currents into a junction is zero. That is, all the current which comes in must come back out. (2) If we carry a charge around any loop, and back to where it started, the net work done is zero. These rules are called Kirchhoff’s laws for electrical circuits. Their systematic application to complicated circuits often simplifies the analysis of such circuits. We mention them here in conjunction with Eqs. (25.16) and (25.17), in case you have already come across such circuits that you need to analyze in laboratory work. They will be discussed again in more detail next year.
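The series and parallel rules, Eqs. (25.16) and (25.17), are conveniently exercised with Python's complex numbers. The element values and driving frequency below are arbitrary illustrative choices:

```python
# Complex impedances at an assumed driving frequency omega:
# Z_R = R, Z_L = i*omega*L, Z_C = 1/(i*omega*C).
R = 50.0        # ohms (assumed)
L = 0.1         # henrys (assumed)
C = 1e-6        # farads (assumed)
omega = 1000.0  # radians per second (assumed)

Z_R = complex(R, 0)
Z_L = 1j * omega * L          # impedance of the inductor
Z_C = 1 / (1j * omega * C)    # impedance of the capacitor

# Series connection, Eq. (25.16), extended to three elements:
Z_series = Z_R + Z_L + Z_C

# Parallel connection of the resistor and inductor, Eq. (25.17):
Z_parallel = 1 / (1 / Z_R + 1 / Z_L)

V_hat = 10.0                  # assumed sinusoidal voltage amplitude
I_hat = V_hat / Z_series      # complex current: amplitude and phase
print(abs(I_hat), abs(Z_series))
```

The magnitude of $\hat{I}$ gives the current amplitude and its argument gives the phase lag, exactly as in the mechanical steady-state solution.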
26 Optics: The Principle of Least Time
26–1 Light
This is the first of a number of chapters on the subject of electromagnetic radiation. Light, with which we see, is only one small part of a vast spectrum of the same kind of thing, the various parts of this spectrum being distinguished by different values of a certain quantity which varies. This variable quantity could be called the “wavelength.” As it varies in the visible spectrum, the light apparently changes color from red to violet. If we explore the spectrum systematically, from long wavelengths toward shorter ones, we would begin with what are usually called radiowaves. Radiowaves are technically available in a wide range of wavelengths, some even longer than those used in regular broadcasts; regular broadcasts have wavelengths corresponding to about $500$ meters. Then there are the so-called “short waves,” i.e., radar waves, millimeter waves, and so on. There are no actual boundaries between one range of wavelengths and another, because nature did not present us with sharp edges. The numbers associated with a given name for the waves are only approximate and, of course, so are the names we give to the different ranges. Then, a long way down through the millimeter waves, we come to what we call the infrared, and thence to the visible spectrum. Then going in the other direction, we get into a region which is called the ultraviolet. Where the ultraviolet stops, the x-rays begin, but we cannot define precisely where this is; it is roughly at $10^{-8}$ m, or $10^{-2}$ $\mu$m. These are “soft” x-rays; then there are ordinary x-rays and very hard x-rays; then $\gamma$-rays, and so on, for smaller and smaller values of this dimension called the wavelength. Within this vast range of wavelengths, there are three or more regions of approximation which are especially interesting.
In one of these, a condition exists in which the wavelengths involved are very small compared with the dimensions of the equipment available for their study; furthermore, the photon energies, using the quantum theory, are small compared with the energy sensitivity of the equipment. Under these conditions we can make a rough first approximation by a method called geometrical optics. If, on the other hand, the wavelengths are comparable to the dimensions of the equipment, which is difficult to arrange with visible light but easier with radiowaves, and if the photon energies are still negligibly small, then a very useful approximation can be made by studying the behavior of the waves, still disregarding the quantum mechanics. This method is based on the classical theory of electromagnetic radiation, which will be discussed in a later chapter. Next, if we go to very short wavelengths, where we can disregard the wave character but the photons have a very large energy compared with the sensitivity of our equipment, things get simple again. This is the simple photon picture, which we will describe only very roughly. The complete picture, which unifies the whole thing into one model, will not be available to us for a long time. In this chapter our discussion is limited to the geometrical optics region, in which we forget about the wavelength and the photon character of the light, which will all be explained in due time. We do not even bother to say what the light is, but just find out how it behaves on a large scale compared with the dimensions of interest. All this must be said in order to emphasize the fact that what we are going to talk about is only a very crude approximation; this is one of the chapters that we shall have to “unlearn” again. But we shall very quickly unlearn it, because we shall almost immediately go on to a more accurate method. 
Although geometrical optics is just an approximation, it is of very great importance technically and of great interest historically. We shall present this subject more historically than some of the others in order to give some idea of the development of a physical theory or physical idea. First, light is, of course, familiar to everybody, and has been familiar since time immemorial. Now one problem is, by what process do we see light? There have been many theories, but it finally settled down to one, which is that there is something which enters the eye—which bounces off objects into the eye. We have heard that idea so long that we accept it, and it is almost impossible for us to realize that very intelligent men have proposed contrary theories—that something comes out of the eye and feels for the object, for example. Some other important observations are that, as light goes from one place to another, it goes in straight lines, if there is nothing in the way, and that the rays do not seem to interfere with one another. That is, light is crisscrossing in all directions in the room, but the light that is passing across our line of vision does not affect the light that comes to us from some object. This was once a most powerful argument against the corpuscular theory; it was used by Huygens. If light were like a lot of arrows shooting along, how could other arrows go through them so easily? Such philosophical arguments are not of much weight. One could always say that light is made up of arrows which go through each other!
26–2 Reflection and refraction
The discussion above gives enough of the basic idea of geometrical optics—now we have to go a little further into the quantitative features. Thus far we have light going only in straight lines between two points; now let us study the behavior of light when it hits various materials. The simplest object is a mirror, and the law for a mirror is that when the light hits the mirror, it does not continue in a straight line, but bounces off the mirror into a new straight line, which changes when we change the inclination of the mirror. The question for the ancients was, what is the relation between the two angles involved? This is a very simple relation, discovered long ago. The light striking a mirror travels in such a way that the two angles, between each beam and the mirror, are equal. For some reason it is customary to measure the angles from the normal to the mirror surface. Thus the so-called law of reflection is \begin{equation} \label{Eq:I:26:1} \theta_i=\theta_r. \end{equation} That is a simple enough proposition, but a more difficult problem is encountered when light goes from one medium into another, for example from air into water; here also, we see that it does not go in a straight line. In the water the ray is at an inclination to its path in the air; if we change the angle $\theta_i$ so that it comes down more nearly vertically, then the angle of “breakage” is not as great. But if we tilt the beam of light at quite an angle, then the deviation angle is very large. The question is, what is the relation of one angle to the other? This also puzzled the ancients for a long time, and here they never found the answer! It is, however, one of the few places in all of Greek physics that one may find any experimental results listed. Claudius Ptolemy made a list of the angle in water for each of a number of different angles in air. Table 26–1 shows the angles in the air, in degrees, and the corresponding angle as measured in the water. 
(Ordinarily it is said that Greek scientists never did any experiments. But it would be impossible to obtain this table of values without knowing the right law, except by experiment. It should be noted, however, that these do not represent independent careful measurements for each angle but only some numbers interpolated from a few measurements, for they all fit perfectly on a parabola.) This, then, is one of the important steps in the development of physical law: first we observe an effect, then we measure it and list it in a table; then we try to find the rule by which one thing can be connected with another. The above numerical table was made in 140 a.d., but it was not until 1621 that someone finally found the rule connecting the two angles! The rule, found by Willebrord Snell, a Dutch mathematician, is as follows: if $\theta_i$ is the angle in air and $\theta_r$ is the angle in the water, then it turns out that the sine of $\theta_i$ is equal to some constant multiple of the sine of $\theta_r$: \begin{equation} \label{Eq:I:26:2} \sin\theta_i=n\sin\theta_r. \end{equation} For water the number $n$ is approximately $1.33$. Equation (26.2) is called Snell’s law; it permits us to predict how the light is going to bend when it goes from air into water. Table 26–2 shows the angles in air and in water according to Snell’s law. Note the remarkable agreement with Ptolemy’s list.
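Snell's law is easy to evaluate directly; the short sketch below recomputes the refraction angles in water for a run of angles in air, in the spirit of Table 26–2, taking $n = 1.33$:

```python
# Angles in water predicted by Snell's law, sin(theta_i) = n*sin(theta_r),
# for angles in air from 10 to 80 degrees, with n = 1.33 for water.
import math

n = 1.33
rows = []
for theta_i_deg in range(10, 81, 10):
    theta_i = math.radians(theta_i_deg)
    theta_r = math.asin(math.sin(theta_i) / n)
    rows.append((theta_i_deg, round(math.degrees(theta_r), 1)))
print(rows)   # e.g. 10 degrees in air refracts to about 7.5 degrees in water
```

Comparing such a run against Ptolemy's measured values shows the same close agreement the text remarks on.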
26–3 Fermat’s principle of least time
Now in the further development of science, we want more than just a formula. First we have an observation, then we have numbers that we measure, then we have a law which summarizes all the numbers. But the real glory of science is that we can find a way of thinking such that the law is evident. The first way of thinking that made the law about the behavior of light evident was discovered by Fermat in about 1650, and it is called the principle of least time, or Fermat’s principle. His idea is this: that out of all possible paths that it might take to get from one point to another, light takes the path which requires the shortest time. Let us first show that this is true for the case of the mirror, that this simple principle contains both the law of straight-line propagation and the law for the mirror. So, we are growing in our understanding! Let us try to find the solution to the following problem. In Fig. 26–3 are shown two points, $A$ and $B$, and a plane mirror, $MM'$. What is the way to get from $A$ to $B$ in the shortest time? The answer is to go straight from $A$ to $B$! But if we add the extra rule that the light has to strike the mirror and come back in the shortest time, the answer is not so easy. One way would be to go as quickly as possible to the mirror and then go to $B$, on the path $ADB$. Of course, we then have a long path $DB$. If we move over a little to the right, to $E$, we slightly increase the first distance, but we greatly decrease the second one, and so the total path length, and therefore the travel time, is less. How can we find the point $C$ for which the time is the shortest? We can find it very nicely by a geometrical trick. We construct on the other side of $MM'$ an artificial point $B'$, which is the same distance below the plane $MM'$ as the point $B$ is above the plane. Then we draw the line $EB'$. Now because $BFM$ is a right angle and $BF = FB'$, $EB$ is equal to $EB'$. 
Therefore the sum of the two distances, $AE + EB$, which is proportional to the time it will take if the light travels with constant velocity, is also the sum of the two lengths $AE + EB'$. Therefore the problem becomes, when is the sum of these two lengths the least? The answer is easy: when the line goes through point $C$ as a straight line from $A$ to $B'$! In other words, we have to find the point where we go toward the artificial point, and that will be the correct one. Now if $ACB'$ is a straight line, then angle $BCF$ is equal to angle $B'CF$ and thence to angle $ACM$. Thus the statement that the angle of incidence equals the angle of reflection is equivalent to the statement that the light goes to the mirror in such a way that it comes back to the point $B$ in the least possible time. Originally, the statement was made by Hero of Alexandria that the light travels in such a way that it goes to the mirror and to the other point in the shortest possible distance, so it is not a modern theory. It was this that inspired Fermat to suggest to himself that perhaps refraction operated on a similar basis. But for refraction, light obviously does not use the path of shortest distance, so Fermat tried the idea that it takes the shortest time. Before we go on to analyze refraction, we should make one more remark about the mirror. If we have a source of light at the point $B$ and it sends light toward the mirror, then we see that the light which goes to $A$ from the point $B$ comes to $A$ in exactly the same manner as it would have come to $A$ if there were an object at $B'$, and no mirror. Now of course the eye detects only the light which enters it physically, so if we have an object at $B$ and a mirror which makes the light come into the eye in exactly the same manner as it would have come into the eye if the object were at $B'$, then the eye-brain system interprets that, assuming it does not know too much, as being an object at $B'$. 
So the illusion that there is an object behind the mirror is merely due to the fact that the light which is entering the eye is entering in exactly the same manner, physically, as it would have entered had there been an object back there (except for the dirt on the mirror, and our knowledge of the existence of the mirror, and so on, which is corrected in the brain). Now let us demonstrate that the principle of least time will give Snell’s law of refraction. We must, however, make an assumption about the speed of light in water. We shall assume that the speed of light in water is lower than the speed of light in air by a certain factor, $n$. In Fig. 26–4, our problem is again to go from $A$ to $B$ in the shortest time. To illustrate that the best thing to do is not just to go in a straight line, let us imagine that a beautiful girl has fallen out of a boat, and she is screaming for help in the water at point $B$. The line marked $x$ is the shoreline. We are at point $A$ on land, and we see the accident, and we can run and can also swim. But we can run faster than we can swim. What do we do? Do we go in a straight line? (Yes, no doubt!) However, by using a little more intelligence we would realize that it would be advantageous to travel a little greater distance on land in order to decrease the distance in the water, because we go so much slower in the water. (Following this line of reasoning out, we would say the right thing to do is to compute very carefully what should be done!) At any rate, let us try to show that the final solution to the problem is the path $ACB$, and that this path takes the shortest time of all possible ones. If it is the shortest path, that means that if we take any other, it will be longer. So, if we were to plot the time it takes against the position of point $X$, we would get a curve something like that shown in Fig. 26–5, where point $C$ corresponds to the shortest of all possible times. 
This means that if we move the point $X$ to points near $C$, in the first approximation there is essentially no change in time because the slope is zero at the bottom of the curve. So our way of finding the law will be to consider that we move the place by a very small amount, and to demand that there be essentially no change in time. (Of course there is an infinitesimal change of a second order; we ought to have a positive increase for displacements in either direction from $C$.) So we consider a nearby point $X$ and we calculate how long it would take to go from $A$ to $B$ by the two paths, and compare the new path with the old path. It is very easy to do. We want the difference, of course, to be nearly zero if the distance $XC$ is short. First, look at the path on land. If we draw a perpendicular $XE$, we see that this path is shortened by the amount $EC$. Let us say we gain by not having to go that extra distance. On the other hand, in the water, by drawing a corresponding perpendicular, $CF$, we find that we have to go the extra distance $XF$, and that is what we lose. Or, in time, we gain the time it would have taken to go the distance $EC$, but we lose the time it would have taken to go the distance $XF$. Those times must be equal since, in the first approximation, there is to be no change in time. But supposing that in the water the speed is $1/n$ times as fast as in air, then we must have \begin{equation} \label{Eq:I:26:3} EC=n\cdot XF. \end{equation} Therefore we see that when we have the right point, $X\!C\sin E\!X\!C = n\cdot X\!C\sin X\!C\!F$ or, cancelling the common hypotenuse length $XC$ and noting that \begin{equation*} EXC=ECN=\theta_i \,\text{ and }\, XCF\approx BCN'=\theta_r\; (\text{when $X$ is near $C$}), \end{equation*} we have \begin{equation} \label{Eq:I:26:4} \sin\theta_i=n\sin\theta_r.
\end{equation} So we see that to get from one point to another in the least time when the ratio of speeds is $n$, the light should enter at such an angle that the ratio of the sines of the angles $\theta_i$ and $\theta_r$ is the ratio of the speeds in the two media.
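The stationary-time argument can be verified numerically. The sketch below assumes a specific geometry ($A$ one unit above the water line, $B$ one unit below and one unit across, both invented for illustration), minimizes the total travel time over the crossing point, and checks that the minimizer satisfies Eq. (26.2):

```python
# Numeric check that the least-time path obeys Snell's law.
# Assumed geometry: A = (0, 1) on land, B = (1, -1) in the water,
# shoreline along y = 0; speed 1 in air and 1/n in water.
import math

n = 1.33
ax, ay = 0.0, 1.0
bx, by = 1.0, -1.0

def travel_time(x):
    # time on land plus n times the distance covered in the water
    return math.hypot(x - ax, ay) + n * math.hypot(bx - x, by)

# travel_time is convex in x, so a ternary search finds its minimum.
lo, hi = 0.0, 1.0
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

sin_i = (x - ax) / math.hypot(x - ax, ay)   # sine of the angle in air
sin_r = (bx - x) / math.hypot(bx - x, by)   # sine of the angle in water
assert abs(sin_i - n * sin_r) < 1e-6        # Snell's law, Eq. (26.2)
```

Changing the positions of $A$ and $B$ moves the crossing point, but the relation between the sines always comes out the same, which is exactly the content of the principle.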
26–4 Applications of Fermat’s principle
Now let us consider some of the interesting consequences of the principle of least time. First is the principle of reciprocity. If to go from $A$ to $B$ we have found the path of the least time, then to go in the opposite direction (assuming that light goes at the same speed in any direction), the shortest time will be the same path, and therefore, if light can be sent one way, it can be sent the other way. An example of interest is a glass block with plane parallel faces, set at an angle to a light beam. Light, in going through the block from a point $A$ to a point $B$ (Fig. 26–6) does not go through in a straight line, but instead it decreases the time in the block by making the angle in the block less inclined, although it loses a little bit in the air. The beam is simply displaced parallel to itself because the angles in and out are the same. A third interesting phenomenon is the fact that when we see the sun setting, it is already below the horizon! It does not look as though it is below the horizon, but it is (Fig. 26–7). The earth’s atmosphere is thin at the top and dense at the bottom. Light travels more slowly in air than it does in a vacuum, and so the light of the sun can get to point $S$ beyond the horizon more quickly if, instead of just going in a straight line, it avoids the dense regions where it goes slowly by getting through them at a steeper tilt. When it appears to go below the horizon, it is actually already well below the horizon. Another example of this phenomenon is the mirage that one often sees while driving on hot roads. One sees “water” on the road, but when he gets there, it is as dry as the desert! The phenomenon is the following. What we are really seeing is the sky light “reflected” on the road: light from the sky, heading for the road, can end up in the eye, as shown in Fig. 26–8. Why? The air is very hot just above the road but it is cooler up higher. 
Hotter air is more expanded than cooler air and is thinner, and this decreases the speed of light less. That is to say, light goes faster in the hot region than in the cool region. Therefore, instead of the light deciding to come in the straightforward way, it also has a least-time path by which it goes into the region where it goes faster for awhile, in order to save time. So, it can go in a curve. As another important example of the principle of least time, suppose that we would like to arrange a situation where we have all the light that comes out of one point, $P$, collected back together at another point, $P'$ (Fig. 26–9). That means, of course, that the light can go in a straight line from $P$ to $P'$. That is all right. But how can we arrange that not only does it go straight, but also so that the light starting out from $P$ toward $Q$ also ends up at $P'$? We want to bring all the light back to what we call a focus. How? If the light always takes the path of least time, then certainly it should not want to go over all these other paths. The only way that the light can be perfectly satisfied to take several adjacent paths is to make those times exactly equal! Otherwise, it would select the one of least time. Therefore the problem of making a focusing system is merely to arrange a device so that it takes the same time for the light to go on all the different paths! This is easy to do. Suppose that we had a piece of glass in which light goes slower than it does in the air (Fig. 26–10). Now consider a ray which goes in air in the path $PQP'$. That is a longer path than from $P$ directly to $P'$ and no doubt takes a longer time. But if we were to insert a piece of glass of just the right thickness (we shall later figure out how thick) it might exactly compensate the excess time that it would take the light to go at an angle! 
In those circumstances we can arrange that the time the light takes to go straight through is the same as the time it takes to go in the path $PQP'$. Likewise, if we take a ray $PRR'P'$ which is partly inclined, it is not quite as long as $PQP'$, and we do not have to compensate as much as for the straight one, but we do have to compensate somewhat. We end up with a piece of glass that looks like Fig. 26–10. With this shape, all the light which comes from $P$ will go to $P'$. This, of course, is well known to us, and we call such a device a converging lens. In the next chapter we shall actually calculate what shape the lens has to have to make a perfect focus. Take another example: suppose we wish to arrange some mirrors so that the light from $P$ always goes to $P'$ (Fig. 26–11). On any path, it goes to some mirror and comes back, and all times must be equal. Here the light always travels in air, so the time and the distance are proportional. Therefore the statement that all the times are the same is the same as the statement that the total distance is the same. Thus the sum of the two distances $r_1$ and $r_2$ must be a constant. An ellipse is that curve which has the property that the sum of the distances from two points is a constant for every point on the ellipse; thus we can be sure that the light from one focus will come to the other. The same principle works for gathering the light of a star. The great $200$-inch Palomar telescope is built on the following principle. Imagine a star billions of miles away; we would like to cause all the light that comes in to come to a focus. Of course we cannot draw the rays that go all the way up to the star, but we still want to check whether the times are equal. Of course we know that when the various rays have arrived at some plane $KK'$, perpendicular to the rays, all the times in this plane are equal (Fig. 26–12). The rays must then come down to the mirror and proceed toward $P'$ in equal times. 
That is, we must find a curve which has the property that the sum of the distances $XX' + X'P'$ is a constant, no matter where $X$ is chosen. An easy way to find it is to extend the length of the line $XX'$ down to a plane $LL'$. Now if we arrange our curve so that $A'A'' = A'P'$, $B'B'' = B'P'$, $C'C'' = C'P'$, and so on, we will have our curve, because then of course, $AA' + A'P' = AA' + A'A''$ will be constant. Thus our curve is the locus of all points equidistant from a line and a point. Such a curve is called a parabola; the mirror is made in the shape of a parabola. The above examples illustrate the principle upon which such optical devices can be designed. The exact curves can be calculated using the principle that, to focus perfectly, the travel times must be exactly equal for all light rays, as well as being less than for any other nearby path. We shall discuss these focusing optical devices further in the next chapter; let us now discuss the further development of the theory. When a new theoretical principle is developed, such as the principle of least time, our first inclination might be to say, “Well, that is very pretty; it is delightful; but the question is, does it help at all in understanding the physics?” Someone may say, “Yes, look at how many things we can now understand!” Another says, “Very well, but I can understand mirrors, too. I need a curve such that every tangent plane makes equal angles with the two rays. I can figure out a lens, too, because every ray that comes to it is bent through an angle given by Snell’s law.” Evidently the statement of least time and the statement that angles are equal on reflection, and that the sines of the angles are proportional on refraction, are the same. So is it merely a philosophical question, or one of beauty? There can be arguments on both sides. However, the importance of a powerful principle is that it predicts new things. 
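The defining property of the parabola invoked for the telescope mirror, equal distances to the focus and to the directrix line $LL'$, is easy to confirm numerically. The focal length below is an arbitrary assumption:

```python
# Every point of the parabola y = x^2 / (4f) is equidistant from the
# focus (0, f) and from the directrix line y = -f; f = 2.0 is assumed.
import math

f = 2.0
for x in [-3.0, -1.0, 0.0, 0.5, 2.0, 4.0]:
    y = x * x / (4 * f)
    to_focus = math.hypot(x, y - f)   # distance to the focus
    to_directrix = y + f              # distance to the line y = -f
    assert abs(to_focus - to_directrix) < 1e-12
```

Since all the sample points pass, the equal-time construction described above does indeed produce the curve $y = x^2/(4f)$.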
It is easy to show that there are a number of new things predicted by Fermat’s principle. First, suppose that there are three media, glass, water, and air, and we perform a refraction experiment and measure the index $n$ for one medium against another. Let us call $n_{12}$ the index of air ($1$) against water ($2$); $n_{13}$ the index of air ($1$) against glass ($3$). If we measured water against glass, we should find another index, which we shall call $n_{23}$. But there is no a priori reason why there should be any connection between $n_{12}$, $n_{13}$, and $n_{23}$. On the other hand, according to the idea of least time, there is a definite relationship. The index $n_{12}$ is the ratio of two things, the speed in air to the speed in water; $n_{13}$ is the ratio of the speed in air to the speed in glass; $n_{23}$ is the ratio of the speed in water to the speed in glass. Therefore we cancel out the air, and get \begin{equation} \label{Eq:I:26:5} n_{23}=\frac{v_2}{v_3}=\frac{v_1/v_3}{v_1/v_2}=\frac{n_{13}}{n_{12}}. \end{equation} In other words, we predict that the index for a new pair of materials can be obtained from the indexes of the individual materials, both against air or against vacuum. So if we measure the speed of light in all materials, and from this get a single number for each material, namely its index relative to vacuum, called $n_i$ ($n_1$ is the speed in vacuum relative to the speed in air, etc.), then our formula is easy. The index for any two materials $i$ and $j$ is \begin{equation} \label{Eq:I:26:6} n_{ij}=\frac{v_i}{v_j}=\frac{n_j}{n_i}. \end{equation} Using only Snell’s law, there is no basis for a prediction of this kind. But of course this prediction works. The relation (26.5) was known very early, and was a very strong argument for the principle of least time. Another argument for the principle of least time, another prediction, is that if we measure the speed of light in water, it will be lower than in air. 
This is a prediction of a completely different type. It is a brilliant prediction, because all we have so far measured are angles; here we have a theoretical prediction which is quite different from the observations from which Fermat deduced the idea of least time. It turns out, in fact, that the speed in water is slower than the speed in air, by just the proportion that is needed to get the right index!
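The relation (26.5) is easy to check with numbers. A minimal sketch, using made-up speeds for the three media (only their ratios matter):

```python
# Made-up speeds of light in air (1), water (2), and glass (3); only ratios matter.
v1, v2, v3 = 1.0, 0.75, 0.66

n12 = v1 / v2   # index of air against water
n13 = v1 / v3   # index of air against glass
n23 = v2 / v3   # index of water against glass, measured independently

# Eq. (26.5): the air cancels out of the prediction.
assert abs(n23 - n13 / n12) < 1e-12

# Eq. (26.6), with each index taken relative to a common reference, n_i = c / v_i:
c = 1.2         # any common reference speed will do
n2, n3 = c / v2, c / v3
assert abs(n23 - n3 / n2) < 1e-12
```

Whatever speeds are chosen, the water–glass index comes out as the quotient of the two indices measured against air, which is the content of the prediction.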
Optics: The Principle of Least Time

26–5 A more precise statement of Fermat’s principle
Actually, we must state the principle of least time a little more accurately. It was not stated correctly above. It is incorrectly called the principle of least time and we have gone along with the incorrect description for convenience, but we must now see what the correct statement is. Suppose we had a mirror as in Fig. 26–3. What makes the light think it has to go to the mirror? The path of least time is clearly $AB$. So some people might say, “Sometimes it is a maximum time.” It is not a maximum time, because certainly a curved path would take a still longer time! The correct statement is the following: a ray going in a certain particular path has the property that if we make a small change (say a one percent shift) in the ray in any manner whatever, say in the location at which it comes to the mirror, or the shape of the curve, or anything, there will be no first-order change in the time; there will be only a second-order change in the time. In other words, the principle is that light takes a path such that there are many other paths nearby which take almost exactly the same time. The following is another difficulty with the principle of least time, and one which people who do not like this kind of a theory could never stomach. With Snell’s theory we can “understand” light. Light goes along, it sees a surface, it bends because it does something at the surface. The idea of causality, that it goes from one point to another, and another, and so on, is easy to understand. But the principle of least time is a completely different philosophical principle about the way nature works. Instead of saying it is a causal thing, that when we do one thing, something else happens, and so on, it says this: we set up the situation, and light decides which is the shortest time, or the extreme one, and chooses that path. But what does it do, how does it find out? Does it smell the nearby paths, and check them against each other? 
The answer is, yes, it does, in a way. That is the feature which is, of course, not known in geometrical optics, and which is involved in the idea of wavelength; the wavelength tells us approximately how far away the light must “smell” the path in order to check it. It is hard to demonstrate this fact on a large scale with light, because the wavelengths are so terribly short. But with radiowaves, say $3$-cm waves, the distances over which the radiowaves are checking are larger. If we have a source of radiowaves, a detector, and a slit, as in Fig. 26–13, the rays of course go from $S$ to $D$ because it is a straight line, and if we close down the slit it is all right—they still go. But now if we move the detector aside to $D'$, the waves will not go through the wide slit from $S$ to $D'$, because they check several paths nearby, and say, “No, my friend, those all correspond to different times.” On the other hand, if we prevent the radiation from checking the paths by closing the slit down to a very narrow crack, then there is but one path available, and the radiation takes it! With a narrow slit, more radiation reaches $D'$ than reaches it with a wide slit! One can do the same thing with light, but it is hard to demonstrate on a large scale. The effect can be seen under the following simple conditions. Find a small, bright light, say an unfrosted bulb in a street light far away or the reflection of the sun in a curved automobile bumper. Then put two fingers in front of one eye, so as to look through the crack, and squeeze the light to zero very gently. You will see that the image of the light, which was a little dot before, becomes quite elongated, and even stretches into a long line. The reason is that the fingers are very close together, and the light which is supposed to come in a straight line is spread out at an angle, so that when it comes into the eye it comes in from several directions. 
Also you will notice, if you are very careful, side maxima, a lot of fringes along the edges too. Furthermore, the whole thing is colored. All of this will be explained in due time, but for the present it is a demonstration that light does not always go in straight lines, and it is one that is very easily performed.
26–6 How it works
Finally, we give a very crude view of what actually happens, how the whole thing really works, from what we now believe is the correct, quantum-dynamically accurate viewpoint, but of course only qualitatively described. In following the light from $A$ to $B$ in Fig. 26–3, we find that the light does not seem to be in the form of waves at all. Instead the rays seem to be made up of photons, and they actually produce clicks in a photon counter, if we are using one. The brightness of the light is proportional to the average number of photons that come in per second, and what we calculate is the chance that a photon gets from $A$ to $B$, say by hitting the mirror. The law for that chance is the following very strange one. Take any path and find the time for that path; then make a complex number, or draw a little complex vector, $\rho e^{i\theta}$, whose angle $\theta$ is proportional to the time. The number of turns per second is the frequency of the light. Now take another path; it has, for instance, a different time, so the vector for it is turned through a different angle—the angle being always proportional to the time. Take all the available paths and add on a little vector for each one; then the answer is that the chance of arrival of the photon is proportional to the square of the length of the final vector, from the beginning to the end! Now let us show how this implies the principle of least time for a mirror. We consider all rays, all possible paths $ADB$, $AEB$, $ACB$, etc., in Fig. 26–3. The path $ADB$ makes a certain small contribution, but the next path, $AEB$, takes a quite different time, so its angle $\theta$ is quite different. Let us say that point $C$ corresponds to minimum time, where, if we change the path, the time does not change. Far from $C$ the times change rapidly as we change the path, but they begin to change less and less as we get near point $C$ (Fig. 26–14). 
So the arrows which we have to add are coming almost exactly at the same angle for a while near $C$, and then gradually the time begins to increase again, and the phases go around the other way, and so on. Eventually, we have quite a tight knot. The total probability is the distance from one end to the other, squared. Almost all of that accumulated probability occurs in the region where all the arrows are in the same direction (or in the same phase). All the contributions from the paths which have very different times as we change the path, cancel themselves out by pointing in different directions. That is why, if we hide the extreme parts of the mirror, it still reflects almost exactly the same, because all we did was to take out a piece of the diagram inside the spiral ends, and that makes only a very small change in the light. So this is the relationship between the ultimate picture of photons with a probability of arrival depending on an accumulation of arrows, and the principle of least time.
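The arrow-adding rule described above can be imitated numerically. The sketch below uses an invented geometry and wavelength (a source and detector above a flat mirror lying along $y=0$): it adds one little arrow per reflection point and checks that hiding the extreme parts of the mirror barely changes the total.

```python
import cmath, math

# Source A and detector B above a flat mirror lying along y = 0 (arbitrary units).
A, B = (-5.0, 3.0), (5.0, 3.0)
wavelength = 0.05
k = 2 * math.pi / wavelength      # angle turned by the little arrow per unit path length

def amplitude(x_lo, x_hi, samples_per_unit=2000):
    """Add one little arrow exp(i*k*path) for each reflection point x on the mirror."""
    n = int((x_hi - x_lo) * samples_per_unit) + 1
    dx = (x_hi - x_lo) / (n - 1)
    total = 0j
    for i in range(n):
        x = x_lo + i * dx
        path = math.hypot(x - A[0], A[1]) + math.hypot(B[0] - x, B[1])
        total += cmath.exp(1j * k * path) * dx
    return total

full = abs(amplitude(-20.0, 20.0))   # the whole mirror
center = abs(amplitude(-2.0, 2.0))   # only the part near the least-time point, x = 0

# Hiding the extreme parts of the mirror changes the result very little:
assert abs(full - center) / center < 0.2
```

Almost all of the accumulated length of the final vector comes from the region near the least-time point, where neighboring arrows point the same way; the outer parts of the mirror only add the tiny curled-up spiral ends.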
Geometrical Optics

27–1 Introduction
In this chapter we shall discuss some elementary applications of the ideas of the previous chapter to a number of practical devices, using the approximation called geometrical optics. This is a most useful approximation in the practical design of many optical systems and instruments. Geometrical optics is either very simple or else it is very complicated. By that we mean that we can either study it only superficially, so that we can design instruments roughly, using rules that are so simple that we hardly need deal with them here at all, since they are practically of high school level, or else, if we want to know about the small errors in lenses and similar details, the subject gets so complicated that it is too advanced to discuss here! If one has an actual, detailed problem in lens design, including analysis of aberrations, then he is advised to read about the subject or else simply to trace the rays through the various surfaces (which is what the book tells how to do), using the law of refraction from one side to the other, and to find out where they come out and see if they form a satisfactory image. People have said that this is too tedious, but today, with computing machines, it is the right way to do it. One can set up the problem and make the calculation for one ray after another very easily. So the subject is really ultimately quite simple, and involves no new principles. Furthermore, it turns out that the rules of either elementary or advanced optics are seldom characteristic of other fields, so that there is no special reason to follow the subject very far, with one important exception. The most advanced and abstract theory of geometrical optics was worked out by Hamilton, and it turns out that this has very important applications in mechanics. It is actually even more important in mechanics than it is in optics, and so we leave Hamilton’s theory for the subject of advanced analytical mechanics, which is studied in the senior year or in graduate school. 
So, appreciating that geometrical optics contributes very little, except for its own sake, we now go on to discuss the elementary properties of simple optical systems on the basis of the principles outlined in the last chapter. In order to go on, we must have one geometrical formula, which is the following: if we have a triangle with a small altitude $h$ and a long base $d$, then the diagonal $s$ (we are going to need it to find the difference in time between two different routes) is longer than the base (Fig. 27–1). How much longer? The difference $\Delta = s - d$ can be found in a number of ways. One way is this. We see that $s^2 - d^2 = h^2$, or $(s - d)(s + d) = h^2$. But $s - d = \Delta$, and $s + d \approx 2s$. Thus \begin{equation} \label{Eq:I:27:1} \Delta \approx h^2/2s. \end{equation}This is all the geometry we need to discuss the formation of images by curved surfaces!
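Formula (27.1) can be checked against the exact diagonal; a quick sketch with an arbitrarily chosen long, thin triangle:

```python
import math

# A long, thin triangle: base d, small altitude h, diagonal s (arbitrary numbers).
d, h = 100.0, 1.0
s = math.hypot(d, h)

delta_exact = s - d                 # the true excess of the diagonal over the base
delta_approx = h * h / (2 * s)      # the approximation of Eq. (27.1)

# For h much smaller than d, the two agree to high accuracy:
assert abs(delta_exact - delta_approx) / delta_exact < 1e-4
```

The agreement improves rapidly as $h/d$ shrinks, which is why this one formula suffices for all the paraxial arguments that follow.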
27–2 The focal length of a spherical surface
The first and simplest situation to discuss is a single refracting surface, separating two media with different indices of refraction (Fig. 27–2). We leave the case of arbitrary indices of refraction to the student, because ideas are always the most important thing, not the specific situation, and the problem is easy enough to do in any case. So we shall suppose that, on the left, the speed is $1$ and on the right it is $1/n$, where $n$ is the index of refraction. The light travels more slowly in the glass by a factor $n$. Now suppose that we have a point at $O$, at a distance $s$ from the front surface of the glass, and another point $O'$ at a distance $s'$ inside the glass, and we desire to arrange the curved surface in such a manner that every ray from $O$ which hits the surface, at any point $P$, will be bent so as to proceed toward the point $O'$. For that to be true, we have to shape the surface in such a way that the time it takes for the light to go from $O$ to $P$, that is, the distance $OP$ divided by the speed of light (the speed here is unity), plus $n \cdot O'P$, which is the time it takes to go from $P$ to $O'$, is equal to a constant independent of the point $P$. This condition supplies us with an equation for determining the surface. The answer is that the surface is a very complicated fourth-degree curve, and the student may entertain himself by trying to calculate it by analytic geometry. It is simpler to try a special case that corresponds to $s \to \infty$, because then the curve is a second-degree curve and is more recognizable. It is interesting to compare this curve with the parabolic curve we found for a focusing mirror when the light is coming from infinity. So the proper surface cannot easily be made—to focus the light from one point to another requires a rather complicated surface. It turns out in practice that we do not try to make such complicated surfaces ordinarily, but instead we make a compromise. 
Instead of trying to get all the rays to come to a focus, we arrange it so that only the rays fairly close to the axis $OO'$ come to a focus. The farther ones may deviate if they want to, unfortunately, because the ideal surface is complicated, and we use instead a spherical surface with the right curvature at the axis. It is so much easier to fabricate a sphere than other surfaces that it is profitable for us to find out what happens to rays striking a spherical surface, supposing that only the rays near the axis are going to be focused perfectly. Those rays which are near the axis are sometimes called paraxial rays, and what we are analyzing are the conditions for the focusing of paraxial rays. We shall discuss later the errors that are introduced by the fact that all rays are not always close to the axis. Thus, supposing $P$ is close to the axis, we drop a perpendicular $PQ$ such that the height $PQ$ is $h$. For a moment, we imagine that the surface is a plane passing through $P$. In that case, the time needed to go from $O$ to $P$ would exceed the time from $O$ to $Q$, and also, the time from $P$ to $O'$ would exceed the time from $Q$ to $O'$. But that is why the glass must be curved, because the total excess time must be compensated by the delay in passing from $V$ to $Q$! Now the excess time along route $OP$ is $h^2/2s$, and the excess time on the other route is $nh^2/2s'$. This excess time, which must be matched by the delay in going along $VQ$, differs from what it would have been in a vacuum, because there is a medium present. In other words, the time to go from $V$ to $Q$ is not as if it were straight in the air, but it is slower by the factor $n$, so that the excess delay in this distance is then $(n - 1)VQ$. And now, how large is $VQ$? If the point $C$ is the center of the sphere and if its radius is $R$, we see by the same formula that the distance $VQ$ is equal to $h^2/2R$. 
Therefore we discover that the law that connects the distances $s$ and $s'$, and that gives us the radius of curvature $R$ of the surface that we need, is \begin{equation} \label{Eq:I:27:2} (h^2/2s) + (nh^2/2s') = (n-1)h^2/2R \end{equation} or \begin{equation} \label{Eq:I:27:3} (1/s)+(n/s')=(n-1)/R. \end{equation} If we have a position $O$ and another position $O'$, and want to focus light from $O$ to $O'$, then we can calculate the required radius of curvature $R$ of the surface by this formula. Now it turns out, interestingly, that the same lens, with the same curvature $R$, will focus for other distances, namely, for any pair of distances such that the sum of the two reciprocals, one multiplied by $n$, is a constant. Thus a given lens will (so long as we limit ourselves to paraxial rays) focus not only from $O$ to $O'$, but between an infinite number of other pairs of points, so long as those pairs of points bear the relationship that $1/s + n/s'$ is a constant, characteristic of the lens. In particular, an interesting case is that in which $s \to \infty$. We can see from the formula that as one $s$ increases, the other decreases. In other words, if point $O$ goes out, point $O'$ comes in, and vice versa. As point $O$ goes toward infinity, point $O'$ keeps moving in until it reaches a certain distance, called the focal length $f'$, inside the material. If parallel rays come in, they will meet the axis at a distance $f'$. Likewise, we could imagine it the other way. (Remember the reciprocity rule: if light will go from $O$ to $O'$, of course it will also go from $O'$ to $O$.) Therefore, if we had a light source inside the glass, we might want to know where the focus is. In particular, if the light in the glass were at infinity (same problem) where would it come to a focus outside? This distance is called $f$. Of course, we can also put it the other way. 
If we had a light source at $f$ and the light went through the surface, then it would go out as a parallel beam. We can easily find out what $f$ and $f'$ are: \begin{alignat}{3} \label{Eq:I:27:4} &n/f'&&=(n-1)/R\quad\text{or}\quad f'&&=Rn/(n-1),\\[2ex] \label{Eq:I:27:5} &1/f&&=(n-1)/R\quad\text{or}\quad f&&=R/(n-1). \end{alignat} We see an interesting thing: if we divide each focal length by the corresponding index of refraction we get the same result! This theorem, in fact, is general. It is true of any system of lenses, no matter how complicated, so it is interesting to remember. We did not prove here that it is general—we merely noted it for a single surface, but it happens to be true in general that the two focal lengths of a system are related in this way. Sometimes Eq. (27.3) is written in the form \begin{equation} \label{Eq:I:27:6} 1/s+n/s'=1/f. \end{equation} This is more useful than (27.3) because we can measure $f$ more easily than we can measure the curvature and index of refraction of the lens: if we are not interested in designing a lens or in knowing how it got that way, but simply lift it off a shelf, the interesting quantity is $f$, not the $n$ and the $1$ and the $R$! Now an interesting situation occurs if $s$ becomes less than $f$. What happens then? If $s < f$, then $(1/s) > (1/f)$, and therefore $s'$ is negative; our equation says that the light will focus only with a negative value of $s'$, whatever that means! It does mean something very interesting and very definite. It is still a useful formula, in other words, even when the numbers are negative. What it means is shown in Fig. 27–3. If we draw the rays which are diverging from $O$, they will be bent, it is true, at the surface, and they will not come to a focus, because $O$ is so close in that they are “beyond parallel.” However, they diverge as if they had come from a point $O'$ outside the glass. This is an apparent image, sometimes called a virtual image. The image $O'$ in Fig. 
27–2 is called a real image. If the light really comes to a point, it is a real image. But if the light appears to be coming from a point, a fictitious point different from the original point, it is a virtual image. So when $s'$ comes out negative, it means that $O'$ is on the other side of the surface, and everything is all right. Now consider the interesting case where $R$ is equal to infinity; then we have $(1/s) + (n/s') = 0$. In other words, $s' = -ns$, which means that if we look from a dense medium into a rare medium and see a point in the rare medium, it appears to be deeper by a factor $n$. Likewise, we can use the same equation backwards, so that if we look into a plane surface at an object that is at a certain distance inside the dense medium, it will appear as though the light is coming from not as far back (Fig. 27–4). When we look at the bottom of a swimming pool from above, it does not look as deep as it really is, by a factor $3/4$, which is the reciprocal of the index of refraction of water. We could go on, of course, to discuss the spherical mirror. But if one appreciates the ideas involved, he should be able to work it out for himself. Therefore we leave it to the student to work out the formula for the spherical mirror, but we mention that it is well to adopt certain conventions concerning the distances involved: In Fig. 27–2, for example, $s$, $s'$, and $R$ are all positive; in Fig. 27–3, $s$ and $R$ are positive, but $s'$ is negative. If we had used a concave surface, our formula (27.3) would still give the correct result if we merely make $R$ a negative quantity. In working out the corresponding formula for a mirror, using the above conventions, you will find that if you put $n=-1$ throughout the formula (27.3) (as though the material behind the mirror had an index $-1$), the right formula for a mirror results! 
Although the derivation of formula (27.3) is simple and elegant, using least time, one can of course work out the same formula using Snell’s law, remembering that the angles are so small that the sines of angles can be replaced by the angles themselves.
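A few lines of arithmetic confirm these statements about a single surface; the index and radius below are illustrative values, not taken from any particular glass:

```python
n = 1.5     # illustrative index of the glass
R = 10.0    # illustrative radius of curvature (positive, as in Fig. 27-2)

def image_distance(s):
    """Solve Eq. (27.3), 1/s + n/s' = (n - 1)/R, for s'."""
    return n / ((n - 1) / R - 1.0 / s)

# The two focal lengths, Eqs. (27.4) and (27.5):
f_prime = R * n / (n - 1)   # parallel rays from outside meet the axis here, in the glass
f = R / (n - 1)             # a source placed here emerges as a parallel beam

# Dividing each focal length by the index on its side gives the same number:
assert abs(f_prime / n - f / 1.0) < 1e-12

# A very distant source focuses essentially at f':
assert abs(image_distance(1e9) - f_prime) < 1e-4

# A plane surface, R -> infinity: Eq. (27.7) with n1 = 4/3 (water), n2 = 1 (air)
# gives n1/s + n2/s' = 0, so s' = -(n2/n1)s; the pool bottom at depth s appears
# to be at only 3/4 of that depth.
s = 2.0
s_prime = -(1.0 / (4.0 / 3.0)) * s
assert abs(-s_prime - 0.75 * s) < 1e-12
```

The negative $s'$ in the last check is exactly the virtual image of the text: the light only appears to come from a point on the other side of the surface.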
27–3 The focal length of a lens
Now we go on to consider another situation, a very practical one. Most of the lenses that we use have two surfaces, not just one. How does this affect matters? Suppose that we have two surfaces of different curvature, with glass filling the space between them (Fig. 27–5). We want to study the problem of focusing from a point $O$ to an alternate point $O'$. How can we do that? The answer is this: First, use formula (27.3) for the first surface, forgetting about the second surface. This will tell us that the light which was diverging from $O$ will appear to be converging or diverging, depending on the sign, from some other point, say $O'$. Now we consider a new problem. We have a different surface, between glass and air, in which rays are converging toward a certain point $O'$. Where will they actually converge? We use the same formula again! We find that they converge at $O''$. Thus, if necessary, we can go through $75$ surfaces by just using the same formula in succession, from one to the next! There are some rather high-class formulas that would save us considerable energy in the few times in our lives that we might have to chase the light through five surfaces, but it is easier just to chase it through five surfaces when the problem arises than it is to memorize a lot of formulas, because it may be we will never have to chase it through any surfaces at all! In any case, the principle is that when we go through one surface we find a new position, a new focal point, and then take that point as the starting point for the next surface, and so on. In order to actually do this, since on the second surface we are going from $n$ to $1$ rather than from $1$ to $n$, and since in many systems there is more than one kind of glass, so that there are indices $n_1$, $n_2$, …, we really need a generalization of formula (27.3) for a case where there are two different indices, $n_1$ and $n_2$, rather than only $n$. 
Then it is not difficult to prove that the general form of (27.3) is \begin{equation} \label{Eq:I:27:7} (n_1/s)+(n_2/s')=(n_2-n_1)/R. \end{equation} Particularly simple is the special case in which the two surfaces are very close together—so close that we may ignore small errors due to the thickness. If we draw the lens as shown in Fig. 27–6, we may ask this question: How must the lens be built so as to focus light from $O$ to $O'$? Suppose the light comes exactly to the edge of the lens, at point $P$. Then the excess time in going from $O$ to $O'$ is $(n_1h^2/2s) + (n_1h^2/2s')$, ignoring for a moment the presence of the thickness $T$ of glass of index $n_2$. Now, to make the time for the direct path equal to that for the path $OPO'$, we have to use a piece of glass whose thickness $T$ at the center is such that the delay introduced in going through this thickness is enough to compensate for the excess time above. Therefore the thickness of the lens at the center must be given by the relationship \begin{equation} \label{Eq:I:27:8} (n_1h^2/2s)+(n_1h^2/2s')=(n_2-n_1)T. \end{equation} We can also express $T$ in terms of the radii $R_1$ and $R_2$ of the two surfaces. Paying attention to our convention (3), we thus find, for $R_1 < R_2$ (a convex lens), \begin{equation} \label{Eq:I:27:9} T = (h^2/2R_1) - (h^2/2R_2). \end{equation} Therefore, we finally get \begin{equation} \label{Eq:I:27:10} (n_1/s)+(n_1/s') = (n_2-n_1)(1/R_1-1/R_2). \end{equation} Now we note again that if one of the points is at infinity, the other will be at a point which we will call the focal length $f$. The focal length $f$ is given by \begin{equation} \label{Eq:I:27:11} 1/f = (n-1)(1/R_1-1/R_2). \end{equation} where $n= n_2/n_1$. Now, if we take the opposite case, where $s$ goes to infinity, we see that $s'$ is at the focal length $f'$. This time the focal lengths are equal. 
(This is another special case of the general rule that the ratio of the two focal lengths is the ratio of the indices of refraction in the two media in which the rays focus. In this particular optical system, the initial and final indices are the same, so the two focal lengths are equal.) Forgetting for a moment about the actual formula for the focal length, if we bought a lens that somebody designed with certain radii of curvature and a certain index, we could measure the focal length, say, by seeing where a point at infinity focuses. Once we had the focal length, it would be better to write our equation in terms of the focal length directly, and the formula then is \begin{equation} \label{Eq:I:27:12} (1/s) + (1/s') = 1/f. \end{equation} Now let us see how the formula works and what it implies in different circumstances. First, it implies that if $s$ or $s'$ is infinite the other one is $f$. That means that parallel light focuses at a distance $f$, and this in effect defines $f$. Another interesting thing it says is that both points move in the same direction. If one moves to the right, the other does also. Another thing it says is that $s$ and $s'$ are equal if they are both equal to $2f$. In other words, if we want a symmetrical situation, we find that they will both focus at a distance $2f$.
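As a check on how the formula works, here is a short sketch with an illustrative symmetric lens (the radii and index are made up):

```python
n = 1.5
R1, R2 = 10.0, -10.0     # a symmetric double-convex lens, with the text's sign convention

f = 1.0 / ((n - 1) * (1.0 / R1 - 1.0 / R2))   # Eq. (27.11); f = 10 here

def image(s):
    """Solve the thin-lens formula (27.12), 1/s + 1/s' = 1/f, for s'."""
    return 1.0 / (1.0 / f - 1.0 / s)

# A very distant object focuses essentially at the focal length:
assert abs(image(1e9) - f) < 1e-4
# The symmetric case: an object at 2f images at 2f on the other side:
assert abs(image(2 * f) - 2 * f) < 1e-9
# Both points move the same way: moving the object to the right (s: 20 -> 15)
# also moves the image to the right (s': 20 -> 30).
assert image(15.0) > image(20.0)
```

Each assertion is one of the three implications listed in the paragraph above, worked out for this particular lens.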
27–4 Magnification
So far we have discussed the focusing action only for points on the axis. Now let us discuss also the imaging of objects not exactly on the axis, but a little bit off, so that we can understand the properties of magnification. When we set up a lens so as to focus light from a small filament onto a “point” on a screen, we notice that on the screen we get a “picture” of the same filament, except of a larger or smaller size than the true filament. This must mean that the light comes to a focus from each point of the filament. In order to understand this a little better, let us analyze the thin lens system shown schematically in Fig. 27–7. We know the following facts: (1) any ray that enters the lens parallel to the axis comes out on the other side through the focus; (2) any ray that passes through the focus on its way in comes out parallel to the axis on the other side. This is all we need to establish formula (27.12) by geometry, as follows: Suppose we have an object at some distance $x$ from the focus; let the height of the object be $y$. Then we know that one of the rays, namely $PQ$, will be bent so as to pass through the focus $R$ on the other side. Now if the lens will focus point $P$ at all, we can find out where by finding out where just one other ray goes, because the new focus will be where the two intersect again. We need only use our ingenuity to find the exact direction of one other ray. But we remember that a parallel ray goes through the focus and vice versa: a ray which goes through the focus will come out parallel! So we draw ray $PT$ through $U$. (It is true that the actual rays which are doing the focusing may be much more limited than the two we have drawn, but they are harder to figure, so we make believe that we can make this ray.) Since it would come out parallel, we draw $TS$ parallel to $XW$. The intersection $S$ is the point we need. This will determine the correct place and the correct height. Let us call the height $y'$ and the distance from the focus, $x'$. Now we may derive a lens formula. Using the similar triangles $PVU$ and $TXU$, we find \begin{equation} \label{Eq:I:27:13} \frac{y'}{f}=\frac{y}{x}. 
\end{equation} Similarly, from triangles $SWR$ and $QXR$, we get \begin{equation} \label{Eq:I:27:14} \frac{y'}{x'}=\frac{y}{f}. \end{equation} Solving each for $y'/y$, we find that \begin{equation} \label{Eq:I:27:15} \frac{y'}{y}=\frac{x'}{f}=\frac{f}{x}. \end{equation} Equation (27.15) is the famous lens formula; in it is everything we need to know about lenses: It tells us the magnification, $y'/y$, in terms of the distances and the focal lengths. It also connects the two distances $x$ and $x'$ with $f$: \begin{equation} \label{Eq:I:27:16} xx'=f^2, \end{equation} which is a much neater form to work with than Eq. (27.12). We leave it to the student to demonstrate that if we call $s = x + f$ and $s' = x' + f$, Eq. (27.12) is the same as Eq. (27.16).
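The demonstration left to the student can at least be spot-checked numerically (this is not the algebraic demonstration itself); a sketch with an arbitrary focal length:

```python
f = 10.0   # any focal length will do for this check

for x in (2.0, 5.0, 25.0):           # object distances measured from the focus
    x_prime = f * f / x              # Newton's form, Eq. (27.16)
    s, s_prime = x + f, x_prime + f  # the same distances measured from the lens
    # Substituting s = x + f and s' = x' + f into (27.12) reproduces it:
    assert abs(1.0 / s + 1.0 / s_prime - 1.0 / f) < 1e-12
    # And the two expressions for the magnification in (27.15) agree:
    assert abs(x_prime / f - f / x) < 1e-12
```

Whatever object distance is chosen, the pair $(s, s')$ built from Newton's form satisfies the thin-lens formula, and the two magnification ratios coincide.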
27–5 Compound lenses
Without actually deriving it, we shall briefly describe the general result when we have a number of lenses. If we have a system of several lenses, how can we possibly analyze it? That is easy. We start with some object and calculate where its image is for the first lens, using formula (27.16) or (27.12) or any other equivalent formula, or by drawing diagrams. So we find an image. Then we treat this image as the source for the next lens, and use the second lens with whatever its focal length is to again find an image. We simply chase the thing through the succession of lenses. That is all there is to it. It involves nothing new in principle, so we shall not go into it. However, there is a very interesting net result of the effects of any sequence of lenses on light that starts and ends up in the same medium, say air. Any optical instrument—a telescope or a microscope with any number of lenses and mirrors—has the following property: There exist two planes, called the principal planes of the system (these planes are often fairly close to the first surface of the first lens and the last surface of the last lens), which have the following properties: (1) If light comes into the system parallel from the first side, it comes out at a certain focus, at a distance from the second principal plane equal to the focal length, just as though the system were a thin lens situated at this plane. (2) If parallel light comes in the other way, it comes to a focus at the same distance $f$ from the first principal plane, again as if a thin lens were situated there. (See Fig. 27–8.) Of course, if we measure the distances $x$ and $x'$, and $y$ and $y'$ as before, the formula (27.16) that we have written for the thin lens is absolutely general, provided that we measure the focal length from the principal planes and not from the center of the lens. It so happens that for a thin lens the principal planes are coincident. 
It is just as though we could take a thin lens, slice it down the middle, and separate it, and not notice that it was separated. Every ray that comes in pops out immediately on the other side of the second plane from the same point as it went into the first plane! The principal planes and the focal length may be found either by experiment or by calculation, and then the whole set of properties of the optical system is described. It is very interesting that the result is not complicated when we are all finished with such a big, complicated optical system.
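The image-chasing procedure described above can be sketched in a few lines of Python. The function name and the sign convention (real object and image distances positive, measured from each lens) are my own choices; a virtual intermediate object simply comes out as a negative distance and the same formula handles it:

```python
def chase_images(s, lenses):
    """Chase an object through a train of thin lenses.

    s      : object distance from the first lens (positive = real object)
    lenses : list of (f, gap) pairs -- focal length, and distance to the
             next lens (the gap of the last lens is unused)

    Each lens obeys the Gaussian form 1/s + 1/s' = 1/f, and the image of
    one lens is treated as the source for the next, exactly as described.
    Returns the image distance from the last lens.
    """
    for i, (f, gap) in enumerate(lenses):
        s_img = 1.0 / (1.0/f - 1.0/s)   # image distance for this lens
        if i < len(lenses) - 1:
            s = gap - s_img             # object distance for the next lens
    return s_img
```

For a single lens with $f = 10$ and an object $30$ units away, the image lands $15$ units behind the lens; placing a second lens closer than that intermediate image makes it a virtual object, and the same two lines of arithmetic still apply.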
1
27
Geometrical Optics
6
Aberrations
Before we get too excited about how marvelous lenses are, we must hasten to add that there are also serious limitations, because of the fact that we have limited ourselves, strictly speaking, to paraxial rays, the rays near the axis. A real lens having a finite size will, in general, exhibit aberrations. For example, a ray that is on the axis, of course, goes through the focus; a ray that is very close to the axis will still come to the focus very well. But as we go farther out, the ray begins to deviate from the focus, perhaps by falling short, and a ray striking near the top edge comes down and misses the focus by quite a wide margin. So, instead of getting a point image, we get a smear. This effect is called spherical aberration, because it is a property of the spherical surfaces we use in place of the right shape. This could be remedied, for any specific object distance, by re-forming the shape of the lens surface, or perhaps by using several lenses arranged so that the aberrations of the individual lenses tend to cancel each other. Lenses have another fault: light of different colors has different speeds, or different indices of refraction, in the glass, and therefore the focal length of a given lens is different for different colors. So if we image a white spot, the image will have colors, because when we focus for the red, the blue is out of focus, or vice versa. This property is called chromatic aberration. There are still other faults. If the object is off the axis, then the focus really isn’t perfect any more, when it gets far enough off the axis. The easiest way to verify this is to focus a lens and then tilt it so that the rays are coming in at a large angle from the axis. Then the image that is formed will usually be quite crude, and there may be no place where it focuses well. There are thus several kinds of errors in lenses that the optical designer tries to remedy by using many lenses to compensate each other’s errors. 
How careful do we have to be to eliminate aberrations? Is it possible to make an absolutely perfect optical system? Suppose we had built an optical system that is supposed to bring light exactly to a point. Now, arguing from the point of view of least time, can we find a condition on how perfect the system has to be? The system will have some kind of an entrance opening for the light. If we take the farthest ray from the axis that can come to the focus (if the system is perfect, of course), the times for all rays are exactly equal. But nothing is perfect, so the question is, how wrong can the time be for this ray and not be worth correcting any further? That depends on how perfect we want to make the image. But suppose we want to make the image as perfect as it possibly can be made. Then, of course, our impression is that we have to arrange that every ray takes as nearly the same time as possible. But it turns out that this is not true, that beyond a certain point we are trying to do something that is too fine, because the theory of geometrical optics does not work! Remember that the principle of least time is not an accurate formulation, unlike the principle of conservation of energy or the principle of conservation of momentum. The principle of least time is only an approximation, and it is interesting to know how much error can be allowed and still not make any apparent difference. The answer is that if we have arranged that between the maximal ray—the worst ray, the ray that is farthest out—and the central ray, the difference in time is less than about the period that corresponds to one oscillation of the light, then there is no use improving it any further. Light is an oscillatory thing with a definite frequency that is related to the wavelength, and if we have arranged that the time difference for different rays is less than about a period, there is no use going any further.
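To put the one-period criterion in numbers: for visible light the allowed time error is a couple of femtoseconds, which corresponds to a path error of about one wavelength. A quick sketch (the wavelength is an assumed illustrative value for green light):

```python
c = 3.0e8        # speed of light, m/s
lam = 500e-9     # green light, m (assumed for illustration)

nu = c / lam             # frequency of the light
period = 1.0 / nu        # one oscillation of the light, ~1.7e-15 s
assert 1.0e-15 < period < 2.0e-15

# A time error of one period is a path error of one wavelength, so there
# is no use matching optical path lengths better than about lam:
assert abs(c * period - lam) < 1e-20
```

So "not worth correcting any further" means, in practice, that path lengths through the system need only agree to within roughly one wavelength of the light.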
1
27
Geometrical Optics
7
Resolving power
Another interesting question—a very important technical question with all optical instruments—is how much resolving power they have. If we build a microscope, we want to see the objects that we are looking at. That means, for instance, that if we are looking at a bacterium with a spot on each end, we want to see that there are two dots when we magnify them. One might think that all we have to do is to get enough magnification—we can always add another lens, and we can always magnify again and again, and with the cleverness of designers, all the spherical aberrations and chromatic aberrations can be cancelled out, and there is no reason why we cannot keep on magnifying the image. So the limitations of a microscope are not that it is impossible to build a lens that magnifies more than $2000$ diameters. We can build a system of lenses that magnifies $10{,}000$ diameters, but we still could not see two points that are too close together because of the limitations of geometrical optics, because of the fact that least time is not precise. The rule that determines how far apart two points have to be so that at the image they appear as separate points can be stated in a very beautiful way associated with the time it takes for different rays. Suppose that we disregard the aberrations now, and imagine that for a particular point $P$ (Fig. 27–9) all the rays from object to image $T$ take exactly the same time. (It is not true, because it is not a perfect system, but that is another problem.) Now take another nearby point, $P'$, and ask whether its image will be distinct from $T$. In other words, can we make out the difference between them? Of course, according to geometrical optics, there should be two point images, but what we see may be rather smeared and we may not be able to make out that there are two points. 
The condition that the second point is focused in a distinctly different place from the first one is that the two times for the extreme rays $P'ST$ and $P'RT$ on each side of the big opening of the lenses to go from one end to the other, must not be equal from the two possible object points to a given image point. Why? Because, if the times were equal, of course both would focus at the same point. So the times are not going to be equal. But by how much do they have to differ so that we can say that both do not come to a common focus, so that we can distinguish the two image points? The general rule for the resolution of any optical instrument is this: two different point sources can be resolved only if one source is focused at such a point that the times for the maximal rays from the other source to reach that point, as compared with its own true image point, differ by more than one period. It is necessary that the difference in time between the top ray and the bottom ray to the wrong focus shall exceed a certain amount, namely, approximately the period of oscillation of the light: \begin{equation} \label{Eq:I:27:17} t_2-t_1 > 1/\nu, \end{equation} where $\nu$ is the frequency of the light (number of oscillations per second; also speed divided by wavelength). If the distance of separation of the two points is called $D$, and if the opening half-angle of the lens is called $\theta$, then one can demonstrate that (27.17) is exactly equivalent to the statement that $D$ must exceed $\lambda/2n\sin\theta$, where $n$ is the index of refraction at $P$ and $\lambda$ is the wavelength. The smallest things that we can see are therefore approximately the wavelength of light. A corresponding formula exists for telescopes, which tells us the smallest difference in angle between two stars that can just be distinguished.
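The resolution formula $D = \lambda/2n\sin\theta$ is easy to evaluate; here is a sketch, where the index of refraction and half-angle are typical assumed numbers for an oil-immersion microscope objective:

```python
import math

def resolution_limit(wavelength, n, theta):
    """Smallest resolvable separation, D = lambda / (2 n sin(theta)),
    equivalent to the one-period time criterion of Eq. (27.17)."""
    return wavelength / (2.0 * n * math.sin(theta))

# Green light through an oil-immersion objective (n ~ 1.5, half-angle ~ 64 deg):
D = resolution_limit(550e-9, 1.5, math.radians(64))

# The smallest things we can see are of the order of the wavelength:
assert 1.5e-7 < D < 3.0e-7   # about 0.2 micron
```

No amount of extra magnification changes this number; it is set by the wavelength and the opening angle, not by the lens count.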
1
28
Electromagnetic Radiation
1
Electromagnetism
The most dramatic moments in the development of physics are those in which great syntheses take place, where phenomena which previously had appeared to be different are suddenly discovered to be but different aspects of the same thing. The history of physics is the history of such syntheses, and the basis of the success of physical science is mainly that we are able to synthesize. Perhaps the most dramatic moment in the development of physics during the 19th century occurred to J. C. Maxwell one day in the 1860s, when he combined the laws of electricity and magnetism with the laws of the behavior of light. As a result, the properties of light were partly unravelled—that old and subtle stuff that is so important and mysterious that it was felt necessary to arrange a special creation for it when writing Genesis. Maxwell could say, when he was finished with his discovery, “Let there be electricity and magnetism, and there is light!” For this culminating moment there was a long preparation in the gradual discovery and unfolding of the laws of electricity and magnetism. This story we shall reserve for detailed study next year. However, the story is, briefly, as follows. The gradually discovered properties of electricity and magnetism, of electric forces of attraction and repulsion, and of magnetic forces, showed that although these forces were rather complex, they all fell off inversely as the square of the distance. We know, for example, that the simple Coulomb law for stationary charges is that the electric force field varies inversely as the square of the distance. As a consequence, for sufficiently great distances there is very little influence of one system of charges on another. Maxwell noted that the equations or the laws that had been discovered up to this time were mutually inconsistent when he tried to put them all together, and in order for the whole system to be consistent, he had to add another term to his equations. 
With this new term there came an amazing prediction, which was that a part of the electric and magnetic fields would fall off much more slowly with the distance than the inverse square, namely, inversely as the first power of the distance! And so he realized that electric currents in one place can affect other charges far away, and he predicted the basic effects with which we are familiar today—radio transmission, radar, and so on. It seems a miracle that someone talking in Europe can, with mere electrical influences, be heard thousands of miles away in Los Angeles. How is it possible? It is because the fields do not vary as the inverse square, but only inversely as the first power of the distance. Finally, then, even light itself was recognized to be electric and magnetic influences extending over vast distances, generated by an almost incredibly rapid oscillation of the electrons in the atoms. All these phenomena we summarize by the word radiation or, more specifically, electromagnetic radiation, there being one or two other kinds of radiation also. Almost always, radiation means electromagnetic radiation. And thus is the universe knit together. The atomic motions of a distant star still have sufficient influence at this great distance to set the electrons in our eye in motion, and so we know about the stars. If this law did not exist, we would all be literally in the dark about the exterior world! And the electric surgings in a galaxy five billion light years away—which is the farthest object we have found so far—can still influence in a significant and detectable way the currents in the great “dish” in front of a radio telescope. And so it is that we see the stars and the galaxies. This remarkable phenomenon is what we shall discuss in the present chapter. 
At the beginning of this course in physics we outlined a broad picture of the world, but we are now better prepared to understand some aspects of it, and so we shall now go over some parts of it again in greater detail. We begin by describing the position of physics at the end of the 19th century. All that was then known about the fundamental laws can be summarized as follows. First, there were laws of forces: one force was the law of gravitation, which we have written down several times; the force on an object of mass $m$, due to another of mass $M$, is given by \begin{equation} \label{Eq:I:28:1} \FLPF=GmM\FLPe_r/r^2, \end{equation} where $\FLPe_r$ is a unit vector directed from $m$ to $M$, and $r$ is the distance between them. Next, the laws of electricity and magnetism, as known at the end of the 19th century, are these: the electrical forces acting on a charge $q$ can be described by two fields, called $\FLPE$ and $\FLPB$, and the velocity $\FLPv$ of the charge $q$, by the equation \begin{equation} \label{Eq:I:28:2} \FLPF=q(\FLPE+\FLPv\times\FLPB). \end{equation} To complete this law, we have to say what the formulas for $\FLPE$ and $\FLPB$ are in a given circumstance: if a number of charges are present, $\FLPE$ and the $\FLPB$ are each the sum of contributions, one from each individual charge. So if we can find the $\FLPE$ and $\FLPB$ produced by a single charge, we need only to add all the effects from all the charges in the universe to get the total $\FLPE$ and $\FLPB$! This is the principle of superposition. What is the formula for the electric and magnetic field produced by one individual charge? It turns out that this is very complicated, and it takes a great deal of study and sophistication to appreciate it. But that is not the point. We write down the law now only to impress the reader with the beauty of nature, so to speak, i.e., that it is possible to summarize all the fundamental knowledge on one page, with notations that he is now familiar with. 
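Equation (28.2), together with the superposition of fields, is straightforward to render in code; a sketch using plain 3-tuples (the function names are mine):

```python
def cross(a, b):
    """Vector cross product a x b for 3-tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, E, B, v):
    """F = q(E + v x B), Eq. (28.2). E and B at the charge's position
    are themselves sums of contributions, one from each other charge."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

# A positive charge moving along x through a field B along z is pushed along -y:
F = lorentz_force(1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
assert F == (0.0, -1.0, 0.0)
```

The principle of superposition enters only through how $\FLPE$ and $\FLPB$ are assembled; the force law itself never changes.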
This law for the fields of an individual charge is complete and accurate, so far as we know (except for quantum mechanics) but it looks rather complicated. We shall not study all the pieces now; we only write it down to give an impression, to show that it can be written, and so that we can see ahead of time roughly what it looks like. As a matter of fact, the most useful way to write the correct laws of electricity and magnetism is not the way we shall now write them, but involves what are called field equations, which we shall learn about next year. But the mathematical notations for these are different and new, and so we write the law in an inconvenient form for calculation, but in notations that we now know. The electric field, $\FLPE$, is given by \begin{equation} \label{Eq:I:28:3} \FLPE=\frac{-q}{4\pi\epsO}\biggl[ \frac{\FLPe_{r'}}{r'^2}+\frac{r'}{c}\,\ddt{}{t}\biggl( \frac{\FLPe_{r'}}{r'^2}\biggr)+\frac{1}{c^2}\,\frac{d^2}{dt^2}\,\FLPe_{r'} \biggr]. \end{equation} What do the various terms tell us? Take the first term, $\FLPE=-q\FLPe_{r'}/4\pi\epsO r'^2$. That, of course, is Coulomb’s law, which we already know: $q$ is the charge that is producing the field; $\FLPe_{r'}$ is the unit vector in the direction from the point $P$ where $\FLPE$ is measured, $r'$ is the distance from $P$ to $q$. But, Coulomb’s law is wrong. The discoveries of the 19th century showed that influences cannot travel faster than a certain fundamental speed $c$, which we now call the speed of light. It is not correct that the first term is Coulomb’s law, not only because it is not possible to know where the charge is now and at what distance it is now, but also because the only thing that can affect the field at a given place and time is the behavior of the charges in the past. How far in the past? The time delay, or retarded time, so-called, is the time it takes, at speed $c$, to get from the charge to the field point $P$. The delay is $r'/c$. 
So to allow for this time delay, we put a little prime on $r$, meaning how far away it was when the information now arriving at $P$ left $q$. Just for a moment suppose that the charge carried a light, and that the light could only come to $P$ at the speed $c$. Then when we look at $q$, we would not see where it is now, of course, but where it was at some earlier time. What appears in our formula is the apparent direction $\FLPe_{r'}$—the direction it used to be—the so-called retarded direction—and at the retarded distance $r'$. That would be easy enough to understand, too, but it is also wrong. The whole thing is much more complicated. There are several more terms. The next term is as though nature were trying to allow for the fact that the effect is retarded, if we might put it very crudely. It suggests that we should calculate the delayed Coulomb field and add a correction to it, which is its rate of change times the time delay that we use. Nature seems to be attempting to guess what the field at the present time is going to be, by taking the rate of change and multiplying by the time that is delayed. But we are not yet through. There is a third term—the second derivative, with respect to $t$, of the unit vector in the direction of the charge. Now the formula is finished, and that is all there is to the electric field from an arbitrarily moving charge. The magnetic field is given by \begin{equation} \label{Eq:I:28:4} \FLPB=-\FLPe_{r'}\times\FLPE/c. \end{equation} We have written these down only for the purpose of showing the beauty of nature or, in a way, the power of mathematics. We do not pretend to understand why it is possible to write so much in such a small space, but (28.3) and (28.4) contain the machinery by which electric generators work, how light operates, all the phenomena of electricity and magnetism. 
Of course, to complete the story we also need to know something about the behavior of the materials involved—the properties of matter—which are not described properly by (28.3). To finish with our description of the world of the 19th century we must mention one other great synthesis which occurred in that century, one with which Maxwell had a great deal to do also, and that was the synthesis of the phenomena of heat and mechanics. We shall study that subject soon. What had to be added in the 20th century was that the dynamical laws of Newton were found to be all wrong, and quantum mechanics had to be introduced to correct them. Newton’s laws are approximately valid when the scale of things is sufficiently large. These quantum-mechanical laws, combined with the laws of electricity, have only recently been combined to form a set of laws called quantum electrodynamics. In addition, there were discovered a number of new phenomena, of which the first was radioactivity, discovered by Becquerel in 1896—he just sneaked it in under the 19th century. This phenomenon of radioactivity was followed up to produce our knowledge of nuclei and new kinds of forces that are not gravitational and not electrical, but new particles with different interactions, a subject which has still not been unravelled. For those purists who know more (the professors who happen to be reading this), we should add that when we say that (28.3) is a complete expression of the knowledge of electrodynamics, we are not being entirely accurate. There was a problem that was not quite solved at the end of the 19th century. When we try to calculate the field from all the charges including the charge itself that we want the field to act on, we get into trouble trying to find the distance, for example, of a charge from itself, and dividing something by that distance, which is zero. 
The problem of how to handle the part of this field which is generated by the very charge on which we want the field to act is not yet solved today. So we leave it there; we do not have a complete solution to that puzzle yet, and so we shall avoid the puzzle for as long as we can.
1
28
Electromagnetic Radiation
2
Radiation
That, then, is a summary of the world picture. Now let us use it to discuss the phenomena called radiation. To discuss these phenomena, we must select from Eq. (28.3) only that piece which varies inversely as the distance and not as the square of the distance. It turns out that when we finally do find that piece, it is so simple in its form that it is legitimate to study optics and electrodynamics in an elementary way by taking it as “the law” of the electric field produced by a moving charge far away. We shall take it temporarily as a given law which we will learn about in detail next year. Of the terms appearing in (28.3), the first one evidently goes inversely as the square of the distance, and the second is only a correction for delay, so it is easy to show that both of them vary inversely as the square of the distance. All of the effects we are interested in come from the third term, which is not very complicated, after all. What this term says is: look at the charge and note the direction of the unit vector (we can project the end of it onto the surface of a unit sphere). As the charge moves around, the unit vector wiggles, and the acceleration of that unit vector is what we are looking for. That is all. Thus \begin{equation} \label{Eq:I:28:5} \FLPE=\frac{-q}{4\pi\epsO c^2}\, \frac{d^2\FLPe_{r'}}{dt^2} \end{equation} is a statement of the laws of radiation, because that is the only important term when we get far enough away that the fields are varying inversely as the distance. (The parts that go as the square have fallen off so much that we are not interested in them.) Now we can go a little bit further in studying (28.5) to see what it means. Suppose a charge is moving in any manner whatsoever, and we are observing it from a distance. We imagine for a moment that in a sense it is “lit up” (although it is light that we are trying to explain); we imagine it as a little white dot. Then we would see this white dot running around. 
But we don’t see exactly how it is running around right now, because of the delay that we have been talking about. What counts is how it was moving earlier. The unit vector $\FLPe_{r'}$ is pointed toward the apparent position of the charge. Of course, the end of $\FLPe_{r'}$ goes on a slight curve, so that its acceleration has two components. One is the transverse piece, because the end of it goes up and down, and the other is a radial piece because it stays on a sphere. It is easy to demonstrate that the latter is much smaller and varies as the inverse square of $r$ when $r$ is very great. This is easy to see, for when we imagine that we move a given source farther and farther away, then the wigglings of $\FLPe_{r'}$ look smaller and smaller, inversely as the distance, but the radial component of acceleration is varying much more rapidly than inversely as the distance. So for practical purposes all we have to do is project the motion on a plane at unit distance. Therefore we find the following rule: Imagine that we look at the moving charge and that everything we see is delayed—like a painter trying to paint a scene on a screen at a unit distance. A real painter, of course, does not take into account the fact that light is going at a certain speed, but paints the world as he sees it. We want to see what his picture would look like. So we see a dot, representing the charge, moving about in the picture. The acceleration of that dot is proportional to the electric field. That is all—all we need. Thus Eq. (28.5) is the complete and correct formula for radiation; even relativity effects are all contained in it. However, we often want to apply it to a still simpler circumstance in which the charges are moving only a small distance at a relatively slow rate. Since they are moving slowly, they do not move an appreciable distance from where they start, so that the delay time is practically constant. Then the law is still simpler, because the delay time is fixed. 
Thus we imagine that the charge is executing a very tiny motion at an effectively constant distance. The delay at the distance $r$ is $r/c$. Then our rule becomes the following: If the charged object is moving in a very small motion and it is laterally displaced by the distance $x(t)$, then the angle that the unit vector $\FLPe_{r'}$ is displaced is $x/r$, and since $r$ is practically constant, the $x$-component of $d^2\FLPe_{r'}/dt^2$ is simply the acceleration of $x$ itself at an earlier time divided by $r$, and so finally we get the law we want, which is \begin{equation} \label{Eq:I:28:6} E_x(t)=\frac{-q}{4\pi\epsO c^2r}\,a_x \Bigl(t-\frac{r}{c}\Bigr). \end{equation} Only the component $a_x$, perpendicular to the line of sight, is important. Let us see why that is. Evidently, if the charge is moving in and out straight at us, the unit vector in that direction does not wiggle at all, and it has no acceleration. So it is only the sidewise motion which is important, only the acceleration that we see projected on the screen.
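Equation (28.6) transcribes almost directly into code, with the retardation made explicit; a sketch (the function names are mine, and `a_x` stands for any function giving the transverse acceleration at a given time):

```python
import math

EPS0 = 8.854e-12   # permittivity of free space, F/m
C = 3.0e8          # speed of light, m/s

def radiated_field(q, r, a_x, t):
    """E_x(t) = -q a_x(t - r/c) / (4 pi eps0 c^2 r), Eq. (28.6).

    The field now depends on the transverse acceleration at the earlier,
    retarded time t - r/c, and it falls off as 1/r, not 1/r^2."""
    return -q * a_x(t - r / C) / (4 * math.pi * EPS0 * C**2 * r)

# With a fixed acceleration the 1/r falloff is plain to see:
E1 = radiated_field(1e-9, 1.0, lambda t: 1.0, 0.0)
E2 = radiated_field(1e-9, 2.0, lambda t: 1.0, 0.0)
assert abs(E1 / E2 - 2.0) < 1e-12
```

For an oscillating charge $x(t) = x_0\cos\omega t$, one would pass `a_x = lambda t: -x0 * w**2 * math.cos(w * t)`, and the field at distance $r$ simply repeats that oscillation delayed by $r/c$.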
1
28
Electromagnetic Radiation
3
The dipole radiator
As our fundamental “law” of electromagnetic radiation, we are going to assume that (28.6) is true, i.e., that the electric field produced by an accelerating charge which is moving nonrelativistically at a very large distance $r$ approaches that form. The electric field varies inversely as $r$ and is proportional to the acceleration of the charge, projected onto the “plane of sight,” and this acceleration is not today’s acceleration, but the acceleration that it had at an earlier time, the amount of delay being a time, $r/c$. In the remainder of this chapter we shall discuss this law so that we can understand it better physically, because we are going to use it to understand all of the phenomena of light and radio propagation, such as reflection, refraction, interference, diffraction, and scattering. It is the central law, and is all we need. All the rest of Eq. (28.3) was written down only to set the stage, so that we could appreciate where (28.6) fits and how it comes about. We shall discuss (28.3) further next year. In the meantime, we shall accept it as true, but not just on a theoretical basis. We may devise a number of experiments which illustrate the character of the law. In order to do so, we need an accelerating charge. It should be a single charge, but if we can make a great many charges move together, all the same way, we know that the field will be the sum of the effects of each of the individual charges; we just add them together. As an example, consider two pieces of wire connected to a generator, as shown in Fig. 28–1. The idea is that the generator makes a potential difference, or a field, which pulls electrons away from piece $A$ and pushes them into $B$ at one moment, and then, an infinitesimal time later, it reverses the effect and pulls the electrons out of $B$ and pumps them back into $A$! 
So in these two wires charges, let us say, are accelerating upward in wire $A$ and upward in wire $B$ for one moment, and a moment later they are accelerating downward in wire $A$ and downward in wire $B$. The fact that we need two wires and a generator is merely that this is a way of doing it. The net result is that we merely have a charge accelerating up and down as though $A$ and $B$ were one single wire. A wire that is very short compared with the distance light travels in one oscillation period is called an electric dipole oscillator. Thus we have the circumstance that we need to apply our law, which tells us that this charge makes an electric field, and so we need an instrument to detect an electric field, and the instrument we use is the same thing—a pair of wires like $A$ and $B$! If an electric field is applied to such a device, it will produce a force which will pull the electrons up on both wires or down on both wires. This signal is detected by means of a rectifier mounted between $A$ and $B$, and a tiny, fine wire carries the information into an amplifier, where it is amplified so we can hear the audiofrequency tone with which the radiofrequency is modulated. When this probe feels an electric field, there will be a loud noise coming out of the loudspeaker, and when there is no electric field driving it, there will be no noise. Because the room in which the waves we are measuring has other objects in it, our electric field will shake electrons in these other objects; the electric field makes these other charges go up and down, and in going up and down, these also produce an effect on our probe. Thus for a successful experiment we must hold things fairly close together, so that the influences from the walls and from ourselves—the reflected waves—are relatively small. So the phenomena will not turn out to appear to be precisely and perfectly in accord with Eq. (28.6), but will be close enough that we shall be able to appreciate the law. 
Now we turn the generator on and hear the audio signal. We find a strong field when the detector $D$ is parallel to the generator $G$ at point $1$ (Fig. 28–2). We find the same amount of field also at any other azimuth angle about the axis of $G$, because it has no directional effects. On the other hand, when the detector is at $3$ the field is zero. That is all right, because our formula said that the field should be the acceleration of the charge projected perpendicular to the line of sight. Therefore when we look down on $G$, the charge is moving toward and away from $D$, and there is no effect. So that checks the first rule, that there is no effect when the charge is moving directly toward us. Secondly, the formula says that the electric field should be perpendicular to $r$ and in the plane of $G$ and $r$; so if we put $D$ at $1$ but rotate it $90^\circ$, we should get no signal. And this is just what we find, the electric field is indeed vertical, and not horizontal. When we move $D$ to some intermediate angle, we see that the strongest signal occurs when it is oriented as shown, because although $G$ is vertical, it does not produce a field that is simply parallel to itself—it is the projection of the acceleration perpendicular to the line of sight that counts. The signal is weaker at $2$ than it is at $1$, because of the projection effect.
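The directional behavior just observed is the projection of the (vertical) acceleration perpendicular to the line of sight: the field amplitude goes as $\sin\theta$, where $\theta$ is measured from the dipole axis. A minimal sketch:

```python
import math

def dipole_amplitude(theta):
    """Relative field amplitude of a dipole oscillator seen at angle theta
    from its axis: the projection of the acceleration perpendicular to
    the line of sight, proportional to sin(theta)."""
    return math.sin(theta)

# No signal along the axis (looking down on G, point 3 of Fig. 28-2):
assert dipole_amplitude(0.0) == 0.0
# Maximum signal broadside (point 1), weaker at intermediate angles (point 2):
assert abs(dipole_amplitude(math.pi/2) - 1.0) < 1e-12
assert 0.0 < dipole_amplitude(math.pi/4) < 1.0
```

The same projection rule explains why rotating the detector by $90^\circ$ kills the signal: the field lies in the plane of $G$ and $r$, perpendicular to $r$.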
1
28
Electromagnetic Radiation
4
Interference
Next, we may test what happens when we have two sources side by side several wavelengths apart (Fig. 28–3). The law is that the two sources should add their effects at point $1$ when both of the sources are connected to the same generator and are both moving up and down the same way, so that the total electric field is the sum of the two and is twice as strong as it was before. Now comes an interesting possibility. Suppose we make the charges in $S_1$ and $S_2$ both accelerate up and down, but delay the timing of $S_2$ so that they are $180^\circ$ out of phase. Then the field produced by $S_1$ will be in one direction and the field produced by $S_2$ will be in the opposite direction at any instant, and therefore we should get no effect at point $1$. The phase of oscillation is neatly adjustable by means of a pipe which is carrying the signal to $S_2$. By changing the length of this pipe we change the time it takes the signal to arrive at $S_2$ and thus we change the phase of that oscillation. By adjusting this length, we can indeed find a place where there is no more signal left, in spite of the fact that both $S_1$ and $S_2$ are moving! The fact that they are both moving can be checked, because if we cut one out, we can see the motion of the other. So the two of them together can produce zero if everything is adjusted correctly. Now, it is very interesting to show that the addition of the two fields is in fact a vector addition. We have just checked it for up and down motion, but let us check two nonparallel directions. First, we restore $S_1$ and $S_2$ to the same phase; that is, they are again moving together. But now we turn $S_1$ through $90^\circ$, as shown in Fig. 28–4. Now we should have at point $1$ the sum of two effects, one of which is vertical and the other horizontal. 
The electric field is the vector sum of these two in-phase signals—they are both strong at the same time and go through zero together; the total field should be a signal $R$ at $45^\circ$. If we turn $D$ to get the maximum noise, it should be at about $45^\circ$, and not vertical. And if we turn it at right angles to that direction, we should get zero, which is easy to measure. Indeed, we observe just such behavior! Now, how about the retardation? How can we demonstrate that the signal is retarded? We could, with a great deal of equipment, measure the time at which it arrives, but there is another, very simple way. Referring again to Fig. 28–3, suppose that $S_1$ and $S_2$ are in phase. They are both shaking together, and they produce equal electric fields at point $1$. But suppose we go to a certain place $2$ which is closer to $S_2$ and farther from $S_1$. Then, in accordance with the principle that the acceleration should be retarded by an amount equal to $r/c$, if the retardations are not equal, the signals are no longer in phase. Thus it should be possible to find a position at which the distances of $D$ from $S_1$ and $S_2$ differ by some amount $\Delta$, in such a manner that there is no net signal. That is, the distance $\Delta$ is to be the distance light goes in one-half an oscillation of the generator. We may go still further, and find a point where the difference is greater by a whole cycle; that is to say, the signal from the first antenna reaches point $3$ with a delay in time that is greater than that of the second antenna by just the length of time it takes for the electric current to oscillate once, and therefore the two electric fields produced at $3$ are in phase again. At point $3$ the signal is strong again. This completes our discussion of the experimental verification of some of the important features of Eq. (28.6). 
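The cancellation argument can be checked numerically: two sources driven in phase produce a null wherever the distances to the detector differ by half a wavelength, so the retarded signals arrive $180^\circ$ out of phase. This is only an illustrative sketch; the 1-GHz generator frequency and the 10-meter distance are assumed values, not ones given in the text.

```python
import numpy as np

# Two in-phase sources; the detector sits half a wavelength farther from
# S2 than from S1, so the retarded fields cancel at every instant.
c = 3.0e8             # speed of light, m/s
f = 1.0e9             # assumed generator frequency (illustrative)
wavelength = c / f
omega = 2 * np.pi * f
k = omega / c         # phase change per meter of path

r1 = 10.0                         # distance from S1 to detector (assumed)
r2 = r1 + wavelength / 2          # detector half a wavelength farther from S2

t = np.linspace(0, 3 / f, 1000)
field = np.cos(omega * t - k * r1) + np.cos(omega * t - k * r2)
print(np.max(np.abs(field)))      # essentially zero at all times
```

Moving the detector another half wavelength restores the in-phase condition, which is the strong signal found at point $3$.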
Of course we have not really checked the $1/r$ variation of the electric field strength, or the fact that there is also a magnetic field that goes along with the electric field. To do so would require rather sophisticated techniques and would hardly add to our understanding at this point. In any case, we have checked those features that are of the greatest importance for our later applications, and we shall come back to study some of the other properties of electromagnetic waves next year.
29 Interference

29–1 Electromagnetic waves
In this chapter we shall discuss the subject of the preceding chapter more mathematically. We have qualitatively demonstrated that there are maxima and minima in the radiation field from two sources, and our problem now is to describe the field in mathematical detail, not just qualitatively. We have already physically analyzed the meaning of formula (28.6) quite satisfactorily, but there are a few points to be made about it mathematically. In the first place, if a charge is accelerating up and down along a line, in a motion of very small amplitude, the field at some angle $\theta$ from the axis of the motion is in a direction at right angles to the line of sight and in the plane containing both the acceleration and the line of sight (Fig. 29–1). If the distance is called $r$, then at time $t$ the electric field has the magnitude \begin{equation} \label{Eq:I:29:1} E(t)=\frac{-qa(t-r/c)\sin\theta}{4\pi\epsO c^2r}, \end{equation} where $a(t - r/c)$ is the acceleration at the time $(t - r/c)$, called the retarded acceleration. Now it would be interesting to draw a picture of the field under different conditions. The thing that is interesting, of course, is the factor $a(t - r/c)$, and to understand it we can take the simplest case, $\theta = 90^\circ$, and plot the field graphically. What we had been thinking of before is that we stand in one position and ask how the field there changes with time. But instead of that, we are now going to see what the field looks like at different positions in space at a given instant. So what we want is a “snapshot” picture which tells us what the field is in different places. Of course it depends upon the acceleration of the charge. Suppose that the charge at first had some particular motion: it was initially standing still, and it suddenly accelerated in some manner, as shown in Fig. 29–2, and then stopped. Then, a little bit later, we measure the field at a different place. 
Then we may assert that the field will appear as shown in Fig. 29–3. At each point the field is determined by the acceleration of the charge at an earlier time, the amount earlier being the delay $r/c$. The field at farther and farther points is determined by the acceleration at earlier and earlier times. So the curve in Fig. 29–3 is really, in a sense, a “reversed” plot of the acceleration as a function of time; the distance is related to time by a constant scale factor $c$, which we often take as unity. This is easily seen by considering the mathematical behavior of $a(t - r/c)$. Evidently, if we add a little time $\Delta t$, we get the same value for $a(t - r/c)$ as we would have if we had subtracted a little distance: $\Delta r = c\,\Delta t$. Stated another way: if we add a little time $\Delta t$, we can restore $a(t - r/c)$ to its former value by adding a little distance $\Delta r = c\,\Delta t$. That is, as time goes on the field moves as a wave outward from the source. That is the reason why we sometimes say light is propagated as waves. It is equivalent to saying that the field is delayed, or to saying that the electric field is moving outward as time goes on. An interesting special case is that where the charge $q$ is moving up and down in an oscillatory manner. The case which we studied experimentally in the last chapter was one in which the displacement $x$ at any time $t$ was equal to a certain constant $x_0$, the magnitude of the oscillation, times $\cos\omega t$. Then the acceleration is \begin{equation} \label{Eq:I:29:2} a=-\omega^2x_0\cos\omega t=a_0\cos\omega t, \end{equation} where $a_0$ is the maximum acceleration, $-\omega^2x_0$. Putting this formula into (29.1), we find \begin{equation} \label{Eq:I:29:3} E=-q\sin\theta\, \frac{a_0\cos\omega(t-r/c)}{4\pi\epsO rc^2}. \end{equation} Now, ignoring the angle $\theta$ and the constant factors, let us see what that looks like as a function of position or as a function of time.
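The “snapshot” idea can be sketched numerically: at a fixed instant $t$, the field at distance $r$ reproduces the acceleration at the retarded time $t - r/c$, so the snapshot is the acceleration history run backwards in distance. Constants and the $1/r$ factor are dropped, and the pulse shape below is an arbitrary illustrative choice, not one from the text.

```python
import numpy as np

# Snapshot of the radiated field at one instant.  The charge wiggles
# once (between t = 0 and t = 1) and then stops; we take c = 1 as in
# the text and ignore all constant factors and the 1/r falloff.
c = 1.0

def a(t):
    """Assumed acceleration pulse: one brief sinusoidal wiggle."""
    return np.where((t > 0) & (t < 1), np.sin(2 * np.pi * t), 0.0)

t_now = 5.0
r = np.linspace(0.01, 8, 800)
E = a(t_now - r / c)        # field vs. distance at the instant t_now

# The field at r = 4.75 shows what the charge was doing at t = 0.25,
# and the field is identically zero beyond r = c*t_now, where news of
# the wiggle has not yet arrived.
```

Running the snapshot at a slightly later $t$ shifts the whole pattern outward by $c\,\Delta t$, which is the wave propagation described above.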
29–2 Energy of radiation
First of all, at any particular moment or in any particular place, the strength of the field varies inversely as the distance $r$, as we mentioned previously. Now we must point out that the energy content of a wave, or the energy effects that such an electric field can have, are proportional to the square of the field, because if, for instance, we have some kind of a charge or an oscillator in the electric field, then if we let the field act on the oscillator, it makes it move. If this is a linear oscillator, the acceleration, velocity, and displacement produced by the electric field acting on the charge are all proportional to the field. So the kinetic energy which is developed in the charge is proportional to the square of the field. So we shall take it that the energy that a field can deliver to a system is proportional somehow to the square of the field. This means that the energy that the source can deliver decreases as we get farther away; in fact, it varies inversely as the square of the distance. But that has a very simple interpretation: if we wanted to pick up all the energy we could from the wave in a certain cone at a distance $r_1$ (Fig. 29–4), and we do the same at another distance $r_2$, we find that the amount of energy per unit area at any one place goes inversely as the square of $r$, but the area of the surface intercepted by the cone goes directly as the square of $r$. So the energy that we can take out of the wave within a given conical angle is the same, no matter how far away we are! In particular, the total energy that we could take out of the whole wave by putting absorbing oscillators all around is a certain fixed amount. So the fact that the amplitude of $E$ varies as $1/r$ is the same as saying that there is an energy flux which is never lost, an energy which goes on and on, spreading over a greater and greater effective area. 
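The cone argument is easy to check with numbers: intensity falls as $1/r^2$ while the intercepted area grows as $r^2$, so the power through a cone of fixed opening angle does not depend on $r$. This is a minimal sketch with all constants set to one; the cone half-angle is an arbitrary assumed value.

```python
import numpy as np

# Energy flux through a cone of fixed opening angle, at two distances.
# Intensity ~ E^2 ~ 1/r^2; intercepted area (a spherical cap) ~ r^2.
theta_cone = 0.1                  # assumed cone half-angle, radians

def power_through_cone(r):
    intensity = 1.0 / r**2                                 # per unit area
    area = 2 * np.pi * (1 - np.cos(theta_cone)) * r**2     # cap area
    return intensity * area

p1 = power_through_cone(1.0)
p2 = power_through_cone(1000.0)
# p1 and p2 are equal: the energy flux through the cone is never lost.
```

The same cancellation is why the total radiated power, integrated over all directions, is a fixed amount independent of the radius at which we collect it.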
Thus we see that after a charge has oscillated, it has lost some energy which it can never recover; the energy keeps going farther and farther away without diminution. So if we are far enough away that our basic approximation is good enough, the charge cannot recover the energy which has been, as we say, radiated away. Of course the energy still exists somewhere, and is available to be picked up by other systems. We shall study this energy “loss” further in Chapter 32. Let us now consider more carefully how the wave (29.3) varies as a function of time at a given place, and as a function of position at a given time. Again we ignore the $1/r$ variation and the constants.
29–3 Sinusoidal waves
First let us fix the position $r$, and watch the field as a function of time. It is oscillatory at the angular frequency $\omega$. The angular frequency $\omega$ can be defined as the rate of change of phase with time (radians per second). We have already studied such a thing, so it should be quite familiar to us by now. The period is the time needed for one oscillation, one complete cycle, and we have worked that out too; it is $2\pi/\omega$, because $\omega$ times the period is one cycle of the cosine. Now we introduce a new quantity which is used a great deal in physics. This has to do with the opposite situation, in which we fix $t$ and look at the wave as a function of distance $r$. Of course we notice that, as a function of $r$, the wave (29.3) is also oscillatory. That is, aside from $1/r$, which we are ignoring, we see that $E$ oscillates as we change the position. So, in analogy with $\omega$, we can define a quantity called the wave number, symbolized as $k$. This is defined as the rate of change of phase with distance (radians per meter). That is, as we move in space at a fixed time, the phase changes. There is another quantity that corresponds to the period, and we might call it the period in space, but it is usually called the wavelength, symbolized $\lambda$. The wavelength is the distance occupied by one complete cycle. It is easy to see, then, that the wavelength is $2\pi/k$, because $k$ times the wavelength would be the number of radians that the whole thing changes, being the product of the rate of change of the radians per meter, times the number of meters, and we must make a $2\pi$ change for one cycle. So $k\lambda = 2\pi$ is exactly analogous to $\omega t_0 = 2\pi$. Now in our particular wave there is a definite relationship between the frequency and the wavelength, but the above definitions of $k$ and $\omega$ are actually quite general. 
That is, the wavelength and the frequency may not be related in the same way in other physical circumstances. However, in our circumstance the rate of change of phase with distance is easily determined, because if we call $\phi = \omega(t - r/c)$ the phase, and differentiate (partially) with respect to distance $r$, the rate of change, $\ddpl{\phi}{r}$, is \begin{equation} \label{Eq:I:29:4} \biggl|\ddp{\phi}{r}\biggr|= k = \frac{\omega}{c}. \end{equation} There are many ways to represent the same thing, such as \begin{align} \label{Eq:I:29:5} \lambda &= ct_0 &\\[1.5ex] \label{Eq:I:29:6} \omega &= ck & \end{align} \begin{align} \label{Eq:I:29:7} \lambda\nu &= c &\\[1.5ex] \label{Eq:I:29:8} \omega\lambda &= 2\pi c & \end{align} Why is the wavelength equal to $c$ times the period? That’s very easy, of course, because if we sit still and wait for one period to elapse, the waves, travelling at the speed $c$, will move a distance $ct_0$, and will of course have moved over just one wavelength. In a physical situation other than that of light, $k$ is not necessarily related to $\omega$ in this simple way. If we call the distance along an axis $x$, then the formula for a cosine wave moving in a direction $x$ with a wave number $k$ and an angular frequency $\omega$ will be written in general as $\cos\,(\omega t - kx)$. Now that we have introduced the idea of wavelength, we may say something more about the circumstances in which (29.1) is a legitimate formula. We recall that the field is made up of several pieces, one of which varies inversely as $r$, another part which varies inversely as $r^2$, and others which vary even faster. It would be worthwhile to know in what circumstances the $1/r$ part of the field is the most important part, and the other parts are relatively small. Naturally, the answer is “if we go ‘far enough’ away,” because terms which vary inversely as the square ultimately become negligible compared with the $1/r$ term. How far is “far enough”? 
The answer is, qualitatively, that the other terms are of order $\lambda/r$ smaller than the $1/r$ term. Thus, so long as we are beyond a few wavelengths, (29.1) is an excellent approximation to the field. Sometimes the region beyond a few wavelengths is called the “wave zone.”
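The relations (29.4) through (29.8) among $\omega$, $k$, $\lambda$, $\nu$, and $t_0$ for light are simple to verify numerically. The 5-GHz frequency below is just an illustrative microwave value, not one used in the text.

```python
import numpy as np

# Consistency of the wave relations for light: omega = ck, lambda*nu = c,
# k*lambda = 2*pi, lambda = c*t0.
c = 2.998e8           # speed of light, m/s
nu = 5.0e9            # assumed frequency, Hz (illustrative)

omega = 2 * np.pi * nu    # angular frequency, radians per second
k = omega / c             # wave number, radians per meter (Eq. 29.4)
lam = c / nu              # wavelength, from lambda*nu = c
t0 = 1 / nu               # period

# k*lam == 2*pi, omega == c*k, and lam == c*t0 all hold.
```

For this frequency the wavelength is about $6$ cm, so the “wave zone” where (29.1) applies begins a few tens of centimeters from the source.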
29–4 Two dipole radiators
Next let us discuss the mathematics involved in combining the effects of two oscillators to find the net field at a given point. This is very easy in the few cases that we considered in the previous chapter. We shall first describe the effects qualitatively, and then more quantitatively. Let us take the simple case, where the oscillators are situated with their centers in the same horizontal plane as the detector, and the line of vibration is vertical. Figure 29–5(a) represents the top view of two such oscillators, and in this particular example they are half a wavelength apart in a N–S direction, and are oscillating together in the same phase, which we call zero phase. Now we would like to know the intensity of the radiation in various directions. By the intensity we mean the amount of energy that the field carries past us per second, which is proportional to the square of the field, averaged in time. So the thing to look at, when we want to know how bright the light is, is the square of the electric field, not the electric field itself. (The electric field tells the strength of the force felt by a stationary charge, but the amount of energy that is going past, in watts per square meter, is proportional to the square of the electric field. We shall derive the constant of proportionality in Chapter 31.) If we look at the array from the W side, both oscillators contribute equally and in phase, so the electric field is twice as strong as it would be from a single oscillator. Therefore the intensity is four times as strong as it would be if there were only one oscillator. (The numbers in Fig. 29–5 represent how strong the intensity would be in this case, compared with what it would be if there were only a single oscillator of unit strength.) 
Now, in either the N or S direction along the line of the oscillators, since they are half a wavelength apart, the effect of one oscillator turns out to be out of phase by exactly half an oscillation from the other, and therefore the fields add to zero. At a certain particular intermediate angle (in fact, at $30^\circ$) the intensity is $2$, and it falls off, $4$, $2$, $0$, and so forth. We have to learn how to find these numbers at other angles. It is a question of adding two oscillations with different phases. Let us quickly look at some other cases of interest. Suppose the oscillators are again one-half a wavelength apart, but the phase $\alpha$ of one is set half a period behind the other in its oscillation (Fig. 29–5b). In the W direction the intensity is now zero, because one oscillator is “pushing” when the other one is “pulling.” But in the N direction the signal from the near one comes at a certain time, and that of the other comes half a period later. But the latter was originally half a period behind in timing, and therefore it is now exactly in time with the first one, and so the intensity in this direction is $4$ units. The intensity in the direction at $30^\circ$ is still $2$, as we can prove later. Now we come to an interesting case which shows up a possibly useful feature. Let us remark that one of the reasons that phase relations of oscillators are interesting is for beaming radio transmitters. For instance, if we build an antenna system and want to send a radio signal, say, to Hawaii, we set the antennas up as in Fig. 29–5(a) and we broadcast with our two antennas in phase, because Hawaii is to the west of us. Then we decide that tomorrow we are going to broadcast toward Alberta, Canada. Since that is north, not west, all we have to do is to reverse the phase of one of our antennas, and we can broadcast to the north. So we can build antenna systems with various arrangements. 
Ours is one of the simplest possible ones; we can make them much more complicated, and by changing the phases in the various antennas we can send the beams in various directions and send most of the power in the direction in which we wish to transmit, without ever moving the antenna! In both of the preceding cases, however, while we are broadcasting toward Alberta we are wasting a lot of power on Easter Island, and it would be interesting to ask whether it is possible to send it in only one direction. At first sight we might think that with a pair of antennas of this nature the result is always going to be symmetrical. So let us consider a case that comes out unsymmetrical, to show the possible variety. If the antennas are separated by one-quarter wavelength, and if the N one is one-fourth period behind the S one in time, then what happens (Fig. 29–6)? In the W direction we get $2$, as we will see later. In the S direction we get zero, because the signal from S comes at a certain time; that from N comes $90^\circ$ later in time, but it is already $90^\circ$ behind in its built-in phase, therefore it arrives, altogether, $180^\circ$ out of phase, and there is no effect. On the other hand, in the N direction, the N signal arrives earlier than the S signal by $90^\circ$ in time, because it is a quarter wavelength closer. But its phase is set so that it is oscillating $90^\circ$ behind in time, which just compensates the delay difference, and therefore the two signals appear together in phase, making the field strength twice as large, and the energy four times as great. Thus, by using some cleverness in spacing and phasing our antennas, we can send the power all in one direction. But still it is distributed over a great range of angles. Can we arrange it so that it is focused still more sharply in a particular direction? 
Let us consider the case of Hawaii again, where we are sending the beam east and west but it is spread over quite an angle, because even at $30^\circ$ we are still getting half the intensity—we are wasting the power. Can we do better than that? Let us take a situation in which the separation is ten wavelengths (Fig. 29–7), which is more nearly comparable to the situation in which we experimented in the previous chapter, with separations of several wavelengths rather than a small fraction of a wavelength. Here the picture is quite different. If the oscillators are ten wavelengths apart (we take the in-phase case to make it easy), we see that in the E–W direction, they are in phase, and we get a strong intensity, four times what we would get if one of them were there alone. On the other hand, at a very small angle away, the arrival times differ by half a period, so the two signals are $180^\circ$ out of phase, and the intensity is zero. To be precise, if we draw a line from each oscillator to a distant point and the difference $\Delta$ in the two distances is $\lambda/2$, half an oscillation, then they will be out of phase. So this first null occurs when that happens. (The figure is not drawn to scale; it is only a rough sketch.) This means that we do indeed have a very sharp beam in the direction we want, because if we just move over a little bit we lose all our intensity. Unfortunately for practical purposes, if we were thinking of making a radio broadcasting array and we doubled the distance $\Delta$, then we would be a whole cycle out of phase, which is the same as being exactly in phase again! Thus we get many successive maxima and minima, just as we found with the $2\tfrac{1}{2}\lambda$ spacing in Chapter 28. Now how can we arrange to get rid of all these extra maxima, or “lobes,” as they are called? We could get rid of the unwanted lobes in a rather interesting way. Suppose that we were to place another set of antennas between the two that we already have.
That is, the outside ones are still $10\lambda$ apart, but between them, say every $2\lambda$, we have put another antenna, and we drive them all in phase. There are now six antennas, and if we looked at the intensity in the E–W direction, it would, of course, be much higher with six antennas than with one. The field would be six times and the intensity thirty-six times as great (the square of the field). We get $36$ units of intensity in that direction. Now if we look at neighboring points, we find a zero as before, roughly, but if we go farther, to where we used to get a big “bump,” we get a much smaller “bump” now. Let us try to see why. The reason is that although we might expect to get a big bump when the distance $\Delta$ is exactly equal to the wavelength, it is true that dipoles $1$ and $6$ are then in phase and are cooperating in trying to get some strength in that direction. But numbers $3$ and $4$ are roughly $\tfrac{1}{2}$ a wavelength out of phase with $1$ and $6$, and although $1$ and $6$ push together, $3$ and $4$ push together too, but in opposite phase. Therefore there is very little intensity in this direction—but there is something; it does not balance exactly. This kind of thing keeps on happening; we get very little bumps, and we have the strong beam in the direction where we want it. But in this particular example, something else will happen: namely, since the distance between successive dipoles is $2\lambda$, it is possible to find an angle where the difference $\delta$ in path length from successive dipoles is exactly one wavelength, so that the effects from all of them are in phase again. Each one is delayed relative to the next one by $360^\circ$, so they all come back in phase, and we have another strong beam in that direction! It is easy to avoid this in practice because it is possible to put the dipoles closer than one wavelength apart. If we put in more antennas, closer than one wavelength apart, then this cannot happen.
But the fact that this can happen at certain angles, if the spacing is bigger than one wavelength, is a very interesting and useful phenomenon in other applications—not radio broadcasting, but in diffraction gratings.
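The six-antenna example can be verified by summing the six fields as phasors. The sketch below takes the text's numbers ($2\lambda$ spacing, all driven in phase) and shows the broadside intensity of $36$ units and the full-strength extra beam where the path difference between neighbors is one wavelength, i.e. $d\sin\theta = \lambda$, or $\theta = 30^\circ$ for $d = 2\lambda$.

```python
import numpy as np

# Array factor for n equally spaced, in-phase dipoles, summed as phasors.
# Intensity is in units of what a single dipole alone would give.
lam = 1.0
d = 2.0 * lam                 # spacing between successive dipoles
n = 6

def intensity(theta):
    # Phase step between neighbors, from the path difference d*sin(theta)
    phi = 2 * np.pi * d * np.sin(theta) / lam
    phasors = np.exp(1j * phi * np.arange(n))
    return abs(np.sum(phasors))**2

broadside = intensity(0.0)                   # all six in phase: 36 units
grating_lobe = intensity(np.radians(30.0))   # d*sin(30) = lam: 36 again
```

Halving the spacing to less than a wavelength makes $d\sin\theta = \lambda$ impossible for any real angle, which is exactly the practical cure described above.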
29–5 The mathematics of interference
Now we have finished our analysis of the phenomena of dipole radiators qualitatively, and we must learn how to analyze them quantitatively. To find the effect of two sources at some particular angle in the most general case, where the two oscillators have some intrinsic relative phase $\alpha$ from one another and the strengths $A_1$ and $A_2$ are not equal, we find that we have to add two cosines having the same frequency, but with different phases. It is very easy to find this phase difference; it is made up of a delay due to the difference in distance, and the intrinsic, built-in phase of the oscillation. Mathematically, we have to find the sum $R$ of two waves: $R = A_1 \cos\,(\omega t + \phi_1) + A_2 \cos\,(\omega t + \phi_2)$. How do we do it? It is really very easy, and we presume that we already know how to do it. However, we shall outline the procedure in some detail. First, we can, if we are clever with mathematics and know enough about cosines and sines, simply work it out. The easiest such case is the one where $A_1$ and $A_2$ are equal, let us say they are both equal to $A$. In those circumstances, for example (we could call this the trigonometric method of solving the problem), we have \begin{equation} \label{Eq:I:29:9} R = A[\cos\,(\omega t+\phi_1)+\cos\,(\omega t + \phi_2)]. \end{equation} Once, in our trigonometry class, we may have learned the rule that \begin{equation} \label{Eq:I:29:10} \cos A+\cos B=2\cos\tfrac{1}{2}(A+B)\cos\tfrac{1}{2}(A-B). \end{equation} If we know that, then we can immediately write $R$ as \begin{equation} \label{Eq:I:29:11} R=2A\cos\tfrac{1}{2}(\phi_1-\phi_2)\cos\,(\omega t+\tfrac{1}{2}\phi_1+ \tfrac{1}{2}\phi_2). \end{equation} So we find that we have an oscillatory wave with a new phase and a new amplitude. 
In general, the result will be an oscillatory wave with a new amplitude $A_R$, which we may call the resultant amplitude, oscillating at the same frequency but with a new phase $\phi_R$, called the resultant phase. In view of this, our particular case has the following result: that the resultant amplitude is \begin{equation} \label{Eq:I:29:12} A_R=2A\cos\tfrac{1}{2}(\phi_1-\phi_2), \end{equation} and the resultant phase is the average of the two phases, and we have completely solved our problem. Now suppose that we cannot remember that the sum of two cosines is twice the cosine of half the sum times the cosine of half the difference. Then we may use another method of analysis which is more geometrical. Any cosine function of $\omega t$ can be considered as the horizontal projection of a rotating vector. Suppose there were a vector $\FLPA_1$ of length $A_1$ rotating with time, so that its angle with the horizontal axis is $\omega t + \phi_1$. (We shall leave out the $\omega t$ in a minute, and see that it makes no difference.) Suppose that we take a snapshot at the time $t = 0$, although, in fact, the picture is rotating with angular velocity $\omega$ (Fig. 29–9). The projection of $\FLPA_1$ along the horizontal axis is precisely $A_1\cos\,(\omega t + \phi_1)$. Now at $t= 0$ the second wave could be represented by another vector, $\FLPA_2$, of length $A_2$ and at an angle $\phi_2$, and also rotating. They are both rotating with the same angular velocity $\omega$, and therefore the relative positions of the two are fixed. The system goes around like a rigid body. The horizontal projection of $\FLPA_2$ is $A_2\cos\,(\omega t + \phi_2)$. But we know from the theory of vectors that if we add the two vectors in the ordinary way, by the parallelogram rule, and draw the resultant vector $\FLPA_R$, the $x$-component of the resultant is the sum of the $x$-components of the other two vectors. That solves our problem.
It is easy to check that this gives the correct result for the special case we treated above, where $A_1 = A_2 = A$. In this case, we see from Fig. 29–9 that $\FLPA_R$ lies midway between $\FLPA_1$ and $\FLPA_2$ and makes an angle $\tfrac{1}{2}(\phi_2 - \phi_1)$ with each. Therefore we see that $A_R = 2A\cos\tfrac{1}{2}(\phi_2 - \phi_1)$, as before. Also, as we see from the triangle, the phase of $\FLPA_R$, as it goes around, is the average angle of $\FLPA_1$ and $\FLPA_2$ when the two amplitudes are equal. Clearly, we can also solve for the case where the amplitudes are not equal, just as easily. We can call that the geometrical way of solving the problem. There is still another way of solving the problem, and that is the analytical way. That is, instead of having actually to draw a picture like Fig. 29–9, we can write something down which says the same thing as the picture: instead of drawing the vectors, we write a complex number to represent each of the vectors. The real parts of the complex numbers are the actual physical quantities. So in our particular case the waves could be written in this way: $A_1e^{i(\omega t + \phi_1)}$ [the real part of this is $A_1\cos\,(\omega t + \phi_1)$] and $A_2e^{i(\omega t + \phi_2)}$. Now we can add the two: \begin{equation} \label{Eq:I:29:13} R=A_1e^{i(\omega t + \phi_1)}+A_2e^{i(\omega t + \phi_2)}= (A_1e^{i\phi_1}+A_2e^{i\phi_2})e^{i\omega t} \end{equation} or \begin{equation} \label{Eq:I:29:14} \hat{R}=A_1e^{i\phi_1}+A_2e^{i\phi_2}=A_Re^{i\phi_R}. \end{equation} This solves the problem that we wanted to solve, because it represents the result as a complex number of magnitude $A_R$ and phase $\phi_R$. To see how this method works, let us find the amplitude $A_R$ which is the “length” of $\hat{R}$.
To get the “length” of a complex quantity, we always multiply the quantity by its complex conjugate, which gives the length squared. The complex conjugate is the same expression, but with the sign of the $i$’s reversed. Thus we have \begin{equation} \label{Eq:I:29:15} A_R^2=(A_1e^{i\phi_1}+A_2e^{i\phi_2})(A_1e^{-i\phi_1}+A_2e^{-i\phi_2}). \end{equation} In multiplying this out, we get $A_1^2 + A_2^2$ (here the $e$’s cancel), and for the cross terms we have \begin{equation*} A_1A_2(e^{i(\phi_1-\phi_2)}+e^{i(\phi_2-\phi_1)}). \end{equation*} Now \begin{equation*} e^{i\theta}+e^{-i\theta}= \cos\theta+i\sin\theta+\cos\theta-i\sin\theta. \end{equation*} That is to say, $e^{i\theta} + e^{-i\theta} = 2\cos\theta$. Our final result is therefore \begin{equation} \label{Eq:I:29:16} A_R^2=A_1^2+A_2^2+2A_1A_2\cos\,(\phi_2-\phi_1). \end{equation} As we see, this agrees with the length of $\FLPA_R$ in Fig. 29–9, using the rules of trigonometry. Thus the sum of the two effects has the intensity $A_1^2$ we would get with one of them alone, plus the intensity $A_2^2$ we would get with the other one alone, plus a correction. This correction we call the interference effect. It is really only the difference between what we get simply by adding the intensities, and what actually happens. We call it interference whether it is positive or negative. (Interference in ordinary language usually suggests opposition or hindrance, but in physics we often do not use language the way it was originally designed!) If the interference term is positive, we call that case constructive interference, horrible though it may sound to anybody other than a physicist! The opposite case is called destructive interference. Now let us see how to apply our general formula (29.16) for the case of two oscillators to the special situations which we have discussed qualitatively. 
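Equation (29.16) is easy to confirm numerically by comparing the squared magnitude of the complex sum against the two intensities plus the interference term. The amplitudes and phases below are arbitrary test values.

```python
import numpy as np

# Check Eq. (29.16): |A1 e^{i phi1} + A2 e^{i phi2}|^2
#                    = A1^2 + A2^2 + 2 A1 A2 cos(phi2 - phi1).
A1, A2 = 1.3, 0.7        # arbitrary unequal amplitudes
phi1, phi2 = 0.4, 2.1    # arbitrary phases, radians

R_hat = A1 * np.exp(1j * phi1) + A2 * np.exp(1j * phi2)
lhs = abs(R_hat)**2                                   # |R|^2 = R * conj(R)
rhs = A1**2 + A2**2 + 2 * A1 * A2 * np.cos(phi2 - phi1)
# lhs and rhs agree; the last term is the interference effect.
```

Setting $\phi_2 - \phi_1 = 0$ gives the constructive case $(A_1 + A_2)^2$, and $\phi_2 - \phi_1 = \pi$ gives the destructive case $(A_1 - A_2)^2$.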
To apply this general formula, it is only necessary to find what phase difference, $\phi_2 - \phi_1$, exists between the signals arriving at a given point. (It depends only on the phase difference, of course, and not on the phase itself.) So let us consider the case where the two oscillators, of equal amplitude, are separated by some distance $d$ and have an intrinsic relative phase $\alpha$. (When one is at phase zero, the phase of the other is $\alpha$.) Then we ask what the intensity will be in some azimuth direction $\theta$ from the E–W line. [Note that this is not the same $\theta$ as appears in (29.1). We are torn between using an unconventional symbol like $\cancel{\text{U}}\!\!,$ or the conventional symbol $\theta$ (Fig. 29–10).] The phase relationship is found by noting that the difference in distance from $P$ to the two oscillators is $d\sin\theta$, so that the phase difference contribution from this is the number of wavelengths in $d\sin\theta$, multiplied by $2\pi$. (Those who are more sophisticated might want to multiply the wave number $k$, which is the rate of change of phase with distance, by $d\sin\theta$; it is exactly the same.) The phase difference due to the distance difference is thus $2\pi d\sin\theta/\lambda$, but, due to the timing of the oscillators, there is an additional phase $\alpha$. So the phase difference at arrival would be \begin{equation} \label{Eq:I:29:17} \phi_2-\phi_1=\alpha+2\pi d\sin\theta/\lambda. \end{equation} This takes care of all the cases. Thus all we have to do is substitute this expression into (29.16) for the case $A_1 = A_2$, and we can calculate all the various results for two antennas of equal intensity. Now let us see what happens in our various cases. The reason we know, for example, that the intensity is $2$ at $30^\circ$ in Fig. 29–5 is the following: the two oscillators are $\tfrac{1}{2}\lambda$ apart, so at $30^\circ$, $d\sin\theta=\lambda/4$. 
Thus $\phi_2 - \phi_1 = 2\pi(\lambda/4)/\lambda = \pi/2$, and so the interference term is zero. (We are adding two vectors at $90^\circ$.) The result is the hypotenuse of a $45^\circ$ right-angle triangle, which is $\sqrt{2}$ times the unit amplitude; squaring it, we get twice the intensity of one oscillator alone. All the other cases can be worked out in this same way.
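This arithmetic is easy to check numerically. Here is a small Python sketch of Eqs. (29.16) and (29.17); the function name and the unit amplitudes are illustrative choices, not anything from the text:

```python
import math

def two_source_intensity(alpha, d_over_lambda, theta_deg, A1=1.0, A2=1.0):
    """Intensity of two oscillators via Eq. (29.16), with the phase
    difference at the observation point given by Eq. (29.17)."""
    theta = math.radians(theta_deg)
    phase = alpha + 2 * math.pi * d_over_lambda * math.sin(theta)
    return A1**2 + A2**2 + 2 * A1 * A2 * math.cos(phase)

# Two in-phase oscillators half a wavelength apart, observed at 30 degrees:
# d sin(theta) = lambda/4, the phase difference is pi/2, and I = 2.
print(two_source_intensity(alpha=0.0, d_over_lambda=0.5, theta_deg=30.0))
```

At $90^\circ$ the same call gives a phase difference of $\pi$ and the intensity drops to zero, as the qualitative discussion of the half-wavelength spacing requires.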
30 Diffraction

30–1 The resultant amplitude due to $\boldsymbol{n}$ equal oscillators
This chapter is a direct continuation of the previous one, although the name has been changed from Interference to Diffraction. No one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them. The best we can do, roughly speaking, is to say that when there are only a few sources, say two, interfering, then the result is usually called interference, but if there is a large number of them, it seems that the word diffraction is more often used. So, we shall not worry about whether it is interference or diffraction, but continue directly from where we left off in the middle of the subject in the last chapter. Thus we shall now discuss the situation where there are $n$ equally spaced oscillators, all of equal amplitude but different from one another in phase, either because they are driven differently in phase, or because we are looking at them at an angle such that there is a difference in time delay. For one reason or another, we have to add something like this: \begin{align} R=A[\cos\omega t&+\cos\,(\omega t+\phi)+\cos\,(\omega t+2\phi)+\notag\\ \dotsb&+\cos\,(\omega t+(n-1)\phi)], \label{Eq:I:30:1} \end{align} where $\phi$ is the phase difference between one oscillator and the next one, as seen in a particular direction. Specifically, $\phi = \alpha + 2\pi d\sin\theta/\lambda$. Now we must add all the terms together. We shall do this geometrically. The first one is of length $A$, and it has zero phase. The next is also of length $A$ and it has a phase equal to $\phi$. The next one is again of length $A$ and it has a phase equal to $2\phi$, and so on. So we are evidently going around an equiangular polygon with $n$ sides (Fig. 30–1).
Now the vertices, of course, all lie on a circle, and we can find the net amplitude most easily if we find the radius of that circle. Suppose that $Q$ is the center of the circle. Then we know that the angle $OQS$ is just a phase angle $\phi$. (This is because the radius $QS$ bears the same geometrical relation to $\FLPA_2$ as $QO$ bears to $\FLPA_1$, so they form an angle $\phi$ between them.) Therefore the radius $r$ must be such that $A= 2r \sin \phi/2$, which fixes $r$. But the large angle $OQT$ is equal to $n\phi$, and we thus find that $A_R = 2r \sin n\phi/2$. Combining these two results to eliminate $r$, we get \begin{equation} \label{Eq:I:30:2} A_R=A\,\frac{\sin n\phi/2}{\sin \phi/2}. \end{equation} The resultant intensity is thus \begin{equation} \label{Eq:I:30:3} I=I_0\,\frac{\sin^2 n\phi/2}{\sin^2 \phi/2}. \end{equation} Now let us analyze this expression and study some of its consequences. In the first place, we can check it for $n = 1$. It checks: $I = I_0$. Next, we check it for $n = 2$: writing $\sin \phi = 2 \sin \phi/2 \cos \phi/2$, we find that $A_R = 2A \cos \phi/2$, which agrees with (29.12). Now the idea that led us to consider the addition of several sources was that we might get a much stronger intensity in one direction than in another; that the nearby maxima which would have been present if there were only two sources will have gone down in strength. In order to see this effect, we plot the curve that comes from (30.3), taking $n$ to be enormously large and plotting the region near $\phi = 0$. In the first place, if $\phi$ is exactly $0$, we have $0/0$, but if $\phi$ is infinitesimal, the ratio of the two sines squared is simply $n^2$, since the sine and the angle are approximately equal. Thus the intensity of the maximum of the curve is equal to $n^2$ times the intensity of one oscillator. 
That is easy to see, because if they are all in phase, then the little vectors have no relative angle and all $n$ of them add up so the amplitude is $n$ times, and the intensity $n^2$ times, stronger. As the phase $\phi$ increases, the ratio of the two sines begins to fall off, and the first time it reaches zero is when $n\phi/2 =\pi$, because $\sin \pi = 0$. In other words, $\phi = 2\pi/n$ corresponds to the first minimum in the curve (Fig. 30–2). In terms of what is happening with the arrows in Fig. 30–1, the first minimum occurs when all the arrows come back to the starting point; that means that the total accumulated angle in all the arrows, the total phase difference between the first and last oscillator, must be $2\pi$ to complete the circle. Now we go to the next maximum, and we want to see that it is really much smaller than the first one, as we had hoped. We shall not go precisely to the maximum position, because both the numerator and the denominator of (30.3) are variant, but $\sin \phi/2$ varies quite slowly compared with $\sin n\phi/2$ when $n$ is large, so when $\sin^2 n\phi/2 = 1$ we are very close to the maximum. The next maximum of $\sin^2 n\phi/2$ comes at $n\phi/2 = 3\pi/2$, or $\phi = 3\pi/n$. This corresponds to the arrows having traversed the circle one and a half times. On putting $\phi = 3\pi/n$ into the formula to find the size of the maximum, we find that $\sin^2 3\pi/2 = 1$ in the numerator (because that is why we picked this angle), and in the denominator we have $\sin^2 3\pi/2n$. Now if $n$ is sufficiently large, then this angle is very small and the sine is equal to the angle; so for all practical purposes, we can put $\sin 3\pi/2n = 3\pi/2n$. Thus we find that the intensity at this maximum is $I= I_0(4n^2/9\pi^2)$. But $n^2I_0$ was the maximum intensity, and so we have $4/9\pi^2$ times the maximum intensity, which is about $0.045$, less than $5$ percent, of the maximum intensity! Of course there are decreasing intensities farther out. 
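These statements about formula (30.3) are straightforward to verify numerically. A quick Python sketch (the particular values of $n$ and $\phi$ below are arbitrary test points, not from the text):

```python
import cmath, math

def amplitude_direct(n, phi, A=1.0):
    """Add the n unit phasors of Eq. (30.1) directly (taken at t = 0)."""
    return abs(sum(A * cmath.exp(1j * k * phi) for k in range(n)))

def intensity_ratio(n, phi):
    """I / (n^2 I_0) from Eq. (30.3), so the central maximum equals 1."""
    return (math.sin(n * phi / 2) / (n * math.sin(phi / 2)))**2

# The closed form (30.2) agrees with the direct sum of arrows:
print(amplitude_direct(6, 0.7),
      abs(math.sin(6 * 0.7 / 2) / math.sin(0.7 / 2)))

n = 10_000
print(intensity_ratio(n, 2 * math.pi / n))  # first minimum: essentially zero
print(intensity_ratio(n, 3 * math.pi / n))  # first side lobe: about 0.045
```

The last line reproduces the $4/9\pi^2 \approx 0.045$ side-lobe intensity worked out in the text.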
So we have a very sharp central maximum with very weak subsidiary maxima on the sides. It is possible to prove that the area of the whole curve, including all the little bumps, is equal to $2\pi nI_0$, or twice the area of the dotted rectangle in Fig. 30–2. Now let us consider further how we may apply Eq. (30.3) in different circumstances, and try to understand what is happening. Let us consider our sources to be all on a line, as drawn in Fig. 30–3. There are $n$ of them, all spaced by a distance $d$, and we shall suppose that the intrinsic relative phase, one to the next, is $\alpha$. Then if we are observing in a given direction $\theta$ from the normal, there is an additional phase $2\pi d \sin \theta/\lambda$ because of the time delay between each successive two, which we talked about before. Thus \begin{equation} \begin{aligned} \phi&=\alpha+2\pi d\sin\theta/\lambda\\ &=\alpha+kd\sin\theta. \end{aligned} \label{Eq:I:30:4} \end{equation} First, we shall take the case $\alpha = 0$. That is, all oscillators are in phase, and we want to know what the intensity is as a function of the angle $\theta$. In order to find out, we merely have to put $\phi = kd \sin \theta$ into formula (30.3) and see what happens. In the first place, there is a maximum when $\phi = 0$. That means that when all the oscillators are in phase there is a strong intensity in the direction $\theta = 0$. On the other hand, an interesting question is, where is the first minimum? That occurs when $\phi = 2\pi/n$. In other words, when $2\pi d \sin \theta/\lambda = 2\pi/n$, we get the first minimum of the curve. If we get rid of the $2\pi$’s so we can look at it a little better, it says that \begin{equation} \label{Eq:I:30:5} nd\sin\theta=\lambda. \end{equation} Now let us understand physically why we get a minimum at that position. $nd$ is the total length $L$ of the array. Referring to Fig. 30–3, we see that $nd \sin \theta= L \sin \theta= \Delta$. 
What (30.5) says is that when $\Delta$ is equal to one wavelength, we get a minimum. Now why do we get a minimum when $\Delta = \lambda$? Because the contributions of the various oscillators are then uniformly distributed in phase from $0^\circ$ to $360^\circ$. The arrows (Fig. 30–1) are going around a whole circle—we are adding equal vectors in all directions, and such a sum is zero. So when we have an angle such that $\Delta = \lambda$, we get a minimum. That is the first minimum. There is another important feature about formula (30.3), which is that if the angle $\phi$ is increased by any multiple of $2\pi$, it makes no difference to the formula. So we will get other strong maxima at $\phi = 2\pi$, $4\pi$, $6\pi$, and so forth. Near each of these great maxima the pattern of Fig. 30–2 is repeated. We may ask ourselves, what is the geometrical circumstance that leads to these other great maxima? The condition is that $\phi = 2\pi m$, where $m$ is any integer. That is, $2\pi d \sin \theta/\lambda = 2\pi m$. Dividing by $2\pi$, we see that \begin{equation} \label{Eq:I:30:6} d\sin\theta=m\lambda. \end{equation} This looks like the other formula, (30.5). No, that formula was $nd \sin \theta= \lambda$. The difference is that here we have to look at the individual sources, and when we say $d \sin \theta= m\lambda$, that means that we have an angle $\theta$ such that $\delta = m\lambda$. In other words, each source is now contributing a certain amount, and successive ones are out of phase by a whole multiple of $360^\circ$, and therefore are contributing in phase, because out of phase by $360^\circ$ is the same as being in phase. So they all contribute in phase and produce just as good a maximum as the one for $m = 0$ that we discussed before. The subsidiary bumps, the whole shape of the pattern, is just like the one near $\phi = 0$, with exactly the same minima on each side, etc. 
Thus such an array will send beams in various directions—each beam having a strong central maximum and a certain number of weak “side lobes.” The various strong beams are referred to as the zero-order beam, the first-order beam, etc., according to the value of $m$. $m$ is called the order of the beam. We call attention to the fact that if $d$ is less than $\lambda$, Eq. (30.6) can have no solution except $m = 0$, so that if the spacing is too small there is only one possible beam, the zero-order one centered at $\theta = 0$. (Of course, there is also a beam in the opposite direction.) In order to get subsidiary great maxima, we must have the spacing $d$ of the array greater than one wavelength.
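The counting of beam orders from Eq. (30.6) can be sketched in a few lines of Python; the function name and the sample spacings are illustrative:

```python
import math

def beam_angles(d_over_lambda):
    """Directions (in degrees) of the principal maxima from Eq. (30.6),
    d sin(theta) = m lambda, for every order m with a real angle."""
    m_max = math.floor(d_over_lambda)
    return {m: math.degrees(math.asin(m / d_over_lambda))
            for m in range(-m_max, m_max + 1)}

# Spacing less than one wavelength: only the zero-order beam exists.
print(beam_angles(0.8))
# Spacing of 2.5 wavelengths: orders m = -2 ... 2 all appear.
print(beam_angles(2.5))
```

The first call returns only $m = 0$ at $\theta = 0$, matching the remark that for $d < \lambda$ the zero-order beam is the only one.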
30–2 The diffraction grating
In technical work with antennas and wires it is possible to arrange that all the phases of the little oscillators, or antennas, are equal. The question is whether and how we can do a similar thing with light. We cannot at the present time literally make little optical-frequency radio stations and hook them up with infinitesimal wires and drive them all with a given phase. But there is a very easy way to do what amounts to the same thing. Suppose that we had a lot of parallel wires, equally spaced at a spacing $d$, and a radiofrequency source very far away, practically at infinity, which is generating an electric field which arrives at each one of the wires at the same phase (it is so far away that the time delay is the same for all of the wires). (One can work out cases with curved arrays, but let us take a plane one.) Then the external electric field will drive the electrons up and down in each wire. That is, the field which is coming from the original source will shake the electrons up and down, and in moving, these represent new generators. This phenomenon is called scattering: a light wave from some source can induce a motion of the electrons in a piece of material, and these motions generate their own waves. Therefore all that is necessary is to set up a lot of wires, equally spaced, drive them with a radiofrequency source far away, and we have the situation that we want, without a whole lot of special wiring. If the incidence is normal, the phases will be equal, and we will get exactly the circumstance we have been discussing. Therefore, if the wire spacing is greater than the wavelength, we will get a strong intensity of scattering in the normal direction, and in certain other directions given by (30.6). This can also be done with light! Instead of wires, we use a flat piece of glass and make notches in it such that each of the notches scatters a little differently than the rest of the glass. 
If we then shine light on the glass, each one of the notches will represent a source, and if we space the lines very finely, but not closer than a wavelength (which is technically almost impossible anyway), then we would expect a miraculous phenomenon: the light not only will pass straight through, but there will also be a strong beam at a finite angle, depending on the spacing of the notches! Such objects have actually been made and are in common use—they are called diffraction gratings. In one of its forms, a diffraction grating consists of nothing but a plane glass sheet, transparent and colorless, with scratches on it. There are often several hundred scratches to the millimeter, very carefully arranged so as to be equally spaced. The effect of such a grating can be seen by arranging a projector so as to throw a narrow, vertical line of light (the image of a slit) onto a screen. When we put the grating into the beam, with its scratches vertical, we see that the line is still there but, in addition, on each side we have another strong patch of light which is colored. This, of course, is the slit image spread out over a wide angular range, because the angle $\theta$ in (30.6) depends upon $\lambda$, and lights of different colors, as we know, correspond to different frequencies, and therefore different wavelengths. The longest visible wavelength is red, and since $d \sin \theta=\lambda$, that requires a larger $\theta$. And we do, in fact, find that red is at a greater angle out from the central image! There should also be a beam on the other side, and indeed we see one on the screen. Then, there might be another solution of (30.6) when $m= 2$. We do see that there is something vaguely there—very weak—and there are even other beams beyond. We have just argued that all these beams ought to be of the same strength, but we see that they actually are not and, in fact, not even the first ones on the right and left are equal! 
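We can put rough numbers to the statement that red emerges at a greater angle than blue. The grating density below (500 lines per millimeter) and the wavelengths are illustrative values chosen for the sketch, not figures from the text:

```python
import math

lines_per_mm = 500           # a hypothetical grating
d = 1e-3 / lines_per_mm      # line spacing in meters (2 micrometers)

# First-order angles from d sin(theta) = lambda:
for name, lam in [("blue", 450e-9), ("red", 700e-9)]:
    theta = math.degrees(math.asin(lam / d))
    print(f"{name}: {theta:.1f} degrees")
```

Red comes out near $20^\circ$ and blue near $13^\circ$, so the slit image is indeed spread into a spectrum with red farthest from the central image.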
The reason is that the grating has been carefully built to do just this. How? If the grating consists of very fine notches, infinitesimally wide, spaced evenly, then all the intensities would indeed be equal. But, as a matter of fact, although we have taken the simplest case, we could also have considered an array of pairs of antennas, in which each member of the pair has a certain strength and some relative phase. In this case, it is possible to get intensities which are different in the different orders. A grating is often made with little “sawtooth” cuts instead of little symmetrical notches. By carefully arranging the “sawteeth,” more light may be sent into one particular order of spectrum than into the others. In a practical grating, we would like to have as much light as possible in one of the orders. This may seem a complicated point to bring in, but it is a very clever thing to do, because it makes the grating more useful. So far, we have taken the case where all the phases of the sources are equal. But we also have a formula for $\phi$ when the phases differ from one to the next by an angle $\alpha$. That requires wiring up our antennas with a slight phase shift between each one. Can we do that with light? Yes, we can do it very easily, for suppose that there were a source of light at infinity, at an angle such that the light is coming in at an angle $\theta_{\text{in}}$, and let us say that we wish to discuss the scattered beam, which is leaving at an angle $\theta_{\text{out}}$ (Fig. 30–4). The $\theta_{\text{out}}$ is the same $\theta$ as we have had before, but the $\theta_{\text{in}}$ is merely a means for arranging that the phase of each source is different: the light coming from the distant driving source first hits one scratch, then the next, then the next, and so on, with a phase shift from one to the other, which, as we see, is $\alpha = -2\pi d \sin \theta_{\text{in}}/\lambda$. 
Therefore we have the formula for a grating in which light both comes in and goes out at an angle: \begin{equation} \label{Eq:I:30:7} \phi=2\pi d\sin\theta_{\text{out}}/\lambda - 2\pi d \sin \theta_{\text{in}}/\lambda. \end{equation} Let us try to find out where we get strong intensity in these circumstances. The condition for strong intensities is, of course, that $\phi$ should be a multiple of $2\pi$. There are several interesting points to be noted. One case of rather great interest is that which corresponds to $m = 0$, where $d$ is less than $\lambda$; in fact, this is the only solution. In this case we see that $\sin\theta_{\text{out}} = \sin\theta_{\text{in}}$, which may mean that $\theta_{\text{out}}$ is the supplement of $\theta_{\text{in}}$ so the light comes out in the same direction as the light which was exciting the grating. We might think that the light “goes right through.” No, it is different light that we are talking about. The light that goes right through is from the original source; what we are talking about is the new light which is generated by scattering. It turns out that the scattered light is going in the same direction as the original light, in fact it can interfere with it—a feature which we will study later. There is another solution for this same case: $\theta_{\text{out}}$ may equal $\theta_{\text{in}}$. So not only do we get a beam in the same direction as the incoming beam but also one in another direction, such that the angle of incidence is equal to the angle of scattering. This we call the reflected beam. So we begin to understand the basic machinery of reflection: the light that comes in generates motions of the atoms in the reflector, and the reflector then regenerates a new wave, and one of the solutions for the direction of scattering, the only solution if the spacing of the scatterers is small compared with one wavelength, is that the angle at which the light comes out is equal to the angle at which it comes in! 
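Setting $\phi$ in Eq. (30.7) equal to $2\pi m$ and solving for the outgoing direction gives $\sin\theta_{\text{out}} = \sin\theta_{\text{in}} + m\lambda/d$. A short Python sketch of that bookkeeping (the function and the sample numbers are illustrative):

```python
import math

def outgoing_angles(theta_in_deg, d_over_lambda):
    """Outgoing directions (degrees) where phi in Eq. (30.7) is a
    multiple of 2 pi: sin(theta_out) = sin(theta_in) + m * lambda / d."""
    s_in = math.sin(math.radians(theta_in_deg))
    m_lo = math.ceil((-1 - s_in) * d_over_lambda)
    m_hi = math.floor((1 - s_in) * d_over_lambda)
    return {m: math.degrees(math.asin(s_in + m / d_over_lambda))
            for m in range(m_lo, m_hi + 1)}

# d < lambda: the only solution is m = 0, with sin(theta_out) = sin(theta_in),
# i.e. theta_out equal to theta_in or to its supplement (the reflected beam).
print(outgoing_angles(25.0, 0.7))
```

Since the arcsine returns only one of the two angles with the same sine, the supplement of each listed angle is always a solution as well, which is exactly the pair of beams discussed above.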
Next, we discuss the special case when $d \to 0$. That is, we have just a solid piece of material, so to speak, but of finite length. In addition, we want the phase shift from one scatterer to the next to go to zero. In other words, we put more and more antennas between the other ones, so that each of the phase differences is getting smaller, but the number of antennas is increasing in such a way that the total phase difference, between one end of the line and the other, is constant. Let us see what happens to (30.3) if we keep the difference in phase $n\phi$ from one end to the other constant (say $n\phi=\Phi$), letting the number go to infinity and the phase shift $\phi$ of each one go to zero. But now $\phi$ is so small that $\sin \phi = \phi$, and if we also recognize $n^2I_0$ as $I_m$, the maximum intensity at the center of the beam, we find \begin{equation} \label{Eq:I:30:8} I=4I_m\sin^2\tfrac{1}{2}\Phi/\Phi^2. \end{equation} This limiting case is what is shown in Fig. 30–2. In such circumstances we find the same general kind of a picture as for finite spacing with $d<\lambda$; all the side lobes are practically the same as before, but there are no higher-order maxima. If the scatterers are all in phase, we get a maximum in the direction $\theta_{\text{out}} = 0$, and a minimum when the distance $\Delta$ is equal to $\lambda$, just as for finite $d$ and $n$. So we can even analyze a continuous distribution of scatterers or oscillators, by using integrals instead of summing. As an example, suppose there were a long line of oscillators, with the charge oscillating along the direction of the line (Fig. 30–5). From such an array the greatest intensity is perpendicular to the line. There is a little bit of intensity up and down from the equatorial plane, but it is very slight. With this result, we can handle a more complicated situation. Suppose we have a set of such lines, each producing a beam only in a plane perpendicular to the line. 
To find the intensity in various directions from a series of long wires, instead of infinitesimal wires, is the same problem as it was for infinitesimal wires, so long as we are in the central plane perpendicular to the wires; we just add the contribution from each of the long wires. That is why, although we actually analyzed only tiny antennas, we might as well have used a grating with long, narrow slots. Each of the long slots produces an effect only in its own direction, not up and down, but they are all set next to each other horizontally, so they produce interference that way. Thus we can build up more complicated situations by having various distributions of scatterers in lines, planes, or in space. The first thing we did was to consider scatterers in a line, and we have just extended the analysis to strips; we can work it out by just doing the necessary summations, adding the contributions from the individual scatterers. The principle is always the same.
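Before leaving this section, the limiting formula (30.8) can be checked against the finite-$n$ formula (30.3) with the total phase $\Phi = n\phi$ held fixed; a Python sketch with arbitrary test values:

```python
import math

def finite_n(n, Phi):
    """Eq. (30.3) with the total phase Phi = n*phi held fixed,
    normalized to the central maximum I_m = n^2 I_0."""
    phi = Phi / n
    return (math.sin(n * phi / 2) / (n * math.sin(phi / 2)))**2

def limiting_form(Phi):
    """Eq. (30.8): I / I_m = 4 sin^2(Phi/2) / Phi^2."""
    return 4 * math.sin(Phi / 2)**2 / Phi**2

Phi = 5.0
# As n grows, the finite sum approaches the continuous limit:
print(finite_n(10, Phi), finite_n(10_000, Phi), limiting_form(Phi))
```

Already at $n = 10$ the two expressions are close, and by $n = 10{,}000$ they agree to many decimal places, which is the sense in which Fig. 30–2 is the $d \to 0$ limit.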
30–3 Resolving power of a grating
We are now in a position to understand a number of interesting phenomena. For example, consider the use of a grating for separating wavelengths. We noticed that the whole spectrum was spread out on the screen, so a grating can be used as an instrument for separating light into its different wavelengths. One of the interesting questions is: supposing that there were two sources of slightly different frequency, or slightly different wavelength, how close together in wavelength could they be such that the grating would be unable to tell that there were really two different wavelengths there? The red and the blue were clearly separated. But when one wave is red and the other is slightly redder, very close, how close can they be? This is called the resolving power of the grating, and one way of analyzing the problem is as follows. Suppose that for light of a certain color we happen to have the maximum of the diffracted beam occurring at a certain angle. If we vary the wavelength the phase $2\pi d \sin \theta/\lambda$ is different, so of course the maximum occurs at a different angle. That is why the red and blue are spread out. How different in angle must it be in order for us to be able to see it? If the two maxima are exactly on top of each other, of course we cannot see them. If the maximum of one is far enough away from the other, then we can see that there is a double bump in the distribution of light. In order to be able to just make out the double bump, the following simple criterion, called Rayleigh’s criterion, is usually used (Fig. 30–6). It is that the first minimum from one bump should sit at the maximum of the other. Now it is very easy to calculate, when one minimum sits on the other maximum, how much the difference in wavelength is. The best way to do it is geometrically. In order to have a maximum for wavelength $\lambda'$, the distance $\Delta$ (Fig. 30–3) must be $n\lambda'$, and if we are looking at the $m$th-order beam, it is $mn\lambda'$. 
In other words, $2\pi d \sin \theta/\lambda' = 2\pi m$, so $nd \sin \theta$, which is $\Delta$, is $m\lambda'$ times $n$, or $mn\lambda'$. For the other beam, of wavelength $\lambda$, we want to have a minimum at this angle. That is, we want $\Delta$ to be exactly one wavelength $\lambda$ more than $mn\lambda$. That is, $\Delta = mn\lambda + \lambda = mn\lambda'$. Thus if $\lambda' = \lambda+ \Delta\lambda$, we find \begin{equation} \label{Eq:I:30:9} \Delta\lambda/\lambda = 1/mn. \end{equation} The ratio $\lambda/\Delta\lambda$ is called the resolving power of a grating; we see that it is equal to the total number of lines in the grating, times the order. It is not hard to prove that this formula is equivalent to the formula that the error in frequency is equal to the reciprocal of the time difference between extreme paths that are allowed to interfere: \begin{equation*} \Delta\nu=1/T. \end{equation*} In fact, that is the best way to remember it, because the general formula works not only for gratings, but for any other instrument whatsoever, while the special formula (30.9) depends on the fact that we are using a grating.
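As an illustrative application of Eq. (30.9), not taken from the text: consider separating the two yellow sodium lines, whose wavelengths are $589.0$ and $589.6$ nm.

```python
# Eq. (30.9): to just resolve lambda and lambda + dlambda we need the
# product m*n (order times number of lines) at least lambda/dlambda.
lam, dlam = 589.0e-9, 0.6e-9      # the sodium doublet, in meters
required_mn = lam / dlam
print(required_mn)                # roughly 1000
```

So about a thousand grating lines suffice in first order, or about five hundred in second order, to see the doublet as two lines by Rayleigh's criterion.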
30–4 The parabolic antenna
Now let us consider another problem in resolving power. This has to do with the antenna of a radio telescope, used for determining the position of radio sources in the sky, i.e., how large they are in angle. Of course if we use any old antenna and find signals, we would not know from what direction they came. We are very interested to know whether the source is in one place or another. One way we can find out is to lay out a whole series of equally spaced dipole wires on the Australian landscape. Then we take all the wires from these antennas and feed them into the same receiver, in such a way that all the delays in the feed lines are equal. Thus the receiver receives signals from all of the dipoles in phase. That is, it adds all the waves from every one of the dipoles in the same phase. Now what happens? If the source is directly above the array, at infinity or nearly so, then its radiowaves will excite all the antennas in the same phase, so they all feed the receiver together. Now suppose that the radio source is at a slight angle $\theta$ from the vertical. Then the various antennas are receiving signals a little out of phase. The receiver adds all these out-of-phase signals together, and so we get nothing, if the angle $\theta$ is too big. How big may the angle be? Answer: we get zero if the angle $\Delta/L = \theta$ (Fig. 30–3) corresponds to a $360^\circ$ phase shift, that is, if $\Delta$ is the wavelength $\lambda$. This is because the vector contributions form together a complete polygon with zero resultant. The smallest angle that can be resolved by an antenna array of length $L$ is $\theta = \lambda/L$. Notice that the receiving pattern of an antenna such as this is exactly the same as the intensity distribution we would get if we turned the receiver around and made it into a transmitter. This is an example of what is called a reciprocity principle. 
It turns out, in fact, to be generally true for any arrangement of antennas, angles, and so on, that if we first work out what the relative intensities would be in various directions if the receiver were a transmitter instead, then the relative directional sensitivity of a receiver with the same external wiring, the same array of antennas, is the same as the relative intensity of emission would be if it were a transmitter. Some radio antennas are made in a different way. Instead of having a whole lot of dipoles in a long line, with a lot of feed wires, we may arrange them not in a line but in a curve, and put the receiver at a certain point where it can detect the scattered waves. This curve is cleverly designed so that if the radiowaves are coming down from above, and the wires scatter, making a new wave, the wires are so arranged that the scattered waves reach the receiver all at the same time (Fig. 26–12). In other words, the curve is a parabola, and when the source is exactly on its axis, we get a very strong intensity at the focus. In this case we understand very clearly what the resolving power of such an instrument is. The arranging of the antennas on a parabolic curve is not an essential point. It is only a convenient way to get all the signals to the same point with no relative delay and without feed wires. The angle such an instrument can resolve is still $\theta = \lambda/L$, where $L$ is the separation of the first and last antennas. It does not depend on the spacing of the antennas and they may be very close together or in fact be all one piece of metal. Now we are describing a telescope mirror, of course. We have found the resolving power of a telescope! (Sometimes the resolving power is written $\theta = 1.22\lambda/L$, where $L$ is the diameter of the telescope. 
The reason that it is not exactly $\lambda/L$ is this: when we worked out that $\theta = \lambda/L$, we assumed that all the lines of dipoles were equal in strength, but when we have a circular telescope, which is the way we usually arrange a telescope, not as much signal comes from the outside edges, because it is not like a square, where we get the same intensity all along a side. We get somewhat less because we are using only part of the telescope there; thus we can appreciate that the effective diameter is a little shorter than the true diameter, and that is what the $1.22$ factor tells us. In any case, it seems a little pedantic to put such precision into the resolving power formula.)
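To get a feeling for the size of $\theta = 1.22\lambda/L$, here is a one-line evaluation; the particular numbers (green light, a $10$ cm mirror) are illustrative, not from the text:

```python
import math

# Resolving power of a circular mirror, with the 1.22 factor from the text.
wavelength, diameter = 550e-9, 0.10       # meters: green light, 10 cm mirror
theta = 1.22 * wavelength / diameter      # radians
print(math.degrees(theta) * 3600, "arc seconds")   # about 1.4
```

A modest telescope thus resolves about an arc second and a half; doubling the diameter halves the resolvable angle.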
30–5 Colored films; crystals
The above, then, are some of the effects of interference obtained by adding the various waves. But there are a number of other examples, and even though we do not understand the fundamental mechanism yet, we will some day, and we can understand even now how the interference occurs. For example, when a light wave hits a surface of a material with an index $n$, let us say at normal incidence, some of the light is reflected. The reason for the reflection we are not in a position to understand right now; we shall discuss it later. But suppose we know that some of the light is reflected both on entering and leaving a refracting medium. Then, if we look at the reflection of a light source in a thin film, we see the sum of two waves; if the thicknesses are small enough, these two waves will produce an interference, either constructive or destructive, depending on the signs of the phases. It might be, for instance, that for red light, we get an enhanced reflection, but for blue light, which has a different wavelength, perhaps we get a destructively interfering reflection, so that we see a bright red reflection. If we change the thickness, i.e., if we look at another place where the film is thicker, it may be reversed, the red interfering and the blue not, so it is bright blue, or green, or yellow, or whatnot. So we see colors when we look at thin films and the colors change if we look at different angles, because we can appreciate that the timings are different at different angles. Thus we suddenly appreciate another hundred thousand situations involving the colors that we see on oil films, soap bubbles, etc. at different angles. But the principle is all the same: we are only adding waves at different phases. As another important application of diffraction, we may mention the following. We used a grating and we saw the diffracted image on the screen. If we had used monochromatic light, it would have been at a certain specific place. 
Then there were various higher-order images also. From the positions of the images, we could tell how far apart the lines on the grating were, if we knew the wavelength of the light. From the difference in intensity of the various images, we could find out the shape of the grating scratches, whether the grating was made of wires, sawtooth notches, or whatever, without being able to see them. This principle is used to discover the positions of the atoms in a crystal. The only complication is that a crystal is three-dimensional; it is a repeating three-dimensional array of atoms. We cannot use ordinary light, because we must use something whose wavelength is less than the space between the atoms or we get no effect; so we must use radiation of very short wavelength, i.e., x-rays. So, by shining x-rays into a crystal and by noticing how intense is the reflection in the various orders, we can determine the arrangement of the atoms inside without ever being able to see them with the eye! It is in this way that we know the arrangement of the atoms in various substances, which permitted us to draw those pictures in the first chapter, showing the arrangement of atoms in salt, and so on. We shall later come back to this subject and discuss it in more detail, and therefore we say no more about this most remarkable idea at present.
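As a tiny worked example of the first remark above, inverting Eq. (30.6) to measure the line spacing from the position of an image; the observed angle and wavelength here are made-up numbers for illustration:

```python
import math

# Say the first-order image of 546 nm light is observed at 15.8 degrees.
lam, m, theta_deg = 546e-9, 1, 15.8
d = m * lam / math.sin(math.radians(theta_deg))   # invert d sin(theta) = m lambda
print(d * 1e6, "micrometers between lines")       # about 2
```

The same inversion, with x-ray wavelengths in place of visible light, is how the atomic spacings in crystals are deduced from the diffraction orders.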
30–6 Diffraction by opaque screens
Now we come to a very interesting situation. Suppose that we have an opaque sheet with holes in it, and a light on one side of it. We want to know what the intensity is on the other side. What most people say is that the light shines through the holes, and produces an effect on the other side. It will turn out that one gets the right answer, to an excellent approximation, if he assumes that there are sources distributed with uniform density across the open holes, and that the phases of these sources are the same as they would have been if the opaque material were absent. Of course, actually there are no sources at the holes, in fact that is the only place that there are certainly no sources. Nevertheless, we get the correct diffraction patterns by considering the holes to be the only places that there are sources; that is a rather peculiar fact. We shall explain later why this is true, but for now let us just suppose that it is. In the theory of diffraction there is another kind of diffraction that we should briefly discuss. It is usually not discussed in an elementary course as early as this, only because the mathematical formulas involved in adding these little vectors are a little elaborate. Otherwise it is exactly the same as we have been doing all along. All the interference phenomena are the same; there is nothing very much more advanced involved, only the circumstances are more complicated and it is harder to add the vectors together, that is all. Suppose that we have light coming in from infinity, casting a shadow of an object. Figure 30–7 shows a screen on which the shadow of an object $AB$ is made by a light source very far away compared with one wavelength. Now we would expect that outside the shadow, the intensity is all bright, and inside it, it is all dark. 
As a matter of fact, if we plot the intensity as a function of position near the shadow edge, the intensity rises and then overshoots, and wobbles, and oscillates about in a very peculiar manner near this edge (Fig. 30–9). We now shall discuss the reason for this. If we use the theorem that we have not yet proved, then we can replace the actual problem by a set of effective sources uniformly distributed over the open space beyond the object. We imagine a large number of very closely spaced antennas, and we want the intensity at some point $P$. That looks just like what we have been doing. Not quite; because our screen is not at infinity. We do not want the intensity at infinity, but at a finite point. To calculate the intensity at some particular place, we have to add the contributions from all the antennas. First there is an antenna at $D$, exactly opposite $P$; if we go up a little bit in angle, let us say a height $h$, then there is an increase in delay (there is also a change in amplitude because of the change in distance, but this is a very small effect if we are at all far away, and is much less important than the difference in the phases). Now the path difference $EP - DP$ is approximately $h^2/2s$, so that the phase difference is proportional to the square of how far we go from $D$, while in our previous work $s$ was infinite, and the phase difference was linearly proportional to $h$. When the phases are linearly proportional, each vector adds at a constant angle to the next vector. What we now need is a curve which is made by adding a lot of infinitesimal vectors with the requirement that the angle they make shall increase, not linearly, but as the square of the length of the curve. To construct that curve involves slightly advanced mathematics, but we can always construct it by actually drawing the arrows and measuring the angles. In any case, we get the marvelous curve (called Cornu’s spiral) shown in Fig. 30–8. Now how do we use this curve? 
If we want the intensity, let us say, at point $P$, we add a lot of contributions of different phases from point $D$ on up to infinity, and from $D$ down only to point $B_P$. So we start at $B_P$ in Fig. 30–8, and draw a series of arrows of ever-increasing angle. Therefore the total contribution above point $B_P$ all goes along the spiraling curve. If we were to stop integrating at some place, then the total amplitude would be a vector from $B_P$ to that point; in this particular problem we are going to infinity, so the total answer is the vector $\FLPB_{P\infty}$. Now the position on the curve which corresponds to point $B_P$ on the object depends upon where point $P$ is located, since point $D$, the inflection point, always corresponds to the position of point $P$. Thus, depending upon where $P$ is located above $B$, the beginning point will fall at various positions on the lower left part of the curve, and the resultant vector $\FLPB_{P\infty}$ will have many maxima and minima (Fig. 30–9). On the other hand, if we are at $Q$, on the other side of $P$, then we are using only one end of the spiral curve, and not the other end. In other words, we do not even start at $D$, but at $B_Q$, so on this side we get an intensity which continuously falls off as $Q$ goes farther into the shadow. One point that we can immediately calculate with ease, to show that we really understand it, is the intensity exactly opposite the edge. The intensity here is $1/4$ that of the incident light. Reason: Exactly at the edge (so the endpoint $B_P$ of the arrow is at $D$ in Fig. 30–8) we have half the curve that we would have had if we were far into the bright region. If our point $R$ is far into the light we go from one end of the curve to the other, that is, one full unit vector; but if we are at the edge of the shadow, we have only half the amplitude—$1/4$ the intensity.
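The construction described above can be carried out numerically: add many little arrows whose angle grows as the square of the arc length, and the partial sums trace the Cornu spiral into its "eye." The step size and cutoff below are arbitrary numerical choices:

```python
import cmath
import math

# Add many small phasors whose phase grows as the *square* of the arc
# length (phase = pi*t^2/2, the Fresnel convention).  The partial sums
# spiral in toward the limit 0.5 + 0.5i, one eye of the Cornu spiral.
dt = 5e-4
T = 100.0
F = 0j
for k in range(int(T / dt)):
    t = (k + 0.5) * dt               # midpoint of each little arrow
    F += cmath.exp(1j * math.pi * t * t / 2) * dt

# Far in the bright region the amplitude runs from one eye of the spiral
# to the other (2*F, by symmetry); exactly at the shadow edge only half
# the spiral contributes (F).  Half the amplitude means 1/4 the intensity.
edge_over_incident = abs(F) ** 2 / abs(2 * F) ** 2
```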
In this chapter we have been finding the intensity produced in various directions from various distributions of sources. As a final example we shall derive a formula which we shall need for the next chapter on the theory of the index of refraction. Up to this point relative intensities have been sufficient for our purpose, but this time we shall find the complete formula for the field in the following situation.
30–7 The field of a plane of oscillating charges
Suppose that we have a plane full of sources, all oscillating together, with their motion in the plane and all having the same amplitude and phase. What is the field at a finite, but very large, distance away from the plane? (We cannot get very close, of course, because we do not have the right formulas for the field close to the sources.) If we let the plane of the charges be the $xy$-plane, then we want to find the field at the point $P$ far out on the $z$-axis (Fig. 30–10). We suppose that there are $\eta$ charges per unit area of the plane, and that each one of them has a charge $q$. All of the charges move with simple harmonic motion, with the same direction, amplitude, and phase. We let the motion of each charge, with respect to its own average position, be $x_0\cos\omega t$. Or, using the complex notation and remembering that the real part represents the actual motion, the motion can be described by $x_0e^{i\omega t}$. Now we find the field at the point $P$ from all of the charges by finding the field there from each charge $q$, and then adding the contributions from all the charges. We know that the radiation field is proportional to the acceleration of the charge, which is $-\omega^2x_0e^{i\omega t}$ (and is the same for every charge). The electric field that we want at the point $P$ due to a charge at the point $Q$ is proportional to the acceleration of the charge $q$, but we have to remember that the field at the point $P$ at the instant $t$ is given by the acceleration of the charge at the earlier time $t' = t - r/c$, where $r/c$ is the time it takes the waves to travel the distance $r$ from $Q$ to $P$. Therefore the field at $P$ is proportional to \begin{equation} \label{Eq:I:30:10} -\omega^2x_0e^{i\omega(t-r/c)}. 
\end{equation} Using this value for the acceleration as seen from $P$ in our formula for the electric field at large distances from a radiating charge, we get \begin{equation} \label{Eq:I:30:11} \begin{pmatrix} \text{Electric field at $P$}\\ \text{from charge at $Q$} \end{pmatrix} \approx\frac{q}{4\pi\epsO c^2}\, \frac{\omega^2x_0e^{i\omega(t-r/c)}}{r}. \end{equation} Now this formula is not quite right, because we should have used not the acceleration of the charge but its component perpendicular to the line $QP$. We shall suppose, however, that the point $P$ is so far away, compared with the distance of the point $Q$ from the axis (the distance $\rho$ in Fig. 30–10), for those charges that we need to take into account, that we can leave out the cosine factor (which would be nearly equal to $1$ anyway). To get the total field at $P$, we now add the effects of all the charges in the plane. We should, of course, make a vector sum. But since the direction of the electric field is nearly the same for all the charges, we may, in keeping with the approximation we have already made, just add the magnitudes of the fields. To our approximation the field at $P$ depends only on the distance $r$, so all charges at the same $r$ produce equal fields. So we add, first, the fields of those charges in a ring of width $d\rho$ and radius $\rho$. Then, by taking the integral over all $\rho$, we will obtain the total field. The number of charges in the ring is the product of the surface area of the ring, $2\pi\rho\,d\rho$, and $\eta$, the number of charges per unit area.
We have, then, \begin{equation} \label{Eq:I:30:12} \text{Total field at $P$} = \int\frac{q}{4\pi\epsO c^2}\, \frac{\omega^2x_0e^{i\omega(t-r/c)}}{r}\cdot\eta\cdot 2\pi\rho\,d\rho. \end{equation} We wish to evaluate this integral from $\rho = 0$ to $\rho = \infty$. The variable $t$, of course, is to be held fixed while we do the integral, so the only varying quantities are $\rho$ and $r$. Leaving out all the constant factors, including the factor $e^{i\omega t}$, for the moment, the integral we wish is \begin{equation} \label{Eq:I:30:13} \int_{\rho=0}^{\rho=\infty} \frac{e^{-i\omega r/c}}{r}\,\rho\,d\rho. \end{equation} To do this integral we need to use the relation between $r$ and $\rho$: \begin{equation} \label{Eq:I:30:14} r^2=\rho^2+z^2. \end{equation} Since $z$ is independent of $\rho$, when we take the differential of this equation, we get \begin{equation*} 2r\,dr=2\rho\,d\rho, \end{equation*} which is lucky, since in our integral we can replace $\rho\,d\rho$ by $r\,dr$ and the $r$ will cancel the one in the denominator. The integral we want is then the simpler one \begin{equation} \label{Eq:I:30:15} \int_{r=z}^{r=\infty} e^{-i\omega r/c}\,dr. \end{equation} To integrate an exponential is very easy. We divide by the coefficient of $r$ in the exponent and evaluate the exponential at the limits. But the limits of $r$ are not the same as the limits of $\rho$. When $\rho = 0$, we have $r = z$, so the limits of $r$ are $z$ to infinity. We get for the integral \begin{equation} \label{Eq:I:30:16} -\frac{c}{i\omega}[e^{-i\infty}-e^{-(i\omega/c)z}], \end{equation} where we have written $\infty$ for $(\omega/c)\infty$, since they both just mean a very large number! Now $e^{-i\infty}$ is a mysterious quantity.
Its real part, for example, is $\cos\,(-\infty)$, which, mathematically speaking, is completely indefinite (although we would expect it to be somewhere—or everywhere (?)—between $+1$ and $-1$!). But in a physical situation, it can mean something quite reasonable, and usually can just be taken to be zero. To see that this is so in our case, we go back to consider again the original integral (30.15). We can understand (30.15) as a sum of many small complex numbers, each of magnitude $\Delta r$, and with the angle $\theta = -\omega r/c$ in the complex plane. We can try to evaluate the sum by a graphical method. In Fig. 30–11 we have drawn the first five pieces of the sum. Each segment of the curve has the length $\Delta r$ and is placed at the angle $\Delta\theta = -\omega\,\Delta r/c$ with respect to the preceding piece. The sum for these first five pieces is represented by the arrow from the starting point to the end of the fifth segment. As we continue to add pieces we shall trace out a polygon until we get back to the starting point (approximately) and then start around once more. Adding more pieces, we just go round and round, staying close to a circle whose radius is easily shown to be $c/\omega$. We can see now why the integral does not give a definite answer! But now we have to go back to the physics of the situation. In any real situation the plane of charges cannot be infinite in extent, but must sometime stop. If it stopped suddenly, and was exactly circular in shape, our integral would have some value on the circle in Fig. 30–11. If, however, we let the number of charges in the plane gradually taper off at some large distance from the center (or else stop suddenly but in an irregular shape so for larger $\rho$ the entire ring of width $d\rho$ no longer contributes), then the coefficient $\eta$ in the exact integral would decrease toward zero. 
Since we are adding smaller pieces but still turning through the same angle, the graph of our integral would then become a curve which is a spiral. The spiral would eventually end up at the center of our original circle, as drawn in Fig. 30–12. The physically correct integral is the complex number $A$ in the figure represented by the interval from the starting point to the center of the circle, which is just equal to \begin{equation} \label{Eq:I:30:17} \frac{c}{i\omega}\,e^{-i\omega z/c}, \end{equation} as you can work out for yourself. This is the same result we would get from Eq. (30.16) if we set $e^{-i\infty} = 0$. (There is also another reason why the contribution to the integral tapers off for large values of $r$, and that is the factor we have omitted for the projection of the acceleration on the plane perpendicular to the line $PQ$.) We are, of course, interested only in physical situations, so we will take $e^{-i\infty}$ equal to zero. Returning to our original formula (30.12) for the field and putting back all of the factors that go with the integral, we have the result \begin{equation} \label{Eq:I:30:18} \text{Total field at $P$} = -\frac{\eta q}{2\epsO c}\, i\omega x_0e^{i\omega(t-z/c)} \end{equation} (remembering that $1/i = -i$).
It is interesting to note that ($i\omega x_0e^{i\omega t}$) is just equal to the velocity of the charges, so that we can also write the equation for the field as \begin{equation} \label{Eq:I:30:19} \text{Total field at $P$} = -\frac{\eta q}{2\epsO c}\, [\text{velocity of charges}]_{\text{at $t-z/c$}}, \end{equation} which is a little strange, because the retardation is just by the distance $z$, which is the shortest distance from $P$ to the plane of charges. But that is the way it comes out—fortunately a rather simple formula. (We may add, by the way, that although our derivation is valid only for distances far from the plane of oscillatory charges, it turns out that the formula (30.18) or (30.19) is correct at any distance $z$, even for $z < \lambda$.)
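The tapering argument can be checked numerically: multiply the integrand of (30.15) by a slowly decaying factor, and the sum of phasors, instead of circling forever, spirals into the value (30.17). Units with $c = 1$; $\omega$, $z$, and the taper rate below are illustrative choices, not from the text:

```python
import cmath

omega = 1.0   # frequency (units with c = 1); illustrative
z = 2.0       # distance from the plane; illustrative
eps = 0.01    # slow taper standing in for the charge density dying off
dr = 2e-3
R = 800.0     # cutoff; the taper has killed the integrand long before this

# Sum the phasors e^{-i*omega*r} dr from r = z outward, each one damped a
# little more than the last.
total = 0j
for k in range(int((R - z) / dr)):
    rm = z + (k + 0.5) * dr          # midpoint rule
    total += cmath.exp((-1j * omega - eps) * rm) * dr

# Eq. (30.17): the center of the circle, (c/(i*omega)) e^{-i*omega*z/c}
analytic = cmath.exp(-1j * omega * z) / (1j * omega)
```

As `eps` is made smaller the numerical sum creeps closer to the analytic value, which is what justifies setting $e^{-i\infty} = 0$.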
31 The Origin of the Refractive Index

31–1 The index of refraction
We have said before that light goes slower in water than in air, and slower, slightly, in air than in vacuum. This effect is described by the index of refraction $n$. Now we would like to understand how such a slower velocity could come about. In particular, we should try to see what the relation is to some physical assumptions, or statements, we made earlier, which were the following: That the total electric field in any physical circumstance can always be represented by the sum of the fields from all the charges in the universe. That the field from a single charge is given by its acceleration evaluated with a retardation at the speed $c$, always (for the radiation field). But, for a piece of glass, you might think: “Oh, no, you should modify all this. You should say it is retarded at the speed $c/n$.” That, however, is not right, and we have to understand why it is not. It is approximately true that light or any electrical wave does appear to travel at the speed $c/n$ through a material whose index of refraction is $n$, but the fields are still produced by the motions of all the charges—including the charges moving in the material—and with these basic contributions of the field travelling at the ultimate velocity $c$. Our problem is to understand how the apparently slower velocity comes about. We shall try to understand the effect in a very simple case. A source which we shall call “the external source” is placed a large distance away from a thin plate of transparent material, say glass. We inquire about the field at a large distance on the opposite side of the plate. The situation is illustrated by the diagram of Fig. 31–1, where $S$ and $P$ are imagined to be very far away from the plate. 
According to the principles we have stated earlier, an electric field anywhere that is far from all moving charges is the (vector) sum of the fields produced by the external source (at $S$) and the fields produced by each of the charges in the plate of glass, every one with its proper retardation at the velocity $c$. Remember that the contribution of each charge is not changed by the presence of the other charges. These are our basic principles. The field at $P$ can be written thus: \begin{equation} \label{Eq:I:31:1} \FLPE=\sum_{\text{all charges}}\FLPE_{\text{each charge}} \end{equation} or \begin{equation} \label{Eq:I:31:2} \FLPE=\FLPE_s+\sum_{\text{all other charges}}\FLPE_{\text{each charge}}, \end{equation} where $\FLPE_s$ is the field due to the source alone and would be precisely the field at $P$ if there were no material present. We expect the field at $P$ to be different from $\FLPE_s$ if there are any other moving charges. Why should there be charges moving in the glass? We know that all material consists of atoms which contain electrons. When the electric field of the source acts on these atoms it drives the electrons up and down, because it exerts a force on the electrons. And moving electrons generate a field—they constitute new radiators. These new radiators are related to the source $S$, because they are driven by the field of the source. The total field is not just the field of the source $S$, but it is modified by the additional contribution from the other moving charges. This means that the field is not the same as the one which was there before the glass was there, but is modified, and it turns out that it is modified in such a way that the field inside the glass appears to be moving at a different speed. That is the idea which we would like to work out quantitatively. Now this is, in the exact case, pretty complicated, because although we have said that all the other moving charges are driven by the source field, that is not quite true. 
If we think of a particular charge, it feels not only the source, but like anything else in the world, it feels all of the charges that are moving. It feels, in particular, the charges that are moving somewhere else in the glass. So the total field which is acting on a particular charge is a combination of the fields from the other charges, whose motions depend on what this particular charge is doing! You can see that it would take a complicated set of equations to get the complete and exact formula. It is so complicated that we postpone this problem until next year. Instead we shall work out a very simple case in order to understand all the physical principles very clearly. We take a circumstance in which the effects from the other atoms are very small relative to the effects from the source. In other words, we take a material in which the total field is not modified very much by the motion of the other charges. That corresponds to a material in which the index of refraction is very close to $1$, which will happen, for example, if the density of the atoms is very low. Our calculation will be valid for any case in which the index is for any reason very close to $1$. In this way we shall avoid the complications of the most general, complete solution. Incidentally, you should notice that there is another effect caused by the motion of the charges in the plate. These charges will also radiate waves back toward the source $S$. This backward-going field is the light we see reflected from the surfaces of transparent materials. It does not come from just the surface. The backward radiation comes from everywhere in the interior, but it turns out that the total effect is equivalent to a reflection from the surfaces. These reflection effects are beyond our approximation at the moment because we shall be limited to a calculation for a material with an index so close to $1$ that very little light is reflected. 
Before we proceed with our study of how the index of refraction comes about, we should understand that all that is required to understand refraction is to understand why the apparent wave velocity is different in different materials. The bending of light rays comes about just because the effective speed of the waves is different in the materials. To remind you how that comes about we have drawn in Fig. 31–2 several successive crests of an electric wave which arrives from a vacuum onto the surface of a block of glass. The arrow perpendicular to the wave crests indicates the direction of travel of the wave. Now all oscillations in the wave must have the same frequency. (We have seen that driven oscillations have the same frequency as the driving source.) This means, also, that the wave crests for the waves on both sides of the surface must have the same spacing along the surface because they must travel together, so that a charge sitting at the boundary will feel only one frequency. The shortest distance between crests of the wave, however, is the wavelength which is the velocity divided by the frequency. On the vacuum side it is $\lambda_0 = 2\pi c/\omega$, and on the other side it is $\lambda = 2\pi v/\omega$ or $2\pi c/\omega n$, if $v = c/n$ is the velocity of the wave. From the figure we can see that the only way for the waves to “fit” properly at the boundary is for the waves in the material to be travelling at a different angle with respect to the surface. From the geometry of the figure you can see that for a “fit” we must have $\lambda_0/\sin \theta_0 = \lambda/\sin \theta$, or $\sin\theta_0/\sin\theta= n$, which is Snell’s law. We shall, for the rest of our discussion, consider only why light has an effective speed of $c/n$ in material of index $n$, and no longer worry, in this chapter, about the bending of the light direction. We go back now to the situation shown in Fig. 31–1. 
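The crest-matching argument above can be verified with a short computation; the index, wavelength, and angle are illustrative values:

```python
import math

n = 1.5                        # index of the glass; illustrative
lambda0 = 600e-9               # vacuum wavelength
theta0 = math.radians(30.0)    # angle of incidence

lam = lambda0 / n                          # wavelength inside the glass
theta = math.asin(math.sin(theta0) / n)    # Snell's law for the new angle

# Crest spacing measured *along the surface* on each side of the boundary:
spacing_outside = lambda0 / math.sin(theta0)
spacing_inside = lam / math.sin(theta)
# the two spacings agree, so the crests join up at the boundary
```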
We see that what we have to do is to calculate the field produced at $P$ by all the oscillating charges in the glass plate. We shall call this part of the field $E_a$, and it is just the sum written as the second term in Eq. (31.2). When we add it to the term $E_s$, due to the source, we will have the total field at $P$. This is probably the most complicated thing that we are going to do this year, but it is complicated only in that there are many pieces that have to be put together; each piece, however, is very simple. Unlike other derivations where we say, “Forget the derivation, just look at the answer!,” in this case we do not need the answer so much as the derivation. In other words, the thing to understand now is the physical machinery for the production of the index. To see where we are going, let us first find out what the “correction field” $E_a$ would have to be if the total field at $P$ is going to look like radiation from the source that is slowed down while passing through the thin plate. If the plate had no effect on it, the field of a wave travelling to the right (along the $z$-axis) would be \begin{equation} \label{Eq:I:31:3} E_s=E_0\cos\omega(t-z/c) \end{equation} or, using the exponential notation, \begin{equation} \label{Eq:I:31:4} E_s=E_0e^{i\omega(t-z/c)}. \end{equation} Now what would happen if the wave travelled more slowly in going through the plate? Let us call the thickness of the plate $\Delta z$. If the plate were not there the wave would travel the distance $\Delta z$ in the time $\Delta z/c$. But if it appears to travel at the speed $c/n$ then it should take the longer time $n\,\Delta z/c$ or the additional time $\Delta t = (n - 1)\,\Delta z/c$. After that it would continue to travel at the speed $c$ again. We can take into account the extra delay in getting through the plate by replacing $t$ in Eq. (31.4) by $(t - \Delta t)$ or by $[t - (n - 1)\,\Delta z/c]$. 
So the wave after insertion of the plate should be written \begin{equation} \label{Eq:I:31:5} E_{\text{after plate}}=E_0e^{i\omega[t - (n - 1)\,\Delta z/c-z/c]}. \end{equation} We can also write this equation as \begin{equation} \label{Eq:I:31:6} E_{\text{after plate}}=e^{-i\omega(n-1)\,\Delta z/c} E_0e^{i\omega(t - z/c)}, \end{equation} which says that the wave after the plate is obtained from the wave which could exist without the plate, i.e., from $E_s$, by multiplying by the factor $e^{-i\omega(n-1)\,\Delta z/c}$. Now we know that multiplying an oscillating function like $e^{i\omega t}$ by a factor $e^{i\theta}$ just says that we change the phase of the oscillation by the angle $\theta$, which is, of course, what the extra delay in passing through the thickness $\Delta z$ has done. It has retarded the phase by the amount $\omega(n - 1)\,\Delta z/c$ (retarded, because of the minus sign in the exponent). We have said earlier that the plate should add a field $E_a$ to the original field $E_s = E_0e^{i\omega(t-z/c)}$, but we have found instead that the effect of the plate is to multiply the field by a factor which shifts its phase. However, that is really all right because we can get the same result by adding a suitable complex number. It is particularly easy to find the right number to add in the case that $\Delta z$ is small, for you will remember that if $x$ is a small number then $e^x$ is nearly equal to $(1 + x)$. We can write, therefore, \begin{equation} \label{Eq:I:31:7} e^{-i\omega(n-1)\,\Delta z/c}\approx1-i\omega(n-1)\,\Delta z/c. \end{equation} Using this approximation in Eq. (31.6), we have \begin{equation} \label{Eq:I:31:8} E_{\text{after plate}}= \underbrace{\vphantom{\frac{i}{c}}E_0 e^{i\omega(t-z/c)}}_{\displaystyle E_s}- \underbrace{\frac{i\omega(n-1)\,\Delta z}{c}\, E_0e^{i\omega(t-z/c)}}_{\displaystyle E_a}. 
\end{equation} The first term is just the field from the source, and the second term must just be equal to $E_a$, the field produced to the right of the plate by the oscillating charges of the plate—expressed here in terms of the index of refraction $n$, and depending, of course, on the strength of the wave from the source. What we have been doing is easily visualized if we look at the complex number diagram in Fig. 31–3. We first draw the number $E_s$ (we chose some values for $z$ and $t$ so that $E_s$ comes out horizontal, but this is not necessary). The delay due to slowing down in the plate would delay the phase of this number, that is, it would rotate $E_s$ through a negative angle. But this is equivalent to adding the small vector $E_a$ at roughly right angles to $E_s$. But that is just what the factor $-i$ means in the second term of Eq. (31.8). It says that if $E_s$ is real, then $E_a$ is negative imaginary or that, in general, $E_s$ and $E_a$ make a right angle.
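The approximation of Eq. (31.7), and the statement that $E_a$ sits at right angles to $E_s$, are easy to check numerically for a small assumed phase $\phi$:

```python
import cmath

phi = 0.05            # omega*(n-1)*dz/c, assumed small; illustrative
E_s = 1.0 + 0j        # pick E_s real, as in Fig. 31-3

exact = cmath.exp(-1j * phi) * E_s    # true effect of the plate: a phase delay
E_a = -1j * phi * E_s                 # the correction field of Eq. (31.8)
approx = E_s + E_a                    # delay represented as an added field

error = abs(exact - approx)           # of order phi^2, tiny for a thin plate
```

With `E_s` real, `E_a` comes out purely negative imaginary, the small perpendicular vector of the complex-number diagram.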
31–2 The field due to the material
We now have to ask: Is the field $E_a$ obtained in the second term of Eq. (31.8) the kind we would expect from oscillating charges in the plate? If we can show that it is, we will then have calculated what the index $n$ should be! [Since $n$ is the only nonfundamental number in Eq. (31.8).] We turn now to calculating what field $E_a$ the charges in the material will produce. (To help you keep track of the many symbols we have used up to now, and will be using in the rest of our calculation, we have put them all together in Table 31–1.) If the source $S$ (of Fig. 31–1) is far off to the left, then the field $E_s$ will have the same phase everywhere on the plate, so we can write that in the neighborhood of the plate \begin{equation} \label{Eq:I:31:9} E_s=E_0e^{i\omega(t-z/c)}. \end{equation} Right at the plate, where $z = 0$, we will have \begin{equation} \label{Eq:I:31:10} E_s=E_0e^{i\omega t}\text{ (at the plate)}. \end{equation} Each of the electrons in the atoms of the plate will feel this electric field and will be driven up and down (we assume the direction of $E_0$ is vertical) by the electric force $qE$. To find what motion we expect for the electrons, we will assume that the atoms are little oscillators, that is, that the electrons are fastened elastically to the atoms, which means that if a force is applied to an electron its displacement from its normal position will be proportional to the force. You may think that this is a funny model of an atom if you have heard about electrons whirling around in orbits. But that is just an oversimplified picture. The correct picture of an atom, which is given by the theory of wave mechanics, says that, so far as problems involving light are concerned, the electrons behave as though they were held by springs. So we shall suppose that the electrons have a linear restoring force which, together with their mass $m$, makes them behave like little oscillators, with a resonant frequency $\omega_0$. 
We have already studied such oscillators, and we know that the equation of their motion is written this way: \begin{equation} \label{Eq:I:31:11} m\biggl(\frac{d^2x}{dt^2}+\omega_0^2x\biggr)=F, \end{equation} where $F$ is the driving force. For our problem, the driving force comes from the electric field of the wave from the source, so we should use \begin{equation} \label{Eq:I:31:12} F=q_eE_s=q_eE_0e^{i\omega t}, \end{equation} where $q_e$ is the electric charge on the electron and for $E_s$ we use the expression $E_s = E_0e^{i\omega t}$ from (31.10). Our equation of motion for the electron is then \begin{equation} \label{Eq:I:31:13} m\biggl(\frac{d^2x}{dt^2}+\omega_0^2x\biggr)=q_eE_0e^{i\omega t}. \end{equation} We have solved this equation before, and we know that the solution is \begin{equation} \label{Eq:I:31:14} x=x_0e^{i\omega t}, \end{equation} where, by substituting in (31.13), we find that \begin{equation} \label{Eq:I:31:15} x_0=\frac{q_eE_0}{m(\omega_0^2-\omega^2)}, \end{equation} so that \begin{equation} \label{Eq:I:31:16} x=\frac{q_eE_0}{m(\omega_0^2-\omega^2)}\,e^{i\omega t}. \end{equation} We have what we needed to know—the motion of the electrons in the plate. And it is the same for every electron, except that the mean position (the “zero” of the motion) is, of course, different for each electron. Now we are ready to find the field $E_a$ that these atoms produce at the point $P$, because we have already worked out (at the end of Chapter 30) what field is produced by a sheet of charges that all move together. Referring back to Eq. (30.19), we see that the field $E_a$ at $P$ is just a negative constant times the velocity of the charges retarded in time by the amount $z/c$. Differentiating $x$ in Eq. 
(31.16) to get the velocity, and sticking in the retardation [or just putting $x_0$ from (31.15) into (30.18)] yields \begin{equation} \label{Eq:I:31:17} E_a=-\frac{\eta q_e}{2\epsO c}\biggl[ i\omega\,\frac{q_eE_0}{m(\omega_0^2-\omega^2)}\, e^{i\omega(t-z/c)}\biggr]. \end{equation} Just as we expected, the driven motion of the electrons produced an extra wave which travels to the right (that is what the factor $e^{i\omega(t-z/c)}$ says), and the amplitude of this wave is proportional to the number of atoms per unit area in the plate (the factor $\eta$) and also proportional to the strength of the source field (the factor $E_0$). Then there are some factors which depend on the atomic properties ($q_e$, $m$, and $\omega_0$), as we should expect. The most important thing, however, is that this formula (31.17) for $E_a$ looks very much like the expression for $E_a$ that we got in Eq. (31.8) by saying that the original wave was delayed in passing through a material with an index of refraction $n$. The two expressions will, in fact, be identical if \begin{equation} \label{Eq:I:31:18} (n-1)\,\Delta z=\frac{\eta q_e^2}{2\epsO m(\omega_0^2-\omega^2)}. \end{equation} Notice that both sides are proportional to $\Delta z$, since $\eta$, which is the number of atoms per unit area, is equal to $N\,\Delta z$, where $N$ is the number of atoms per unit volume of the plate. Substituting $N\,\Delta z$ for $\eta$ and cancelling the $\Delta z$, we get our main result, a formula for the index of refraction in terms of the properties of the atoms of the material—and of the frequency of the light: \begin{equation} \label{Eq:I:31:19} n=1+\frac{Nq_e^2}{2\epsO m(\omega_0^2-\omega^2)}. \end{equation} This equation gives the “explanation” of the index of refraction that we wished to obtain.
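As a quick plausibility check, Eq. (31.19) can be evaluated numerically. Here is a minimal Python sketch; the density $N$ and resonant frequency $\omega_0$ are assumed round numbers for a gas with an ultraviolet resonance, not values given in the text:

```python
# Rough numerical check of Eq. (31.19): n = 1 + N*qe^2/(2*eps0*m*(w0^2 - w^2)).
# The density N and resonance w0 below are assumed, illustrative values.

QE = 1.602e-19     # electron charge, C
ME = 9.109e-31     # electron mass, kg
EPS0 = 8.854e-12   # permittivity of free space, F/m

def index_of_refraction(N, w, w0):
    """Index of a dilute gas of single-resonance oscillators, Eq. (31.19)."""
    return 1.0 + N * QE**2 / (2.0 * EPS0 * ME * (w0**2 - w**2))

N = 2.7e25        # molecules per m^3 (a gas at standard conditions, assumed)
w0 = 2.0e16       # rad/s, an assumed ultraviolet resonance
w_red = 2.7e15    # rad/s, red light
w_blue = 4.0e15   # rad/s, blue light

n_red = index_of_refraction(N, w_red, w0)
n_blue = index_of_refraction(N, w_blue, w0)

# Since w0 >> w, n comes out only slightly above 1, and it is larger for
# blue light than for red -- the dispersion discussed in the next section.
assert 1.0 < n_red < n_blue
```

With these assumed numbers $n-1$ is of order $10^{-4}$, which is indeed the right order of magnitude for a gas.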
31 The Origin of the Refractive Index

31–3 Dispersion
Notice that in the above process we have obtained something very interesting. For we have not only a number for the index of refraction which can be computed from the basic atomic quantities, but we have also learned how the index of refraction should vary with the frequency $\omega$ of the light. This is something we would never understand from the simple statement that “light travels slower in a transparent material.” We still have the problem, of course, of knowing how many atoms per unit volume there are, and what is their natural frequency $\omega_0$. We do not know this just yet, because it is different for every different material, and we cannot get a general theory of that now. Formulation of a general theory of the properties of different substances—their natural frequencies, and so on—is possible only with quantum atomic mechanics. Also, different materials have different properties and different indexes, so we cannot expect, anyway, to get a general formula for the index which will apply to all substances. However, we shall discuss the formula we have obtained, in various possible circumstances. First of all, for most ordinary gases (for instance, for air, most colorless gases, hydrogen, helium, and so on) the natural frequencies of the electron oscillators correspond to ultraviolet light. These frequencies are higher than the frequencies of visible light, that is, $\omega_0$ is much larger than $\omega$ of visible light, and to a first approximation, we can disregard $\omega^2$ in comparison with $\omega_0^2$. Then we find that the index is nearly constant. So for a gas, the index is nearly constant. This is also true for most other transparent substances, like glass. If we look at our expression a little more closely, however, we notice that as $\omega$ rises, taking a little bit more away from the denominator, the index also rises. So $n$ rises slowly with frequency. The index is higher for blue light than for red light. 
That is the reason why a prism bends the light more in the blue than in the red. The phenomenon that the index depends upon the frequency is called the phenomenon of dispersion, because it is the basis of the fact that light is “dispersed” by a prism into a spectrum. The equation for the index of refraction as a function of frequency is called a dispersion equation. So we have obtained a dispersion equation. (In the past few years “dispersion equations” have been finding a new use in the theory of elementary particles.) Our dispersion equation suggests other interesting effects. If we have a natural frequency $\omega_0$ which lies in the visible region, or if we measure the index of refraction of a material like glass in the ultraviolet, where $\omega$ gets near $\omega_0$, we see that at frequencies very close to the natural frequency the index can get enormously large, because the denominator can go to zero. Next, suppose that $\omega$ is greater than $\omega_0$. This would occur, for example, if we take a material like glass, say, and shine x-ray radiation on it. In fact, since many materials which are opaque to visible light, like graphite for instance, are transparent to x-rays, we can also talk about the index of refraction of carbon for x-rays. All the natural frequencies of the carbon atoms would be much lower than the frequency we are using in the x-rays, since x-ray radiation has a very high frequency. The index of refraction is that given by our dispersion equation if we set $\omega_0$ equal to zero (we neglect $\omega_0^2$ in comparison with $\omega^2$). A similar situation would occur if we beam radiowaves (or light) on a gas of free electrons. In the upper atmosphere electrons are liberated from their atoms by ultraviolet light from the sun and they sit up there as free electrons. For free electrons $\omega_0 = 0$ (there is no elastic restoring force). 
Setting $\omega_0 = 0$ in our dispersion equation yields the correct formula for the index of refraction for radiowaves in the stratosphere, where $N$ is now to represent the density of free electrons (number per unit volume) in the stratosphere. But let us look again at the equation: if we beam x-rays on matter, or radiowaves (or any electric waves) on free electrons, the term $(\omega_0^2-\omega^2)$ becomes negative, and we obtain the result that $n$ is less than one. That means that the effective speed of the waves in the substance is faster than $c$! Can that be correct? It is correct. In spite of the fact that it is said that you cannot send signals any faster than the speed of light, it is nevertheless true that the index of refraction of materials at a particular frequency can be either greater or less than $1$. This just means that the phase shift which is produced by the scattered light can be either positive or negative. It can be shown, however, that the speed at which you can send a signal is not determined by the index at one frequency, but depends on what the index is at many frequencies. What the index tells us is the speed at which the nodes (or crests) of the wave travel. The node of a wave is not a signal by itself. In a perfect wave, which has no modulations of any kind, i.e., which is a steady oscillation, you cannot really say when it “starts,” so you cannot use it for a timing signal. In order to send a signal you have to change the wave somehow, make a notch in it, make it a little bit fatter or thinner. That means that you have to have more than one frequency in the wave, and it can be shown that the speed at which signals travel is not dependent upon the index alone, but upon the way that the index changes with the frequency. This subject we must also delay (until Chapter 48). 
Then we will calculate for you the actual speed of signals through such a piece of glass, and you will see that it will not be faster than the speed of light, although the nodes, which are mathematical points, do travel faster than the speed of light. Just to give a slight hint as to how that happens, you will note that the real difficulty has to do with the fact that the responses of the charges are opposite to the field, i.e., the sign has gotten reversed. Thus in our expression for $x$ (Eq. 31.16) the displacement of the charge is in the direction opposite to the driving field, because $(\omega_0^2 - \omega^2)$ is negative for small $\omega_0$. The formula says that when the electric field is pulling in one direction, the charge is moving in the opposite direction. How does the charge happen to be going in the opposite direction? It certainly does not start off in the opposite direction when the field is first turned on. When the motion first starts there is a transient, which settles down after awhile, and only then is the phase of the oscillation of the charge opposite to the driving field. And it is then that the phase of the transmitted field can appear to be advanced with respect to the source wave. It is this advance in phase which is meant when we say that the “phase velocity” or velocity of the nodes is greater than $c$. In Fig. 31–4 we give a schematic idea of how the waves might look for a case where the wave is suddenly turned on (to make a signal). You will see from the diagram that the signal (i.e., the start of the wave) is not earlier for the wave which ends up with an advance in phase. Let us now look again at our dispersion equation. We should remark that our analysis of the refractive index gives a result that is somewhat simpler than you would actually find in nature. To be completely accurate we must add some refinements. 
First, we should expect that our model of the atomic oscillator should have some damping force (otherwise once started it would oscillate forever, and we do not expect that to happen). We have worked out before (Eq. 23.8) the motion of a damped oscillator and the result is that the denominator in Eq. (31.16), and therefore in (31.19), is changed from $(\omega_0^2 - \omega^2)$ to $(\omega_0^2 - \omega^2 + i\gamma\omega)$, where $\gamma$ is the damping coefficient. We need a second modification to take into account the fact that there are several resonant frequencies for a particular kind of atom. It is easy to fix up our dispersion equation by imagining that there are several different kinds of oscillators, but that each oscillator acts separately, and so we simply add the contributions of all the oscillators. Let us say that there are $N_k$ electrons per unit of volume, whose natural frequency is $\omega_k$ and whose damping factor is $\gamma_k$. We would then have for our dispersion equation \begin{equation} \label{Eq:I:31:20} n=1+\frac{q_e^2}{2\epsO m} \sum_k\frac{N_k}{\omega_k^2-\omega^2+i\gamma_k\omega}. \end{equation} We have, finally, a complete expression which describes the index of refraction that is observed for many substances.1 The real part of the index described by this formula varies with frequency roughly like the curve shown in Fig. 31–5(a). You will note that so long as $\omega$ is not too close to one of the resonant frequencies, the slope of the curve is positive. Such a positive slope is called “normal” dispersion (because it is clearly the most common occurrence). Very near the resonant frequencies, however, there is a small range of $\omega$’s for which the slope is negative. Such a negative slope is often referred to as “anomalous” (meaning abnormal) dispersion, because it seemed unusual when it was first observed, long before anyone even knew there were such things as electrons. From our point of view both slopes are quite “normal”!
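The sum in Eq. (31.20) is easy to evaluate numerically. A sketch with a single assumed resonance (the values of $N_k$, $\omega_k$, and $\gamma_k$ are invented for illustration):

```python
# Sketch of the dispersion equation (31.20): with damping, n is complex.
# The resonance parameters (N_k, w_k, gamma_k) are invented for illustration.

QE, ME, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12

def complex_index(w, resonances):
    """n from Eq. (31.20); resonances is a list of (N_k, w_k, gamma_k)."""
    n = 1.0 + 0.0j
    prefactor = QE**2 / (2.0 * EPS0 * ME)
    for N_k, w_k, gamma_k in resonances:
        n += prefactor * N_k / (w_k**2 - w**2 + 1j * gamma_k * w)
    return n

resonances = [(2.7e25, 2.0e16, 1.0e13)]        # one assumed resonance

n_below = complex_index(1.9e16, resonances)    # just below the resonance
n_at    = complex_index(2.0e16, resonances)    # right at the resonance
n_above = complex_index(2.1e16, resonances)    # just above the resonance

# Real part: above 1 below resonance, below 1 above it (the "anomalous"
# region); imaginary part nonzero near resonance, with the sign n = n' - i*n''.
assert n_below.real > 1.0 and n_above.real < 1.0
assert n_at.imag < 0.0
```

Sweeping $\omega$ through the resonance with this function traces out the behavior of Fig. 31–5(a).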
31–4 Absorption
Perhaps you have noticed something a little strange about the last form (Eq. 31.20) we obtained for our dispersion equation. Because of the term $i\gamma$ we put in to take account of damping, the index of refraction is now a complex number! What does that mean? By working out what the real and imaginary parts of $n$ are we could write \begin{equation} \label{Eq:I:31:21} n=n'-in'', \end{equation} where $n'$ and $n''$ are real numbers. (We use the minus sign in front of the $in''$ because then $n''$ will turn out to be a positive number, as you can show for yourself.) We can see what such a complex index means when there is only one resonant frequency by going back to Eq. (31.6), which is the equation of the wave after it goes through a plate of material with an index $n$. If we put our complex $n$ into this equation, and do some rearranging, we get \begin{equation} \label{Eq:I:31:22} E_{\text{after plate}}= \underbrace{\vphantom{E_0}e^{-\omega n''\,\Delta z/c}}_{\text{A}} \underbrace{e^{-i\omega(n'-1)\,\Delta z/c} E_0e^{i\omega(t-z/c)}}_{\text{B}}. \end{equation} The last factors, marked B in Eq. (31.22), are just the form we had before, and again describe a wave whose phase has been delayed by the angle $\omega(n' - 1)\,\Delta z/c$ in traversing the material. The first term (A) is new and is an exponential factor with a real exponent, because there were two $i$’s that cancelled. Also, the exponent is negative, so the factor is a real number less than one. It describes a decrease in the magnitude of the field and, as we should expect, by an amount which is more the larger $\Delta z$ is. As the wave goes through the material, it is weakened. The material is “absorbing” part of the wave. The wave comes out the other side with less energy. We should not be surprised at this, because the damping we put in for the oscillators is indeed a friction force and must be expected to cause a loss of energy. 
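The size of the factor A in Eq. (31.22) is easy to compute. A sketch with assumed values of $\omega$, $n''$, and $\Delta z$ (none of these numbers come from the text):

```python
# Sketch of the absorption factor A in Eq. (31.22): with n = n' - i*n'',
# the field after a thickness dz is multiplied by exp(-w * n'' * dz / c).
import math

C = 2.998e8          # speed of light, m/s

def attenuation(w, n_imag, dz):
    """Factor A of Eq. (31.22): a real number, less than 1 when n'' > 0."""
    return math.exp(-w * n_imag * dz / C)

w = 3.0e15           # rad/s (assumed optical frequency)
n_pp = 1.0e-6        # assumed small absorption index n''
A1 = attenuation(w, n_pp, 1.0e-3)   # through 1 mm of material
A2 = attenuation(w, n_pp, 2.0e-3)   # through 2 mm

assert 0.0 < A2 < A1 < 1.0          # a thicker plate absorbs more
assert abs(A2 - A1**2) < 1e-12      # exponential law: doubling dz squares A
```

The second assertion expresses the essential point: absorption compounds exponentially with thickness, which is why a screen thick enough becomes completely opaque.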
We see that the imaginary part $n''$ of a complex index of refraction represents an absorption (or “attenuation”) of the wave. In fact, $n''$ is sometimes referred to as the “absorption index.” We may also point out that an imaginary part to the index $n$ corresponds to bending the arrow $E_a$ in Fig. 31–3 toward the origin. It is clear why the transmitted field is then decreased. Normally, for instance as in glass, the absorption of light is very small. This is to be expected from our Eq. (31.20), because the imaginary part of the denominator, $i\gamma_k\omega$, is much smaller than the term $(\omega_k^2 - \omega^2)$. But if the light frequency $\omega$ is very close to $\omega_k$ then the resonance term $(\omega_k^2 - \omega^2)$ can become small compared with $i\gamma_k\omega$ and the index becomes almost completely imaginary, as shown in Fig. 31–5(b). The absorption of the light becomes the dominant effect. It is just this effect that gives the dark lines in the spectrum of light which we receive from the sun. The light from the solar surface has passed through the sun’s atmosphere (as well as the earth’s), and the light has been strongly absorbed at the resonant frequencies of the atoms in the solar atmosphere. The observation of such spectral lines in the sunlight allows us to tell the resonant frequencies of the atoms and hence the chemical composition of the sun’s atmosphere. The same kind of observations tell us about the materials in the stars. From such measurements we know that the chemical elements in the sun and in the stars are the same as those we find on the earth.
31–5 The energy carried by an electric wave
We have seen that the imaginary part of the index means absorption. We shall now use this knowledge to find out how much energy is carried by a light wave. We have given earlier an argument that the energy carried by light is proportional to $\overline{E^2}$, the time average of the square of the electric field in the wave. The decrease in $E$ due to absorption must mean a loss of energy, which would go into some friction of the electrons and, we might guess, would end up as heat in the material. If we consider the light arriving on a unit area, say one square centimeter, of our plate in Fig. 31–1, then we can write the following energy equation (if we assume that energy is conserved, as we do!): \begin{equation} \label{Eq:I:31:23} \text{Energy in per sec} = \text{energy out per sec} + \text{work done per sec}. \end{equation} For the first term we can write $\alpha\overline{E_s^2}$, where $\alpha$ is the as yet unknown constant of proportionality which relates the average value of $E^2$ to the energy being carried. For the second term we must include the part from the radiating atoms of the material, so we should use $\alpha\overline{(E_s+E_a)^2}$, or (evaluating the square) $\alpha(\overline{E_s^2} + 2\overline{E_sE_a} + \overline{E_a^2})$. All of our calculations have been made for a thin layer of material whose index is not too far from $1$, so that $E_a$ would always be much less than $E_s$ (just to make the calculations easier). In keeping with our approximations, we should, therefore, leave out the term $\overline{E_a^2}$, because it is much smaller than $\overline{E_sE_a}$. 
You may say: “Then you should leave out $\overline{E_sE_a}$ also, because it is much smaller than $\overline{E_s^2}$.” It is true that $\overline{E_sE_a}$ is much smaller than $\overline{E_s^2}$, but we must keep $\overline{E_sE_a}$ or our approximation will be the one that would apply if we neglected the presence of the material completely! One way of checking that our calculations are consistent is to see that we always keep terms which are proportional to $N\,\Delta z$, the area density of atoms in the material, but we leave out terms which are proportional to $(N\,\Delta z)^2$ or any higher power of $N\,\Delta z$. Ours is what should be called a “low-density approximation.” In the same spirit, we might remark that our energy equation has neglected the energy in the reflected wave. But that is OK because this term, too, is proportional to $(N\,\Delta z)^2$, since the amplitude of the reflected wave is proportional to $N\,\Delta z$. For the last term in Eq. (31.23) we wish to compute the rate at which the incoming wave is doing work on the electrons. We know that work is force times distance, so the rate of doing work (also called power) is the force times the velocity. It is really $\FLPF\cdot\FLPv$, but we do not need to worry about the dot product when the velocity and force are along the same direction as they are here (except for a possible minus sign). So for each atom we take $\overline{q_eE_sv}$ for the average rate of doing work. Since there are $N\,\Delta z$ atoms in a unit area, the last term in Eq. (31.23) should be $N\,\Delta z\,q_e\overline{E_sv}$. Our energy equation now looks like \begin{equation} \label{Eq:I:31:24} \alpha\overline{E_s^2}=\alpha\overline{E_s^2}+ 2\alpha\overline{E_sE_a}+ N\,\Delta z\,q_e\overline{E_sv}. \end{equation} The $\overline{E_s^2}$ terms cancel, and we have \begin{equation} \label{Eq:I:31:25} 2\alpha\overline{E_sE_a}= -N\,\Delta z\,q_e\overline{E_sv}. \end{equation} We now go back to Eq. 
(30.19), which tells us that for large $z$ \begin{equation} \label{Eq:I:31:26} E_a=-\frac{N\,\Delta z\,q_e}{2\epsO c}\,v(\text{ret by $z/c$}) \end{equation} (recalling that $\eta=N\,\Delta z$). Putting Eq. (31.26) into the left-hand side of (31.25), we get \begin{equation*} -2\alpha\,\frac{N\,\Delta z\,q_e}{2\epsO c}\, \overline{E_s(\text{at $z$})\cdot v(\text{ret by $z/c$})}. \end{equation*} However, $E_s(\text{at \(z\)})$ is $E_s(\text{at atoms})$ retarded by $z/c$. Since the average is independent of time, it is the same now as retarded by $z/c$, or is $\overline{E_s(\text{at atoms})\cdot v}$, the same average that appears on the right-hand side of (31.25). The two sides are therefore equal if \begin{equation} \label{Eq:I:31:27} \frac{\alpha}{\epsO c}=1,\quad \text{or}\quad \alpha=\epsO c. \end{equation} We have discovered that if energy is to be conserved, the energy carried in an electric wave per unit area and per unit time (or what we have called the intensity) must be given by $\epsO c\overline{E^2}$. If we call the intensity $\overline{S}$, we have \begin{equation} \label{Eq:I:31:28} \overline{S}= \begin{Bmatrix} \text{intensity}\\ \text{or}\\ \text{energy/area/time} \end{Bmatrix} =\epsO c\overline{E^2}, \end{equation} where the bar means the time average. We have a nice bonus result from our theory of the refractive index!
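As an illustration of Eq. (31.28), the formula can be inverted to estimate the field amplitude of a wave of given intensity. In this sketch the 1000 W/m² figure is an assumed round number (roughly bright sunlight), not a value from the text:

```python
# Sketch of Eq. (31.28): intensity = eps0*c*<E^2>.  For E = E0*cos(wt) the
# time average of E^2 is E0^2/2, so the intensity is eps0*c*E0^2/2.

EPS0, C = 8.854e-12, 2.998e8

def intensity(E0):
    """Average power per unit area (W/m^2) of a plane wave of amplitude E0."""
    return EPS0 * C * E0**2 / 2.0

# Inverting for an assumed intensity of 1000 W/m^2 gives a field amplitude
# of order a thousand volts per meter.
E0 = (2.0 * 1000.0 / (EPS0 * C)) ** 0.5
assert abs(intensity(E0) - 1000.0) < 1e-9
assert 500.0 < E0 < 2000.0
```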
31–6 Diffraction of light by a screen
It is now a good time to take up a somewhat different matter which we can handle with the machinery of this chapter. In the last chapter we said that when you have an opaque screen and the light can come through some holes, the distribution of intensity—the diffraction pattern—could be obtained by imagining instead that the holes are replaced by sources (oscillators) uniformly distributed over the hole. In other words, the diffracted wave is the same as though the hole were a new source. We have to explain the reason for that, because the hole is, of course, just where there are no sources, where there are no accelerating charges. Let us first ask: “What is an opaque screen?” Suppose we have a completely opaque screen between a source $S$ and an observer at $P$, as in Fig. 31–6(a). If the screen is “opaque” there is no field at $P$. Why is there no field there? According to the basic principles we should obtain the field at $P$ as the field $E_s$ of the source delayed, plus the field from all the other charges around. But, as we have seen above, the charges in the screen will be set in motion by the field $E_s$, and these motions generate a new field which, if the screen is opaque, must exactly cancel the field $E_s$ on the back side of the screen. You say: “What a miracle that it balances exactly! Suppose it was not exactly right!” If it were not exactly right (remember that this opaque screen has some thickness), the field toward the rear part of the screen would not be exactly zero. So, not being zero, it would set into motion some other charges in the material of the screen, and thus make a little more field, trying to get the total balanced out. So if we make the screen thick enough, there is no residual field, because there is enough opportunity to finally get the thing quieted down. In terms of our formulas above we would say that the screen has a large and imaginary index, so the wave is absorbed exponentially as it goes through. 
You know, of course, that a thin enough sheet of the most opaque material, even gold, is transparent. Now let us see what happens with an opaque screen which has holes in it, as in Fig. 31–6(b). What do we expect for the field at $P$? The field at $P$ can be represented as a sum of two parts—the field due to the source $S$ plus the field due to the wall, i.e., due to the motions of the charges in the walls. We might expect the motions of the charges in the walls to be complicated, but we can find out what fields they produce in a rather simple way. Suppose that we were to take the same screen, but plug up the holes, as indicated in part (c) of the figure. We imagine that the plugs are of exactly the same material as the wall. Mind you, the plugs go where the holes were in case (b). Now let us calculate the field at $P$. The field at $P$ is certainly zero in case (c), but it is also equal to the field from the source plus the field due to all the motions of the atoms in the walls and in the plugs. We can write the following equations: \begin{alignat*}{3} &\textit{Case (b):}&& \quad &E_{\text{at $P$}}&\;= E_s + E_{\text{wall}},\\[1ex] &\textit{Case (c):}&& \quad &E_{\text{at $P$}}'&\;= 0 = E_s + E_{\text{wall}}' + E_{\text{plug}}', \end{alignat*} where the primes refer to the case where the plugs are in place, but $E_s$ is, of course, the same in both cases. Now if we subtract the two equations, we get \begin{equation*} E_{\text{at $P$}}=(E_{\text{wall}}-E_{\text{wall}}')-E_{\text{plug}}'. \end{equation*} Now if the holes are not too small (say many wavelengths across), we would not expect the presence of the plugs to change the fields which arrive at the walls except possibly for a little bit around the edges of the holes. Neglecting this small effect, we can set $E_{\text{wall}}=E_{\text{wall}}'$ and obtain that \begin{equation*} E_{\text{at $P$}}=-E_{\text{plug}}'. 
\end{equation*} We have the result that the field at $P$ when there are holes in a screen (case b) is the same (except for sign) as the field that is produced by that part of a complete opaque wall which is located where the holes are! (The sign is not too interesting, since we are usually interested in intensity which is proportional to the square of the field.) It seems like an amazing backwards-forwards argument. It is, however, not only true (approximately for not too small holes), but useful, and is the justification for the usual theory of diffraction. The field $E_{\text{plug}}'$ is computed in any particular case by remembering that the motion of the charges everywhere in the screen is just that which will cancel out the field $E_s$ on the back of the screen. Once we know these motions, we add the radiation fields at $P$ due just to the charges in the plugs. We remark again that this theory of diffraction is only approximate, and will be good only if the holes are not too small. For holes which are too small the $E_{\text{plug}}'$ term will be small and then the difference between $E_{\text{wall}}'$ and $E_{\text{wall}}$ (which difference we have taken to be zero) may be comparable to or larger than the small $E_{\text{plug}}'$ term, and our approximation will no longer be valid.
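The subtraction argument above is pure bookkeeping with complex amplitudes, and a toy model shows it directly. In this sketch the complete wall is ten invented radiators scaled so that their summed field at $P$ cancels $E_s$ (that is what "opaque" means); removing the "plug" elements then leaves exactly $-E_{\text{plug}}'$:

```python
# Toy sketch of the plug argument.  All numbers are invented for
# illustration; the only physics is that the complete wall cancels E_s.
import cmath

Es = 1.0 + 0.0j                      # source field at P (normalized)

# Invent per-element fields for a complete wall of 10 elements, scaled so
# that their total exactly cancels Es -- case (c), the plugged screen:
raw = [cmath.exp(1j * 0.3 * k) for k in range(10)]
scale = -Es / sum(raw)
wall_full = [scale * r for r in raw]
assert abs(Es + sum(wall_full)) < 1e-12   # zero field behind the screen

# Now open a "hole" by removing elements 3..6 (these were the plugs):
plugs = wall_full[3:7]
wall_with_hole = wall_full[:3] + wall_full[7:]

E_at_P = Es + sum(wall_with_hole)         # case (b), screen with a hole
# The field equals minus the plug field, as the subtraction argument says:
assert abs(E_at_P - (-sum(plugs))) < 1e-12
```

Of course, in the real problem the approximation enters when we assume the wall charges move the same way with and without the plugs; in this toy model that assumption is built in exactly.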
32 Radiation Damping. Light Scattering

32–1 Radiation resistance
In the last chapter we learned that when a system is oscillating, energy is carried away, and we deduced a formula for the energy which is radiated by an oscillating system. If we know the electric field, then the average of the square of the field times $\epsO c$ is the amount of energy that passes per square meter per second through a surface normal to the direction in which the radiation is going: \begin{equation} \label{Eq:I:32:1} S=\epsO c\avg{E^2}. \end{equation} Any oscillating charge radiates energy; for instance, a driven antenna radiates energy. If the system radiates energy, then in order to account for the conservation of energy we must find that power is being delivered along the wires which lead into the antenna. That is, to the driving circuit the antenna acts like a resistance, or a place where energy can be “lost” (the energy is not really lost, it is really radiated out, but so far as the circuit is concerned, the energy is lost). In an ordinary resistance, the energy which is “lost” passes into heat; in this case the energy which is “lost” goes out into space. But from the standpoint of circuit theory, without considering where the energy goes, the net effect on the circuit is the same—energy is “lost” from that circuit. Therefore the antenna appears to the generator as having a resistance, even though it may be made with perfectly good copper. In fact, if it is well built it will appear as almost a pure resistance, with very little inductance or capacitance, because we would like to radiate as much energy as possible out of the antenna. This resistance that an antenna shows is called the radiation resistance. If a current $I$ is going to the antenna, then the average rate at which power is delivered to the antenna is the average of the square of the current times the resistance. 
The rate at which power is radiated by the antenna is proportional to the square of the current in the antenna, of course, because all the fields are proportional to the currents, and the energy liberated is proportional to the square of the field. The coefficient of proportionality between radiated power and $\avg{I^2}$ is the radiation resistance. An interesting question is, what is this radiation resistance due to? Let us take a simple example: let us say that currents are driven up and down in an antenna. We find that we have to put work in, if the antenna is to radiate energy. If we take a charged body and accelerate it up and down it radiates energy; if it were not charged it would not radiate energy. It is one thing to calculate from the conservation of energy that energy is lost, but another thing to answer the question, against what force are we doing the work? That is an interesting and very difficult question which has never been completely and satisfactorily answered for electrons, although it has been for antennas. What happens is this: in an antenna, the fields produced by the moving charges in one part of the antenna react on the moving charges in another part of the antenna. We can calculate these forces and find out how much work they do, and so find the right rule for the radiation resistance. When we say “We can calculate—” that is not quite right—we cannot, because we have not yet studied the laws of electricity at short distances; only at large distances do we know what the electric field is. We saw the formula (28.3), but at present it is too complicated for us to calculate the fields inside the wave zone. Of course, since conservation of energy is valid, we can calculate the result all right without knowing the fields at short distances. 
(As a matter of fact, by using this argument backwards it turns out that one can find the formula for the forces at short distances only by knowing the field at very large distances, by using the laws of conservation of energy, but we shall not go into that here.) The problem in the case of a single electron is this: if there is only one charge, what can the force act on? It has been proposed, in the old classical theory, that the charge was a little ball, and that one part of the charge acted on the other part. Because of the delay in the action across the tiny electron, the force is not exactly in phase with the motion. That is, if we have the electron standing still, we know that “action equals reaction.” So the various internal forces are equal, and there is no net force. But if the electron is accelerating, then because of the time delay across it, the force which is acting on the front from the back is not exactly the same as the force on the back from the front, because of the delay in the effect. This delay in the timing makes for a lack of balance, so, as a net effect, the thing holds itself back by its bootstraps! This model of the origin of the resistance to acceleration, the radiation resistance of a moving charge, has run into many difficulties, because our present view of the electron is that it is not a “little ball”; this problem has never been solved. Nevertheless we can calculate exactly, of course, what the net radiation resistance force must be, i.e., how much loss there must be when we accelerate a charge, in spite of not knowing directly the mechanism of how that force works.
32–2 The rate of radiation of energy
Now we shall calculate the total energy radiated by an accelerating charge. To keep the discussion general, we shall take the case of a charge accelerating any which way, but nonrelativistically. At a moment when the acceleration is, say, vertical, we know that the electric field that is generated is the charge multiplied by the projection of the retarded acceleration, divided by the distance. So we know the electric field at any point, and we therefore know the square of the electric field and thus the energy $\epsO cE^2$ leaving through a unit area per second. The quantity $\epsO c$ appears quite often in expressions involving radiowave propagation. Its reciprocal is called the impedance of a vacuum, and it is an easy number to remember: it has the value $1/\epsO c = 377$ ohms. So the power in watts per square meter is equal to the average of the field squared, divided by $377$. Using our expression (29.1) for the electric field, we find that \begin{equation} \label{Eq:I:32:2} S=\frac{q^2a'^2\sin^2\theta}{16\pi^2\epsO r^2c^3} \end{equation} is the power per square meter radiated in the direction $\theta$. We notice that it goes inversely as the square of the distance, as we said before. Now suppose we wanted the total energy radiated in all directions: then we must integrate (32.2) over all directions. First we multiply by the area, to find the amount that flows within a little angle $d\theta$ (Fig. 32–1). We need the area of a spherical section. The way to think of it is this: if $r$ is the radius, then the width of the annular segment is $r\,d\theta$, and the circumference is $2\pi r \sin \theta$, because $r\sin\theta$ is the radius of the circle. So the area of the little piece of the sphere is $2\pi r \sin \theta$ times $r\,d\theta$: \begin{equation} \label{Eq:I:32:3} dA=2\pi r^2\sin\theta\,d\theta. 
\end{equation} By multiplying the flux [(32.2), the power per square meter] by the area in square meters included in the small angle $d\theta$, we find the amount of energy that is liberated in this direction between $\theta$ and $\theta + d\theta$; then we integrate that over all the angles $\theta$ from $0$ to $180^\circ$: \begin{equation} \label{Eq:I:32:4} P=\int S\,dA=\frac{q^2a'^2}{8\pi\epsO c^3} \int_0^\pi\sin^3\theta\,d\theta. \end{equation} By writing $\sin^3 \theta=(1 - \cos^2\theta)\sin\theta$ it is not hard to show that $\int_0^\pi\sin^3\theta\,d\theta=4/3$. Using that fact, we finally get \begin{equation} \label{Eq:I:32:5} P=\frac{q^2a'^2}{6\pi\epsO c^3}. \end{equation} This expression deserves some remarks. First of all, since the vector $\FLPa'$ had a certain direction, the $a'^2$ in (32.5) would be the square of the vector $\FLPa'$, that is, $\FLPa'\cdot\FLPa'$, the length of the vector, squared. Secondly, the flux (32.2) was calculated using the retarded acceleration; that is, the acceleration at the time at which the energy now passing through the sphere was radiated. We might like to say that this energy was in fact liberated at this earlier time. This is not exactly true; it is only an approximate idea. The exact time when the energy is liberated cannot be defined precisely. All we can really calculate precisely is what happens in a complete motion, like an oscillation or something, where the acceleration finally ceases. Then what we find is that the total energy flux per cycle is the average of acceleration squared, for a complete cycle. This is what should really appear in (32.5). Or, if it is a motion with an acceleration that is initially and finally zero, then the total energy that has flown out is the time integral of (32.5). 
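As a quick numerical check of this integration (a Python sketch; the charge and acceleration below are arbitrary sample values, not quantities from the text):

```python
import math

# Midpoint-rule check of the angular integral in (32.4):
# the integral of sin^3(theta) from 0 to pi should be 4/3.
N = 100_000
dtheta = math.pi / N
integral = sum(math.sin((k + 0.5) * dtheta)**3 for k in range(N)) * dtheta
assert abs(integral - 4/3) < 1e-6

# Folding that into (32.4) reproduces the total radiated power (32.5),
# P = q^2 a'^2 / (6 pi eps0 c^3).
eps0, c = 8.8541878128e-12, 2.99792458e8
q, a = 1.602176634e-19, 1.0e20   # sample charge (C) and acceleration (m/s^2)
P_integrated = (q**2 * a**2 / (8 * math.pi * eps0 * c**3)) * integral
P_closed = q**2 * a**2 / (6 * math.pi * eps0 * c**3)
assert abs(P_integrated - P_closed) / P_closed < 1e-6
```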
To illustrate the consequences of formula (32.5) when we have an oscillating system, let us see what happens if the displacement $x$ of the charge is oscillating so that the acceleration $a$ is $-\omega^2x_0e^{i\omega t}$. The average of the acceleration squared over a cycle (remember that we have to be very careful when we square things that are written in complex notation—it really is the cosine, and the average of $\cos^2\omega t$ is one-half) thus is \begin{equation} \avg{a'^2} = \tfrac{1}{2}\omega^4x_0^2.\notag \end{equation} Therefore \begin{equation} \label{Eq:I:32:6} P=\frac{q^2\omega^4x_0^2}{12\pi\epsO c^3}. \end{equation} The formulas we are now discussing are relatively advanced and more or less modern; they date from the beginning of the twentieth century, and they are very famous. Because of their historical value, it is important for us to be able to read about them in older books. In fact, the older books also used a system of units different from our present mks system. However, all these complications can be straightened out in the final formulas dealing with electrons by the following rule: The quantity $q_e^2/4\pi\epsO$, where $q_e$ is the electronic charge (in coulombs), has, historically, been written as $e^2$. It is very easy to calculate that $e$ in the mks system is numerically equal to $1.5188\times10^{-14}$, because we know that, numerically, $q_e = 1.60206\times10^{-19}$ and $1/4\pi\epsO = 8.98748\times10^9$. Therefore we shall often use the convenient abbreviation \begin{equation} \label{Eq:I:32:7} e^2=\frac{q_e^2}{4\pi\epsO}. \end{equation} If we use the above numerical value of $e$ in the older formulas and treat them as though they were written in mks units, we will get the right numerical results. For example, the older form of (32.5) is $P = \tfrac{2}{3}e^2a'^2/c^3$. Again, the potential energy of a proton and an electron at distance $r$ is $q_e^2/4\pi\epsO r$ or $e^2/r$, with $e = 1.5188\times10^{-14}$ (mks).
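The quoted value of $e$ is easy to reproduce (a sketch using the same older numerical constants the text quotes):

```python
import math

# e^2 = q_e^2 / (4 pi eps0); check that e comes out to 1.5188e-14 in mks,
# using the older constants quoted in the text.
q_e = 1.60206e-19          # electronic charge, coulombs (value as quoted)
coulomb_const = 8.98748e9  # 1/(4 pi eps0), as quoted
e = math.sqrt(q_e**2 * coulomb_const)
assert abs(e - 1.5188e-14) / 1.5188e-14 < 1e-4
```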
32–3 Radiation damping
Now the fact that an oscillator loses a certain energy would mean that if we had a charge on the end of a spring (or an electron in an atom) which has a natural frequency $\omega_0$, and we start it oscillating and let it go, it will not oscillate forever, even if it is in empty space millions of miles from anything. There is no oil, no resistance, in an ordinary sense; no “viscosity.” But nevertheless it will not oscillate, as we might once have said, “forever,” because if it is charged it is radiating energy, and therefore the oscillation will slowly die out. How slowly? What is the $Q$ of such an oscillator, caused by the electromagnetic effects, the so-called radiation resistance or radiation damping of the oscillator? The $Q$ of any oscillating system is the total energy content of the oscillator at any time divided by the energy loss per radian: \begin{equation*} Q=-\frac{W}{dW/d\phi}. \end{equation*} Or (another way to write it), since $dW/d\phi =$ $(dW/dt)/(d\phi/dt) =$ $(dW/dt)/\omega$, \begin{equation} \label{Eq:I:32:8} Q=-\frac{\omega W}{dW/dt}. \end{equation} For a given $Q$, this tells us how the energy of the oscillation dies out: $dW/dt = -(\omega/Q)W$, which has the solution $W = W_0e^{-\omega t/Q}$ if $W_0$ is the initial energy (at $t = 0$). To find the $Q$ for a radiator, we go back to (32.8) and use (32.6) for $dW/dt$. Now what do we use for the energy $W$ of the oscillator? The kinetic energy of the oscillator is $\tfrac{1}{2}mv^2$, and the mean kinetic energy is $m\omega^2x_0^2/4$. But we remember that for the total energy of an oscillator, on the average half is kinetic and half is potential energy, and so we double our result, and find for the total energy of the oscillator \begin{equation} \label{Eq:I:32:9} W=\tfrac{1}{2}m\omega^2x_0^2. \end{equation} What do we use for the frequency in our formulas? 
We use the natural frequency $\omega_0$ because, for all practical purposes, that is the frequency at which our atom is radiating, and for $m$ we use the electron mass $m_e$. Then, making the necessary divisions and cancellations, the formula comes down to \begin{equation} \label{Eq:I:32:10} \frac{1}{Q}=\frac{4\pi e^2}{3\lambda m_ec^2}. \end{equation} (In order to see it better and in a more historical form we write it using our abbreviation $q_e^2/4\pi\epsO = e^2$, and the factor $\omega_0/c$ which was left over has been written as $2\pi/\lambda$.) Since $Q$ is dimensionless, the combination $e^2/m_ec^2$ must be a property only of the electron charge and mass, an intrinsic property of the electron, and it must be a length. It has been given a name, the classical electron radius, because the early atomic models, which were invented to explain the radiation resistance on the basis of the force of one part of the electron acting on the other parts, all needed to have an electron whose dimensions were of this general order of magnitude. However, the name no longer means that we believe that the electron really has such a radius. Numerically, the magnitude of the radius is \begin{equation} \label{Eq:I:32:11} r_0=\frac{e^2}{m_ec^2}=2.82\times10^{-15}\text{ m}. \end{equation} Now let us actually calculate the $Q$ of an atom that is emitting light—let us say a sodium atom. For a sodium atom, the wavelength is roughly $6000$ angstroms, in the yellow part of the visible spectrum, and this is a typical wavelength. Thus \begin{equation} \label{Eq:I:32:12} Q=\frac{3\lambda}{4\pi r_0}\approx 5\times10^7, \end{equation} so the $Q$ of an atom is of the order $10^8$. This means that an atomic oscillator will oscillate for $10^8$ radians or about $10^7$ oscillations, before its energy falls by a factor $1/e$. 
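Both numbers are easy to verify (a Python sketch using standard values for the constants):

```python
import math

# Classical electron radius r0 = e^2 / (m_e c^2), eq. (32.11),
# and the Q of a sodium atom from eq. (32.12).
q_e = 1.60206e-19
e2 = q_e**2 * 8.98748e9        # e^2 = q_e^2 / (4 pi eps0), in J*m
m_e = 9.1094e-31               # electron mass, kg
c = 2.99792458e8
r0 = e2 / (m_e * c**2)
assert abs(r0 - 2.82e-15) / 2.82e-15 < 1e-2

lam = 6000e-10                 # 6000 angstroms, a typical sodium wavelength
Q = 3 * lam / (4 * math.pi * r0)
assert 4.9e7 < Q < 5.2e7       # "of the order 10^8": about 5 x 10^7
```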
The frequency of oscillation of light corresponding to $6000$ angstroms, $\nu = c/\lambda$, is on the order of $10^{15}$ cycles/sec, and therefore the lifetime, the time it takes for the energy of a radiating atom to die out by a factor $1/e$, is on the order of $10^{-8}$ sec. In ordinary circumstances, freely emitting atoms usually take about this long to radiate. This is valid only for atoms which are in empty space, not being disturbed in any way. If the electron is in a solid and it has to hit other atoms or other electrons, then there are additional resistances and different damping. The effective resistance term $\gamma$ in the resistance law for the oscillator can be found from the relation $1/Q = \gamma/\omega_0$, and we remember that the size of $\gamma$ determines how wide the resonance curve is (Fig. 23–2). Thus we have just computed the widths of spectral lines for freely radiating atoms! Since $\lambda = 2\pi c/\omega$, we find that \begin{align} \notag \Delta\lambda &=2\pi c\,\Delta\omega/\omega^2= 2\pi c\gamma/\omega_0^2=2\pi c/Q\omega_0\\[1ex] \label{Eq:I:32:13} &=\lambda/Q=4\pi r_0/3=1.18\times10^{-14}\text{ m}. \end{align}
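The lifetime and the linewidth quoted above check out numerically (a sketch; the $6000$-angstrom line is the typical wavelength used in the text):

```python
import math

# Lifetime and natural linewidth for a freely radiating atom, from (32.12)
# and (32.13), using r0 from (32.11).
r0 = 2.82e-15                  # classical electron radius, m
c = 2.99792458e8
lam = 6000e-10
Q = 3 * lam / (4 * math.pi * r0)

omega0 = 2 * math.pi * c / lam
lifetime = Q / omega0          # time for the energy to fall by 1/e
assert 1e-8 < lifetime < 3e-8  # "on the order of 10^-8 sec"

dlam = lam / Q                 # = 4 pi r0 / 3, independent of wavelength
assert abs(dlam - 4 * math.pi * r0 / 3) < 1e-20
assert abs(dlam - 1.18e-14) / 1.18e-14 < 1e-2
```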
32–4 Independent sources
In preparation for our second topic, the scattering of light, we must now discuss a certain feature of the phenomenon of interference that we neglected to discuss previously. This is the question of when interference does not occur. If we have two sources $S_1$ and $S_2$, with amplitudes $A_1$ and $A_2$, and we make an observation in a certain direction in which the phases of arrival of the two signals are $\phi_1$ and $\phi_2$ (a combination of the actual time of oscillation and the delayed time, depending on the position of observation), then the energy that we receive can be found by compounding the two complex number vectors $A_1$ and $A_2$, one at angle $\phi_1$ and the other at angle $\phi_2$ (as we did in Chapter 29) and we find that the resultant energy is proportional to \begin{equation} \label{Eq:I:32:14} A_R^2=A_1^2+A_2^2+2A_1A_2\cos\,(\phi_1-\phi_2). \end{equation} Now if the cross term $2A_1A_2\cos\,(\phi_1-\phi_2)$ were not there, then the total energy that would be received in a given direction would simply be the sum of the energies, $A_1^2 + A_2^2$, that would be liberated by each source separately, which is what we usually expect. That is, the combined intensity of light shining on something from two sources is the sum of the intensities of the two lights. On the other hand, if we have things set just right and we have a cross term, it is not such a sum, because there is also some interference. If there are circumstances in which this term is of no importance, then we would say the interference is apparently lost. Of course, in nature it is always there, but we may not be able to detect it. Let us consider some examples. Suppose, first, that the two sources are $7{,}000{,}000{,}000$ wavelengths apart, not an impossible arrangement. Then in a given direction it is true that there is a very definite value of these phase differences. 
But, on the other hand, if we move just a hair in one direction, a few wavelengths, which is no distance at all (our eye already has a hole in it that is so large that we are averaging the effects over a range very wide compared with one wavelength) then we change the relative phase, and the cosine changes very rapidly. If we take the average of the intensity over a little region, then the cosine, which goes plus, minus, plus, minus, as we move around, averages to zero. So if we average over regions where the phase varies very rapidly with position, we get no interference. Another example. Suppose that the two sources are two independent radio oscillators—not a single oscillator being fed by two wires, which guarantees that the phases are kept together, but two independent sources—and that they are not precisely tuned at the same frequency (it is very hard to make them at exactly the same frequency without actually wiring them together). In this case we have what we call two independent sources. Of course, since the frequencies are not exactly equal, although they started in phase, one of them begins to get a little ahead of the other, and pretty soon they are out of phase, and then it gets still further ahead, and pretty soon they are in phase again. So the phase difference between the two is gradually drifting with time, but if our observation is so crude that we cannot see that little time, if we average over a much longer time, then although the intensity swells and falls like what we call “beats” in sound, if these swellings and fallings are too rapid for our equipment to follow, then again this term averages out. In other words, in any circumstance in which the phase shift averages out, we get no interference! One finds many books which say that two distinct light sources never interfere. This is not a statement of physics, but is merely a statement of the degree of sensitivity of the technique of the experiments at the time the book was written. 
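The averaging argument can be demonstrated numerically (a sketch; the amplitudes are arbitrary sample values):

```python
import math, random

# Two sources, eq. (32.14): A_R^2 = A1^2 + A2^2 + 2 A1 A2 cos(phi1 - phi2).
# When the relative phase varies rapidly over the region (or time) we
# average over, the cross term washes out and we are left with the sum
# of the separate intensities.
random.seed(0)
A1, A2 = 1.0, 0.7               # arbitrary sample amplitudes
n = 200_000
mean_I = sum(
    A1**2 + A2**2 + 2 * A1 * A2 * math.cos(random.uniform(0, 2 * math.pi))
    for _ in range(n)
) / n
assert abs(mean_I - (A1**2 + A2**2)) < 0.02
```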
What happens in a light source is that first one atom radiates, then another atom radiates, and so forth, and we have just seen that atoms radiate a train of waves only for about $10^{-8}$ sec; after $10^{-8}$ sec, some atom has probably taken over, then another atom takes over, and so on. So the phases can really only stay the same for about $10^{-8}$ sec. Therefore, if we average for very much more than $10^{-8}$ sec, we do not see an interference from two different sources, because they cannot hold their phases steady for longer than $10^{-8}$ sec. With photocells, very high-speed detection is possible, and one can show that there is an interference which varies with time, up and down, in about $10^{-8}$ sec. But most detection equipment, of course, does not look at such fine time intervals, and thus sees no interference. Certainly with the eye, which has a tenth-of-a-second averaging time, there is no chance whatever of seeing an interference between two different ordinary sources. Recently it has become possible to make light sources which get around this effect by making all the atoms emit together in time. The device which does this is a very complicated thing, and has to be understood in a quantum-mechanical way. It is called a laser, and it is possible to produce from a laser a source in which the time during which the phase is kept constant, is very much longer than $10^{-8}$ sec. It can be of the order of a hundredth, a tenth, or even one second, and so, with ordinary photocells, one can pick up the frequency between two different lasers. One can easily detect the pulsing of the beats between two laser sources. Soon, no doubt, someone will be able to demonstrate two sources shining on a wall, in which the beats are so slow that one can see the wall get bright and dark! Another case in which the interference averages out is that in which, instead of having only two sources, we have many. 
In this case, we would write the expression for $A_R^2$ as the sum of a whole lot of amplitudes, complex numbers, squared, and we would get the square of each one, all added together, plus cross terms between every pair, and if the circumstances are such that the latter average out, then there will be no effects of interference. It may be that the various sources are located in such random positions that, although the phase difference between $A_2$ and $A_3$ is also definite, it is very different from that between $A_1$ and $A_2$, etc. So we would get a whole lot of cosines, many plus, many minus, all averaging out. So it is that in many circumstances we do not see the effects of interference, but see only a collective, total intensity equal to the sum of all the intensities.
32–5 Scattering of light
The above leads us to an effect which occurs in air as a consequence of the irregular positions of the atoms. When we were discussing the index of refraction, we saw that an incoming beam of light will make the atoms radiate again. The electric field of the incoming beam drives the electrons up and down, and they radiate because of their acceleration. This scattered radiation combines to give a beam in the same direction as the incoming beam, but of somewhat different phase, and this is the origin of the index of refraction. But what can we say about the amount of re-radiated light in some other direction? Ordinarily, if the atoms are very beautifully located in a nice pattern, it is easy to show that we get nothing in other directions, because we are adding a lot of vectors with their phases always changing, and the result comes to zero. But if the objects are randomly located, then the total intensity in any direction is the sum of the intensities that are scattered by each atom, as we have just discussed. Furthermore, the atoms in a gas are in actual motion, so that although the relative phase of two atoms is a definite amount now, later the phase would be quite different, and therefore each cosine term will average out. Therefore, to find out how much light is scattered in a given direction by a gas, we merely study the effects of one atom and multiply the intensity it radiates by the number of atoms. Earlier, we remarked that the phenomenon of scattering of light of this nature is the origin of the blue of the sky. The sunlight goes through the air, and when we look to one side of the sun—say at $90^\circ$ to the beam—we see blue light; what we now have to calculate is how much light we see and why it is blue. If the incident beam has the electric field $\FLPE = \hat{\FLPE}_0e^{i\omega t}$ at the point where the atom is located, we know that an electron in the atom will vibrate up and down in response to this $\FLPE$ (Fig. 32–2). From Eq. 
(23.8), the response will be \begin{equation} \label{Eq:I:32:15} \hat{\FLPx}=\frac{q_e\hat{\FLPE}_0}{m(\omega_0^2-\omega^2+i\gamma\omega)}. \end{equation} We could include the damping and the possibility that the atom acts like several oscillators of different frequency and sum over the various frequencies, but for simplicity let us just take one oscillator and neglect the damping. Then the response to the external electric field, which we have already used in the calculation of the index of refraction, is simply \begin{equation} \label{Eq:I:32:16} \hat{\FLPx}=\frac{q_e\hat{\FLPE}_0}{m(\omega_0^2-\omega^2)}. \end{equation} We could now easily calculate the intensity of light that is emitted in various directions, using formula (32.2) and the acceleration corresponding to the above $\hat{\FLPx}$. Rather than do this, however, we shall simply calculate the total amount of light scattered in all directions, just to save time. The total amount of light energy per second, scattered in all directions by the single atom, is of course given by Eq. (32.6). So, putting together the various pieces and regrouping them, we get \begin{align} P&=[(q_e^2\omega^4/12\pi\epsO c^3)q_e^2E_0^2/m_e^2 (\omega^2\!-\omega_0^2)^2]\notag\\[1ex] &=(\tfrac{1}{2}\epsO cE_0^2)(8\pi/3)(q_e^4/16\pi^2\epsO^2m_e^2c^4) [\omega^4\!/(\omega^2\!-\omega_0^2)^2]\notag\\[1ex] \label{Eq:I:32:17} &=(\tfrac{1}{2}\epsO cE_0^2)(8\pi r_0^2/3) [\omega^4\!/(\omega^2\!-\omega_0^2)^2] \end{align} for the total scattered power, radiated in all directions. We have written the result in the above form because it is then easy to remember: First, the total energy that is scattered is proportional to the square of the incident field. What does that mean? Obviously, the square of the incident field is proportional to the energy which is coming in per second. 
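The regrouping in (32.17) can be checked numerically (a sketch; the field amplitude and frequencies are arbitrary sample values):

```python
import math

# Check that the first and last forms of (32.17) agree, using
# r0 = q_e^2 / (4 pi eps0 m_e c^2).
eps0, c = 8.8541878128e-12, 2.99792458e8
q_e, m_e = 1.602176634e-19, 9.1093837015e-31
E0 = 100.0                     # sample field amplitude, V/m
w, w0 = 3.0e15, 1.5e16         # sample driving and natural frequencies

r0 = q_e**2 / (4 * math.pi * eps0 * m_e * c**2)
form1 = ((q_e**2 * w**4 / (12 * math.pi * eps0 * c**3))
         * q_e**2 * E0**2 / (m_e**2 * (w**2 - w0**2)**2))
form2 = ((0.5 * eps0 * c * E0**2) * (8 * math.pi * r0**2 / 3)
         * w**4 / (w**2 - w0**2)**2)
assert abs(form1 - form2) / form1 < 1e-12
```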
In fact, the energy incident per square meter per second is $\epsO c$ times the average $\avg{E^2}$ of the square of the electric field, and if $E_0$ is the maximum value of $E$, then $\avg{E^2} = \tfrac{1}{2}E_0^2$. In other words, the total energy scattered is proportional to the energy per square meter that comes in; the brighter the sunlight that is shining in the sky, the brighter the sky is going to look. Next, what fraction of the incoming light is scattered? Let us imagine a “target” with a certain area, let us say $\sigma$, in the beam (not a real, material target, because this would diffract light, and so on; we mean an imaginary area drawn in space). The total amount of energy that would pass through this surface $\sigma$ in a given circumstance is proportional both to the incoming intensity and to $\sigma$, and the total power would be \begin{equation} \label{Eq:I:32:18} P=(\tfrac{1}{2}\epsO cE_0^2)\sigma. \end{equation} Now we invent an idea: we say that the atom scatters a total amount of intensity which is the amount which would fall on a certain geometrical area, and we give the answer by giving that area. That answer, then, is independent of the incident intensity; it gives the ratio of the energy scattered to the energy incident per square meter. In other words, the ratio \begin{equation*} \frac{\text{total energy scattered per second}} {\text{energy incident per square meter per second}} \text{ is an area}. \end{equation*} The significance of this area is that, if all the energy that impinged on that area were to be spewed in all directions, then that is the amount of energy that would be scattered by the atom. This area is called a cross section for scattering; the idea of cross section is used constantly, whenever some phenomenon occurs in proportion to the intensity of a beam. In such cases one always describes the amount of the phenomenon by saying what the effective area would have to be to pick up that much of the beam. 
It does not mean in any way that this oscillator actually has such an area. If there were nothing present but a free electron shaking up and down there would be no area directly associated with it, physically. It is merely a way of expressing the answer to a certain kind of problem; it tells us what area the incident beam would have to hit in order to account for that much energy coming off. Thus, for our case, \begin{equation} \label{Eq:I:32:19} \sigma_s=\frac{8\pi r_0^2}{3}\cdot \frac{\omega^4}{(\omega^2-\omega_0^2)^2} \end{equation} (the subscript $s$ is for “scattering”). Let us look at some examples. First, if we go to a very low natural frequency $\omega_0$, or to completely unbound electrons, for which $\omega_0=0$, then the frequency $\omega$ cancels out and the cross section is a constant. This low-frequency limit, or the free electron cross section, is known as the Thomson scattering cross section. It is an area whose dimensions are approximately $10^{-15}$ meter, more or less, on a side, i.e., $10^{-30}$ square meter, which is rather small! On the other hand, if we take the case of light in the air, we remember that for air the natural frequencies of the oscillators are higher than the frequency of the light that we use. This means that, to a first approximation, we can disregard $\omega^2$ in the denominator, and we find that the scattering is proportional to the fourth power of the frequency. That is to say, light which is of higher frequency by, say, a factor of two, is sixteen times more intensely scattered, which is a quite sizable difference. This means that blue light, which has about twice the frequency of the reddish end of the spectrum, is scattered to a far greater extent than red light. Thus when we look at the sky it looks that glorious blue that we see all the time! There are several points to be made about the above results. One interesting question is, why do we ever see the clouds? Where do the clouds come from? 
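These two limits of (32.19) can be checked numerically (a sketch; the sample frequencies are illustrative):

```python
import math

# Two limits of the scattering cross section (32.19).
r0 = 2.82e-15
def sigma_s(w, w0):
    return (8 * math.pi * r0**2 / 3) * w**4 / (w**2 - w0**2)**2

# Free electrons (w0 = 0): the Thomson cross section, independent of w.
sigma_T = 8 * math.pi * r0**2 / 3
assert abs(sigma_s(1e15, 0.0) - sigma_T) < 1e-40
assert 1e-30 < sigma_T < 1e-28          # roughly (10^-15 m)^2 in size

# Light well below the natural frequency (w << w0): scattering ~ w^4,
# so doubling the frequency scatters about sixteen times as much.
w0 = 1.0e16
ratio = sigma_s(2e14, w0) / sigma_s(1e14, w0)
assert abs(ratio - 16) < 0.5
```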
Everybody knows it is the condensation of water vapor. But, of course, the water vapor is already in the atmosphere before it condenses, so why don’t we see it then? After it condenses it is perfectly obvious. It wasn’t there, now it is there. So the mystery of where the clouds come from is not really such a childish mystery as “Where does the water come from, Daddy?,” but has to be explained. We have just explained that every atom scatters light, and of course the water vapor will scatter light, too. The mystery is why, when the water is condensed into clouds, does it scatter such a tremendously greater amount of light? Consider what would happen if, instead of a single atom, we had an agglomerate of atoms, say two, very close together compared with the wavelength of the light. Remember, atoms are only an angstrom or so across, while the wavelength of light is some $5000$ angstroms, so when they form a clump, a few atoms together, they can be very close together compared with the wavelength of light. Then when the electric field acts, both of the atoms will move together. The electric field that is scattered will then be the sum of the two electric fields in phase, i.e., double the amplitude that there was with a single atom, and the energy which is scattered is therefore four times what it is with a single atom, not twice! So lumps of atoms radiate or scatter more energy than they do as single atoms. Our argument that the phases are independent is based on the assumption that there is a real and large difference in phase between any two atoms, which is true only if they are several wavelengths apart and randomly spaced, or moving. But if they are right next to each other, they necessarily scatter in phase, and they have a coherent interference which produces an increase in the scattering. 
If we have $N$ atoms in a lump, which is a tiny droplet of water, then each one will be driven by the electric field in about the same way as before (the effect of one atom on the other is not important; it is just to get the idea anyway) and the amplitude of scattering from each one is the same, so the total field which is scattered is $N$-fold increased. The intensity of the light which is scattered is then the square, or $N^2$-fold, increased. We would have expected, if the atoms were spread out in space, only $N$ times as much as $1$, whereas we get $N^2$ times as much as $1$! That is to say, the scattering of water in lumps of $N$ molecules each is $N$ times more intense than the scattering of the single atoms. So as the water agglomerates the scattering increases. Does it increase ad infinitum? No! When does this analysis begin to fail? How many atoms can we put together before we cannot drive this argument any further? Answer: If the water drop gets so big that from one end to the other is a wavelength or so, then the atoms are no longer all in phase because they are too far apart. So as we keep increasing the size of the droplets we get more and more scattering, until such a time that a drop gets about the size of a wavelength, and then the scattering does not increase anywhere nearly as rapidly as the drop gets bigger. Furthermore, the blue disappears, because for long wavelengths the drops can be bigger, before this limit is reached, than they can be for short wavelengths. Although the short waves scatter more per atom than the long waves, there is a bigger enhancement for the red end of the spectrum than for the blue end when all the drops are bigger than the wavelength, so the color is shifted from the blue toward the red. Now we can make an experiment that demonstrates this. We can make particles that are very small at first, and then gradually grow in size. 
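A small Monte Carlo sketch of this size effect (droplet sizes and counts are arbitrary):

```python
import cmath, math, random

# N molecules in a droplet, each driven the same way. If the droplet is
# small compared with the wavelength, the scattered fields add in phase
# and the intensity goes as N^2; if it is many wavelengths across, the
# phases are effectively random and the intensity is only ~N.
random.seed(3)
lam = 5000e-10                  # ~5000 angstroms
k = 2 * math.pi / lam
N, trials = 50, 4000

def mean_intensity(droplet_size):
    total = 0.0
    for _ in range(trials):
        field = sum(cmath.exp(1j * k * random.uniform(0, droplet_size))
                    for _ in range(N))
        total += abs(field)**2
    return total / trials

small = mean_intensity(lam / 100)   # much smaller than a wavelength
large = mean_intensity(100 * lam)   # much bigger than a wavelength
assert small > 0.9 * N**2           # coherent: ~N^2
assert large < 2 * N                # incoherent: ~N, not N^2
```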
We use a solution of sodium thiosulfate (hypo) with sulfuric acid, which precipitates very fine grains of sulfur. As the sulfur precipitates, the grains first start very small, and the scattering is a little bluish. As it precipitates more it gets more intense, and then it will get whitish as the particles get bigger. In addition, the light which goes straight through will have the blue taken out. That is why the sunset is red, of course, because the light that comes through a lot of air, to the eye has had a lot of blue light scattered out, so it is yellow-red. Finally, there is one other important feature which really belongs in the next chapter, on polarization, but it is so interesting that we point it out now. This is that the electric field of the scattered light tends to vibrate in a particular direction. The electric field in the incoming light is oscillating in some way, and the driven oscillator goes in this same direction, and if we are situated about at right angles to the beam, we will see polarized light, that is to say, light in which the electric field is going only one way. In general, the atoms can vibrate in any direction at right angles to the beam, but if they are driven directly toward or away from us, we do not see it. So if the incoming light has an electric field which changes and oscillates in any direction, which we call unpolarized light, then the light which is coming out at $90^\circ$ to the beam vibrates in only one direction! (See Fig. 32–3.) There is a substance called polaroid which has the property that when light goes through it, only the piece of the electric field which is along one particular axis can get through. We can use this to test for polarization, and indeed we find the light scattered by the hypo solution to be strongly polarized.
33 Polarization

33–1 The electric vector of light
In this chapter we shall consider those phenomena which depend on the fact that the electric field that describes the light is a vector. In previous chapters we have not been concerned with the direction of oscillation of the electric field, except to note that the electric vector lies in a plane perpendicular to the direction of propagation. The particular direction in this plane has not concerned us. We now consider those phenomena whose central feature is the particular direction of oscillation of the electric field. In ideally monochromatic light, the electric field must oscillate at a definite frequency, but since the $x$-component and the $y$-component can oscillate independently at a definite frequency, we must first consider the resultant effect produced by superposing two independent oscillations at right angles to each other. What kind of electric field is made up of an $x$-component and a $y$-component which oscillate at the same frequency? If one adds to an $x$-vibration a certain amount of $y$-vibration at the same phase, the result is a vibration in a new direction in the $xy$-plane. Figure 33–1 illustrates the superposition of different amplitudes for the $x$-vibration and the $y$-vibration. But the resultants shown in Fig. 33–1 are not the only possibilities; in all of these cases we have assumed that the $x$-vibration and the $y$-vibration are in phase, but it does not have to be that way. It could be that the $x$-vibration and the $y$-vibration are out of phase. When the $x$-vibration and the $y$-vibration are not in phase, the electric field vector moves around in an ellipse, and we can illustrate this in a familiar way. If we hang a ball from a support by a long string, so that it can swing freely in a horizontal plane, it will execute sinusoidal oscillations. 
If we imagine horizontal $x$- and $y$-coordinates with their origin at the rest position of the ball, the ball can swing in either the $x$- or $y$-direction with the same pendulum frequency. By selecting the proper initial displacement and initial velocity, we can set the ball in oscillation along either the $x$-axis or the $y$-axis, or along any straight line in the $xy$-plane. These motions of the ball are analogous to the oscillations of the electric field vector illustrated in Fig. 33–1. In each instance, since the $x$-vibrations and the $y$-vibrations reach their maxima and minima at the same time, the $x$- and $y$-oscillations are in phase. But we know that the most general motion of the ball is motion in an ellipse, which corresponds to oscillations in which the $x$- and $y$-directions are not in the same phase. The superposition of $x$- and $y$-vibrations which are not in phase is illustrated in Fig. 33–2 for a variety of angles between the phase of the $x$-vibration and that of the $y$-vibration. The general result is that the electric vector moves around an ellipse. The motion in a straight line is a particular case corresponding to a phase difference of zero (or an integral multiple of $\pi$); motion in a circle corresponds to equal amplitudes with a phase difference of $90^\circ$ (or any odd integral multiple of $\pi/2$). In Fig. 33–2 we have labeled the electric field vectors in the $x$- and $y$-directions with complex numbers, which are a convenient representation in which to express the phase difference. Do not confuse the real and imaginary components of the complex electric vector in this notation with the $x$- and $y$-coordinates of the field. The $x$- and $y$-coordinates plotted in Fig. 33–1 and Fig. 33–2 are actual electric fields that we can measure. The real and imaginary components of a complex electric field vector are only a mathematical convenience and have no physical significance. Now for some terminology. 
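The special cases can be verified directly (a Python sketch tracing the tip of the field vector; the amplitudes are arbitrary):

```python
import math

# Superpose x- and y-vibrations of the same frequency with relative
# phase delta: delta = 0 gives a straight line through the origin;
# equal amplitudes with delta = pi/2 give a circle; in general the
# tip of the electric vector traces an ellipse.
A, B = 1.0, 1.0
steps = 1000

def trace(delta):
    return [(A * math.cos(2 * math.pi * t / steps),
             B * math.cos(2 * math.pi * t / steps + delta))
            for t in range(steps)]

# delta = 0: y/x is constant, a straight line
line = trace(0.0)
assert all(abs(y - (B / A) * x) < 1e-9 for x, y in line)

# delta = pi/2 with equal amplitudes: x^2 + y^2 is constant, a circle
circle = trace(math.pi / 2)
assert all(abs(x * x + y * y - A * A) < 1e-9 for x, y in circle)
```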
Light is linearly polarized (sometimes called plane polarized) when the electric field oscillates on a straight line; Fig. 33–1 illustrates linear polarization. When the end of the electric field vector travels in an ellipse, the light is elliptically polarized. When the end of the electric field vector travels around a circle, we have circular polarization. If the end of the electric vector, when we look at it as the light comes straight toward us, goes around in a counterclockwise direction, we call it right-hand circular polarization. Figure 33–2(g) illustrates right-hand circular polarization, and Fig. 33–2(c) shows left-hand circular polarization. In both cases the light is coming out of the paper. Our convention for labeling left-hand and right-hand circular polarization is consistent with that which is used today for all the other particles in physics which exhibit polarization (e.g., electrons). However, in some books on optics the opposite conventions are used, so one must be careful. We have considered linearly, circularly, and elliptically polarized light, which covers everything except for the case of unpolarized light. Now how can the light be unpolarized when we know that it must vibrate in one or another of these ellipses? If the light is not absolutely monochromatic, or if the $x$- and $y$-phases are not kept perfectly together, so that the electric vector first vibrates in one direction, then in another, the polarization is constantly changing. Remember that one atom emits during $10^{-8}$ sec, and if one atom emits a certain polarization, and then another atom emits light with a different polarization, the polarizations will change every $10^{-8}$ sec. If the polarization changes more rapidly than we can detect it, then we call the light unpolarized, because all the effects of the polarization average out. None of the interference effects of polarization would show up with unpolarized light. 
But as we see from the definition, light is unpolarized only if we are unable to find out whether the light is polarized or not.
33–2 Polarization of scattered light
A first example of a polarization effect, one that we have already discussed, is the scattering of light. Consider a beam of light, for example from the sun, shining on the air. The electric field will produce oscillations of charges in the air, and motion of these charges will radiate light with its maximum intensity in a plane normal to the direction of vibration of the charges. The beam from the sun is unpolarized, so the direction of polarization changes constantly, and the direction of vibration of the charges in the air changes constantly. If we consider light scattered at $90^\circ$, the vibration of the charged particles radiates to the observer only when the vibration is perpendicular to the observer’s line of sight, and then light will be polarized along the direction of vibration. So scattering is an example of one means of producing polarization.
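The degree of polarization at other scattering angles follows from the same dipole argument. A standard result for Rayleigh scattering of an unpolarized beam (quoted here without derivation, since the text only treats the $90^\circ$ case) is $P(\theta) = \sin^2\theta/(1+\cos^2\theta)$:

```python
import math

def rayleigh_polarization(theta):
    """Degree of polarization of Rayleigh-scattered light from an
    unpolarized incident beam, at scattering angle theta.
    Standard result: P = sin^2(theta) / (1 + cos^2(theta))."""
    return math.sin(theta)**2 / (1 + math.cos(theta)**2)

# At 90 degrees the scattered light is completely polarized,
# and in the forward direction it is not polarized at all.
assert abs(rayleigh_polarization(math.pi / 2) - 1.0) < 1e-12
assert rayleigh_polarization(0.0) == 0.0
```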
33–3 Birefringence
Another interesting effect of polarization is the fact that there are substances for which the index of refraction is different for light linearly polarized in one direction and linearly polarized in another. Suppose that we had some material which consisted of long, nonspherical molecules, longer than they are wide, and suppose that these molecules were arranged in the substance with their long axes parallel. Then what happens when the oscillating electric field passes through this substance? Suppose that because of the structure of the molecule, the electrons in the substance respond more easily to oscillations in the direction parallel to the axes of the molecules than they would respond if the electric field tries to push them at right angles to the molecular axis. In this way we expect a different response for polarization in one direction than for polarization at right angles to that direction. Let us call the direction of the axes of the molecules the optic axis. When the polarization is in the direction of the optic axis the index of refraction is different than it would be if the direction of polarization were at right angles to it. Such a substance is called birefringent. It has two refrangibilities, i.e., two indexes of refraction, depending on the direction of the polarization inside the substance. What kind of a substance can be birefringent? In a birefringent substance there must be a certain amount of lining up, for one reason or another, of unsymmetrical molecules. Certainly a cubic crystal, which has the symmetry of a cube, cannot be birefringent. But long needlelike crystals undoubtedly contain molecules that are asymmetric, and one observes this effect very easily. Let us see what effects we would expect if we were to shine polarized light through a plate of a birefringent substance. 
If the polarization is parallel to the optic axis, the light will go through with one velocity; if the polarization is perpendicular to the axis, the light is transmitted with a different velocity. An interesting situation arises when, say, light is linearly polarized at $45^\circ$ to the optic axis. Now the $45^\circ$ polarization, we have already noticed, can be represented as a superposition of the $x$- and the $y$-polarizations of equal amplitude and in phase, as shown in Fig. 33–2(a). Since the $x$- and $y$-polarizations travel with different velocities, their phases change at a different rate as the light passes through the substance. So, although at the start the $x$- and $y$-vibrations are in phase, inside the material the phase difference between $x$- and $y$-vibrations is proportional to the depth in the substance. As the light proceeds through the material the polarization changes as shown in the series of diagrams in Fig. 33–2. If the thickness of the plate is just right to introduce a $90^\circ$ phase shift between the $x$- and $y$-polarizations, as in Fig. 33–2(c), the light will come out circularly polarized. Such a thickness is called a quarter-wave plate, because it introduces a quarter-cycle phase difference between the $x$- and the $y$-polarizations. If linearly polarized light is sent through two quarter-wave plates, it will come out plane-polarized again, but at right angles to the original direction, as we can see from Fig. 33–2(e). One can easily illustrate this phenomenon with a piece of cellophane. Cellophane is made of long, fibrous molecules, and is not isotropic, since the fibers lie preferentially in a certain direction. To demonstrate birefringence we need a beam of linearly polarized light, and we can obtain this conveniently by passing unpolarized light through a sheet of polaroid. 
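The action of the quarter-wave and half-wave plates described above can be sketched with complex amplitudes for the $x$- and $y$-components (the sign convention for the relative phase is an assumption; only the relative retardation matters):

```python
import cmath, math

def waveplate(jones, delta):
    """Apply a wave plate whose fast axis is x: retard the y-component's
    phase by delta relative to the x-component (sketch; conventions vary)."""
    ex, ey = jones
    return (ex, ey * cmath.exp(-1j * delta))

# Light linearly polarized at 45 degrees: equal x and y amplitudes, in phase.
e45 = (1 / math.sqrt(2), 1 / math.sqrt(2))

# Quarter-wave plate (90-degree relative phase) -> circular polarization:
ex, ey = waveplate(e45, math.pi / 2)
assert abs(abs(ex) - abs(ey)) < 1e-12            # equal amplitudes
assert abs(cmath.phase(ey / ex) + math.pi / 2) < 1e-12  # 90 degrees apart

# Two quarter-wave plates act as one half-wave plate: the light comes out
# linearly polarized again, but at right angles to the original direction.
ex, ey = waveplate(waveplate(e45, math.pi / 2), math.pi / 2)
assert abs(ex + ey) < 1e-12                       # (1, -1)/sqrt(2): -45 degrees
```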
Polaroid, which we will discuss later in more detail, has the useful property that it transmits light that is linearly polarized parallel to the axis of the polaroid with very little absorption, but light polarized in a direction perpendicular to the axis of the polaroid is strongly absorbed. When we pass unpolarized light through a sheet of polaroid, only that part of the unpolarized beam which is vibrating parallel to the axis of the polaroid gets through, so that the transmitted beam is linearly polarized. This same property of polaroid is also useful in detecting the direction of polarization of a linearly polarized beam, or in determining whether a beam is linearly polarized or not. One simply passes the beam of light through the polaroid sheet and rotates the polaroid in the plane normal to the beam. If the beam is linearly polarized, it will not be transmitted through the sheet when the axis of the polaroid is normal to the direction of polarization. When the polaroid sheet is rotated through $90^\circ$ from that orientation, so that its axis is parallel to the direction of polarization, the transmitted beam is only slightly attenuated. If the transmitted intensity is independent of the orientation of the polaroid, the beam is not linearly polarized. To demonstrate the birefringence of cellophane, we use two sheets of polaroid, as shown in Fig. 33–3. The first gives us a linearly polarized beam which we pass through the cellophane and then through the second polaroid sheet, which serves to detect any effect the cellophane may have had on the polarized light passing through it. If we first set the axes of the two polaroid sheets perpendicular to each other and remove the cellophane, no light will be transmitted through the second polaroid. If we now introduce the cellophane between the two polaroid sheets, and rotate the sheet about the beam axis, we observe that in general the cellophane makes it possible for some light to pass through the second polaroid.
However, there are two orientations of the cellophane sheet, at right angles to each other, which permit no light to pass through the second polaroid. These orientations in which linearly polarized light is transmitted through the cellophane with no effect on the direction of polarization must be the directions parallel and perpendicular to the optic axis of the cellophane sheet. We suppose that the light passes through the cellophane with two different velocities in these two different orientations, but it is transmitted without changing the direction of polarization. When the cellophane is turned halfway between these two orientations, as shown in Fig. 33–3, we see that the light transmitted through the second polaroid is bright. It just happens that ordinary cellophane used in commercial packaging is very close to a half-wave thickness for most of the colors in white light. Such a sheet will turn the axis of linearly polarized light through $90^\circ$ if the incident linearly polarized beam makes an angle of $45^\circ$ with the optic axis, so that the beam emerging from the cellophane is then vibrating in the right direction to pass through the second polaroid sheet. If we use white light in our demonstration, the cellophane sheet will be of the proper half-wave thickness only for a particular component of the white light, and the transmitted beam will have the color of this component. The color transmitted depends on the thickness of the cellophane sheet, and we can vary the effective thickness of the cellophane by tilting it so that the light passes through the cellophane at an angle, consequently through a longer path in the cellophane. As the sheet is tilted the transmitted color changes. With cellophane of different thicknesses one can construct filters that will transmit different colors. 
These filters have the interesting property that they transmit one color when the two polaroid sheets have their axes perpendicular, and the complementary color when the axes of the two polaroid sheets are parallel. Another interesting application of aligned molecules is quite practical. Certain plastics are composed of very long and complicated molecules all twisted together. When the plastic is solidified very carefully, the molecules are all twisted in a mass, so that there are as many aligned in one direction as another, and so the plastic is not particularly birefringent. Usually there are strains and stresses introduced when the material is solidified, so the material is not perfectly homogeneous. However, if we apply tension to a piece of this plastic material, it is as if we were pulling a whole tangle of strings, and there will be more strings preferentially aligned parallel to the tension than in any other direction. So when a stress is applied to certain plastics, they become birefringent, and one can see the effects of the birefringence by passing polarized light through the plastic. If we examine the transmitted light through a polaroid sheet, patterns of light and dark fringes will be observed (in color, if white light is used). The patterns move as stress is applied to the sample, and by counting the fringes and seeing where most of them are, one can determine what the stress is. Engineers use this phenomenon as a means of finding the stresses in odd-shaped pieces that are difficult to calculate. Another interesting example of a way of obtaining birefringence is by means of a liquid substance. Consider a liquid composed of long asymmetric molecules which carry a plus or minus average charge near the ends of the molecule, so that the molecule is an electric dipole. In the collisions in the liquid the molecules will ordinarily be randomly oriented, with as many molecules pointed in one direction as in another. 
If we apply an electric field the molecules will tend to line up, and the moment they line up the liquid becomes birefringent. With two polaroid sheets and a transparent cell containing such a polar liquid, we can devise an arrangement with the property that light is transmitted only when the electric field is applied. So we have an electrical switch for light, which is called a Kerr cell. This effect, that an electric field can produce birefringence in certain liquids, is called the Kerr effect.
33–4 Polarizers
So far we have considered substances in which the refractive index is different for light polarized in different directions. Of very practical value are those crystals and other substances in which not only the index, but also the coefficient of absorption, is different for light polarized in different directions. By the same arguments which supported the idea of birefringence, it is understandable that absorption can vary with the direction in which the charges are forced to vibrate in an anisotropic substance. Tourmaline is an old, famous example and polaroid is another. Polaroid consists of a thin layer of small crystals of herapathite (a salt of iodine and quinine), all aligned with their axes parallel. These crystals absorb light when the oscillations are in one direction, and they do not absorb appreciably when the oscillations are in the other direction. Suppose that we send light into a polaroid sheet polarized linearly at an angle $\theta$ to the passing direction. What intensity will come through? This incident light can be resolved into a component perpendicular to the pass direction which is proportional to $\sin\theta$, and a component along the pass direction which is proportional to $\cos\theta$. The amplitude which comes out of the polaroid is only the cosine $\theta$ part; the $\sin\theta$ component is absorbed. The amplitude which passes through the polaroid is smaller than the amplitude which entered, by a factor $\cos\theta$. The energy which passes through the polaroid, i.e., the intensity of the light, is proportional to the square of $\cos\theta$. $\operatorname{Cos}^2\theta$, then, is the intensity transmitted when the light enters polarized at an angle $\theta$ to the pass direction. The absorbed intensity, of course, is $\sin^2\theta$. An interesting paradox is presented by the following situation. We know that it is not possible to send a beam of light through two polaroid sheets with their axes crossed at right angles. 
But if we place a third polaroid sheet between the first two, with its pass axis at $45^\circ$ to the crossed axes, some light is transmitted. We know that polaroid absorbs light, it does not create anything. Nevertheless, the addition of a third polaroid at $45^\circ$ allows more light to get through. The analysis of this phenomenon is left as an exercise for the student. One of the most interesting examples of polarization is not in complicated crystals or difficult substances, but in one of the simplest and most familiar of situations—the reflection of light from a surface. Believe it or not, when light is reflected from a glass surface it may be polarized, and the physical explanation of this is very simple. It was discovered empirically by Brewster that light reflected from a surface is completely polarized if the reflected beam and the beam refracted into the material form a right angle. The situation is illustrated in Fig. 33–4. If the incident beam is polarized in the plane of incidence, there will be no reflection at all. Only if the incident beam is polarized normal to the plane of incidence will it be reflected. The reason is very easy to understand. In the reflecting material the light is polarized transversely, and we know that it is the motion of the charges in the material which generates the emergent beam, which we call the reflected beam. The source of this so-called reflected light is not simply that the incident beam is reflected; our deeper understanding of this phenomenon tells us that the incident beam drives an oscillation of the charges in the material, which in turn generates the reflected beam. From Fig. 33–4 it is clear that only oscillations normal to the paper can radiate in the direction of reflection, and consequently the reflected beam will be polarized normal to the plane of incidence. If the incident beam is polarized in the plane of incidence, there will be no reflected light. 
This phenomenon is readily demonstrated by reflecting a linearly polarized beam from a flat piece of glass. If the glass is turned to present different angles of incidence to the polarized beam, sharp attenuation of the reflected intensity is observed when the angle of incidence passes through Brewster’s angle. This attenuation is observed only if the plane of polarization lies in the plane of incidence. If the plane of polarization is normal to the plane of incidence, the usual reflected intensity is observed at all angles.
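The $\cos^2\theta$ law, the three-polaroid paradox, and Brewster's condition can all be checked with a few lines of arithmetic (a sketch, not from the text; ideal polaroids and $n = 1.5$ glass are assumed):

```python
import math

def malus(intensity, theta):
    """Intensity passed by an ideal polaroid whose axis makes angle
    theta with the incident linear polarization (Malus's law)."""
    return intensity * math.cos(theta)**2

# Two crossed polaroids: essentially nothing is transmitted.
assert malus(1.0, math.pi / 2) < 1e-30

# Insert a third polaroid at 45 degrees between the crossed pair: the
# polarization is re-projected twice, and a quarter of the light gets through.
out = malus(malus(1.0, math.pi / 4), math.pi / 4)
assert abs(out - 0.25) < 1e-12

# Brewster's condition tan(i) = n makes the reflected and refracted
# beams perpendicular: i + r = 90 degrees.
n = 1.5
i = math.atan(n)
r = math.asin(math.sin(i) / n)   # Snell's law
assert abs(i + r - math.pi / 2) < 1e-12
```

So the paradox is no paradox at all: each polaroid merely projects the amplitude onto its own axis, and two $45^\circ$ projections leave $(\cos 45^\circ)^4 = 1/4$ of the intensity.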
33–5 Optical activity
Another most remarkable effect of polarization is observed in materials composed of molecules which do not have reflection symmetry: molecules shaped something like a corkscrew, or like a gloved hand, or any shape which, if viewed through a mirror, would be reversed in the same way that a left-hand glove reflects as a right-hand glove. Suppose all of the molecules in the substance are the same, i.e., none is a mirror image of any other. Such a substance may show an interesting effect called optical activity, whereby as linearly polarized light passes through the substance, the direction of polarization rotates about the beam axis. To understand the phenomenon of optical activity requires some calculation, but we can see qualitatively how the effect might come about, without actually carrying out the calculations. Consider an asymmetric molecule in the shape of a spiral, as shown in Fig. 33–5. Molecules need not actually be shaped like a corkscrew in order to exhibit optical activity, but this is a simple shape which we shall take as a typical example of those that do not have reflection symmetry. When a light beam linearly polarized along the $y$-direction falls on this molecule, the electric field will drive charges up and down the helix, thereby generating a current in the $y$-direction and radiating an electric field $E_y$ polarized in the $y$-direction. However, if the electrons are constrained to move along the spiral, they must also move in the $x$-direction as they are driven up and down. When a current is flowing up the spiral, it is also flowing into the paper at $z = z_1$ and out of the paper at $z = z_1 + A$, if $A$ is the diameter of our molecular spiral. One might suppose that the current in the $x$-direction would produce no net radiation, since the currents are in opposite directions on opposite sides of the spiral. 
However, if we consider the $x$-components of the electric field arriving at $z = z_2$, we see that the field radiated by the current at $z = z_1 + A$ and the field radiated from $z = z_1$ arrive at $z_2$ separated in time by the amount $A/c$, and thus separated in phase by $\pi + \omega A/c$. Since the phase difference is not exactly $\pi$, the two fields do not cancel exactly, and we are left with a small $x$-component in the electric field generated by the motion of the electrons in the molecule, whereas the driving electric field had only a $y$-component. This small $x$-component, added to the large $y$-component, produces a resultant field that is tilted slightly with respect to the $y$-axis, the original direction of polarization. As the light moves through the material, the direction of polarization rotates about the beam axis. By drawing a few examples and considering the currents that will be set in motion by an incident electric field, one can convince himself that the existence of optical activity and the sign of the rotation are independent of the orientation of the molecules. Corn syrup is a common substance which possesses optical activity. The phenomenon is easily demonstrated with a polaroid sheet to produce a linearly polarized beam, a transmission cell containing corn syrup, and a second polaroid sheet to detect the rotation of the direction of polarization as the light passes through the corn syrup.
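The departure of the phase difference from $\pi$ is $\omega A/c = 2\pi A/\lambda$, which is small because a molecule is small compared with a wavelength. A quick estimate (the molecular size and wavelength below are illustrative assumptions, not numbers from the text):

```python
import math

# Illustrative numbers: a molecular "diameter" A of about 1 nm,
# and green light of wavelength 500 nm.
A = 1e-9             # m
wavelength = 500e-9  # m
c = 3.0e8            # m/s

omega = 2 * math.pi * c / wavelength
offset = omega * A / c            # departure of the phase difference from pi
assert abs(offset - 2 * math.pi * A / wavelength) < 1e-12
assert 0.01 < offset < 0.02       # about 0.013 radian: small, but not zero
```

It is just because this offset is small but nonzero that the generated $x$-component is small compared with the $y$-component, and the polarization rotates gradually rather than abruptly.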
33–6 The intensity of reflected light
Let us now consider quantitatively the reflection coefficient as a function of angle. Figure 33–6(a) shows a beam of light striking a glass surface, where it is partly reflected and partly refracted into the glass. Let us suppose that the incident beam, of unit amplitude, is linearly polarized normal to the plane of the paper. We will call the amplitude of the reflected wave $b$, and the amplitude of the refracted wave $a$. The refracted and reflected waves will, of course, be linearly polarized, and the electric field vectors of the incident, reflected, and refracted waves are all parallel to each other. Figure 33–6(b) shows the same situation, but now we suppose that the incident wave, of unit amplitude, is polarized in the plane of the paper. Now let us call the amplitude of the reflected and refracted wave $B$ and $A$, respectively. We wish to calculate how strong the reflection is in the two situations illustrated in Fig. 33–6(a) and 33–6(b). We already know that when the angle between the reflected beam and refracted beam is a right angle, there will be no reflected wave in Fig. 33–6(b), but let us see if we cannot get a quantitative answer—an exact formula for $B$ and $b$ as a function of the angle of incidence, $i$. The principle that we must understand is as follows. The currents that are generated in the glass produce two waves. First, they produce the reflected wave. Moreover, we know that if there were no currents generated in the glass, the incident wave would continue straight into the glass. Remember that all the sources in the world make the net field. The source of the incident light beam produces a field of unit amplitude, which would move into the glass along the dotted line in the figure. This field is not observed, and therefore the currents generated in the glass must produce a field of amplitude $-1$, which moves along the dotted line. Using this fact, we will calculate the amplitude of the refracted waves, $a$ and $A$. In Fig. 
33–6(a) we see that the field of amplitude $b$ is radiated by the motion of charges inside the glass which are responding to a field $a$ inside the glass, and that therefore $b$ is proportional to $a$. We might suppose that since our two figures are exactly the same, except for the direction of polarization, the ratio $B/A$ would be the same as the ratio $b/a$. This is not quite true, however, because in Fig. 33–6(b) the polarization directions are not all parallel to each other, as they are in Fig. 33–6(a). It is only the component of the electric field in the glass which is perpendicular to $B$, $A \cos\,(i + r)$, which is effective in producing $B$. The correct expression for the proportionality is then \begin{equation} \label{Eq:I:33:1} \frac{b}{a}=\frac{B}{A \cos\,(i + r)}. \end{equation} Now we use a trick. We know that in both (a) and (b) of Fig. 33–6 the electric field in the glass must produce oscillations of the charges, which generate a field of amplitude $-1$, polarized parallel to the incident beam, and moving in the direction of the dotted line. But we see from part (b) of the figure that only the component of the electric field in the glass that is normal to the dashed line has the right polarization to produce this field, whereas in Fig. 33–6(a) the full amplitude $a$ is effective, since the polarization of wave $a$ is parallel to the polarization of the wave of amplitude $-1$. Therefore we can write \begin{equation} \label{Eq:I:33:2} \frac{A \cos\,(i - r)}{a}=\frac{-1}{-1}, \end{equation} since the two amplitudes on the left side of Eq. (33.2) each produce the wave of amplitude $-1$. Dividing Eq. (33.1) by Eq. (33.2), we obtain \begin{equation} \label{Eq:I:33:3} \frac{B}{b}=\frac{\cos\,(i+r)}{\cos\,(i-r)}, \end{equation} a result which we can check against what we already know. If we set $(i + r) = 90^\circ$, Eq. (33.3) gives $B = 0$, as Brewster says it should be, so our results so far are at least not obviously wrong. 
We have assumed unit amplitudes for the incident waves, so that $\abs{B}^2/1^2$ is the reflection coefficient for waves polarized in the plane of incidence, and $\abs{b}^2/1^2$ is the reflection coefficient for waves polarized normal to the plane of incidence. The ratio of these two reflection coefficients is determined by Eq. (33.3). Now we perform a miracle, and compute not just the ratio, but each coefficient $\abs{B}^2$ and $\abs{b}^2$ individually! We know from the conservation of energy that the energy in the refracted wave must be equal to the incident energy minus the energy in the reflected wave, $1-\abs{B}^2$ in one case, $1-\abs{b}^2$ in the other. Furthermore, the energy which passes into the glass in Fig. 33–6(b) is to the energy which passes into the glass in Fig. 33–6(a) as the ratio of the squares of the refracted amplitudes, $\abs{A}^2/\abs{a}^2$. One might ask whether we really know how to compute the energy inside the glass, because, after all, there are energies of motion of the atoms in addition to the energy in the electric field. But it is obvious that all of the various contributions to the total energy will be proportional to the square of the amplitude of the electric field. Therefore we can write \begin{equation} \label{Eq:I:33:4} \frac{1-\abs{B}^2}{1-\abs{b}^2}= \frac{\abs{A}^2}{\abs{a}^2}. \end{equation} We now substitute Eq. (33.2) to eliminate $A/a$ from the expression above, and express $B$ in terms of $b$ by means of Eq. (33.3): \begin{equation} \label{Eq:I:33:5} \frac{1-\abs{b}^2\,\dfrac{\cos^2\,(i+r)}{\cos^2\,(i-r)}} {1-\abs{b}^2}=\frac{1}{\cos^2\,(i-r)}. \end{equation} This equation contains only one unknown amplitude, $b$. Solving for $\abs{b}^2$, we obtain \begin{equation} \label{Eq:I:33:6} \abs{b}^2=\frac{\sin^2\,(i-r)}{\sin^2\,(i+r)} \end{equation} and, with the aid of (33.3), \begin{equation} \label{Eq:I:33:7} \abs{B}^2=\frac{\tan^2\,(i-r)}{\tan^2\,(i+r)}. 
\end{equation} So we have found the reflection coefficient $\abs{b}^2$ for an incident wave polarized perpendicular to the plane of incidence, and also the reflection coefficient $\abs{B}^2$ for an incident wave polarized in the plane of incidence! It is possible to go on with arguments of this nature and deduce that $b$ is real. To prove this, one must consider a case where light is coming from both sides of the glass surface at the same time, a situation not easy to arrange experimentally, but fun to analyze theoretically. If we analyze this general case, we can prove that $b$ must be real, and therefore, in fact, that $b = \pm\sin\,(i - r)/\sin\,(i + r)$. It is even possible to determine the sign by considering the case of a very, very thin layer in which there is reflection from the front and from the back surfaces, and calculating how much light is reflected. We know how much light should be reflected by a thin layer, because we know how much current is generated, and we have even worked out the fields produced by such currents. One can show by these arguments that \begin{equation} \label{Eq:I:33:8} b=-\frac{\sin\,(i-r)}{\sin\,(i+r)},\quad B=-\frac{\tan\,(i-r)}{\tan\,(i+r)}. \end{equation} These expressions for the reflection coefficients as a function of the angles of incidence and refraction are called Fresnel’s reflection formulas. If we consider the limit as the angles $i$ and $r$ go to zero, we find, for the case of normal incidence, that $B^2\approx b^2\approx(i-r)^2/(i+r)^2$ for both polarizations, since the sines are practically equal to the angles, as are also the tangents. But we know that $\sin i/\sin r = n$, and when the angles are small, $i/r \approx n$. It is thus easy to show that the coefficient of reflection for normal incidence is \begin{equation*} B^2=b^2=\frac{(n-1)^2}{(n+1)^2}. \end{equation*} It is interesting to find out how much light is reflected at normal incidence from the surface of water, for example. 
For water, $n$ is $4/3$, so that the reflection coefficient is $(1/7)^2 \approx 2\%$. At normal incidence, only two percent of the light is reflected from the surface of water.
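Fresnel's formulas, Eqs. (33.6) and (33.7), are easy to evaluate numerically; the sketch below checks the two limits worked out above, using water, $n = 4/3$:

```python
import math

def fresnel_reflectances(i, n):
    """|b|^2 and |B|^2 from Fresnel's formulas, Eqs. (33.6)-(33.7):
    polarization normal to, and in, the plane of incidence."""
    r = math.asin(math.sin(i) / n)   # angle of refraction, Snell's law
    b2 = math.sin(i - r)**2 / math.sin(i + r)**2
    B2 = math.tan(i - r)**2 / math.tan(i + r)**2
    return b2, B2

n = 4.0 / 3.0   # water

# Near-normal incidence both coefficients approach (n-1)^2/(n+1)^2 = 1/49,
# about two percent.
b2, B2 = fresnel_reflectances(1e-6, n)
expected = (n - 1)**2 / (n + 1)**2
assert abs(expected - 1.0 / 49.0) < 1e-12
assert abs(b2 - expected) < 1e-6 and abs(B2 - expected) < 1e-6

# At Brewster's angle, tan(i) = n, the in-plane coefficient vanishes.
b2, B2 = fresnel_reflectances(math.atan(n), n)
assert B2 < 1e-12
```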
33–7 Anomalous refraction
The last polarization effect we shall consider was actually one of the first to be discovered: anomalous refraction. Sailors visiting Iceland brought back to Europe crystals of Iceland spar (CaCO$_3$) which had the amusing property of making anything seen through the crystal appear doubled, i.e., as two images. This came to the attention of Huygens, and played an important role in the discovery of polarization. As is often the case, the phenomena which are discovered first are the hardest, ultimately, to explain. It is only after we understand a physical concept thoroughly that we can carefully select those phenomena which most clearly and simply demonstrate the concept. Anomalous refraction is a particular case of the same birefringence that we considered earlier. Anomalous refraction comes about when the optic axis, the long axis of our asymmetric molecules, is not parallel to the surface of the crystal. In Fig. 33–7 are drawn two pieces of birefringent material, with the optic axis as shown. In part (a) of the figure, the incident beam falling on the material is linearly polarized in a direction perpendicular to the optic axis of the material. When this beam strikes the surface of the material, each point on the surface acts as a source of a wave which travels into the crystal with velocity $v_\perp$, the velocity of light in the crystal when the plane of polarization is normal to the optic axis. The wavefront is just the envelope or locus of all these little spherical waves, and this wavefront moves straight through the crystal and out the other side. This is just the ordinary behavior we would expect, and this ray is called the ordinary ray. In part (b) of the figure the linearly polarized light falling on the crystal has its direction of polarization turned through $90^\circ$, so that the optic axis lies in the plane of polarization. 
When we now consider the little waves originating at any point on the surface of the crystal, we see that they do not spread out as spherical waves. Light travelling along the optic axis travels with velocity $v_\perp$ because the polarization is perpendicular to the optic axis, whereas the light travelling perpendicular to the optic axis travels with velocity $v_\parallel$ because the polarization is parallel to the optic axis. In a birefringent material $v_\parallel\neq v_\perp$, and in the figure $v_\parallel < v_\perp$. A more complete analysis will show that the waves spread out on the surface of an ellipsoid, with the optic axis as major axis of the ellipsoid. The envelope of all these elliptical waves is the wavefront which proceeds through the crystal in the direction shown. Again, at the back surface the beam will be deflected just as it was at the front surface, so that the light emerges parallel to the incident beam, but displaced from it. Clearly, this beam does not follow Snell’s law, but goes in an extraordinary direction. It is therefore called the extraordinary ray. When an unpolarized beam strikes an anomalously refracting crystal, it is separated into an ordinary ray, which travels straight through in the normal manner, and an extraordinary ray which is displaced as it passes through the crystal. These two emergent rays are linearly polarized at right angles to each other. That this is true can be readily demonstrated with a sheet of polaroid to analyze the polarization of the emergent rays. We can also demonstrate that our interpretation of this phenomenon is correct by sending linearly polarized light into the crystal. By properly orienting the direction of polarization of the incident beam, we can make this light go straight through without splitting, or we can make it go through without splitting but with a displacement. We have represented all the various polarization cases in Figs. 
33–1 and 33–2 as superpositions of two special polarization cases, namely $x$ and $y$ in various amounts and phases. Other pairs could equally well have been used. Polarization along any two perpendicular axes $x'$, $y'$ inclined to $x$ and $y$ would serve as well [for example, any polarization can be made up of superpositions of cases (a) and (e) of Fig. 33–2]. It is interesting, however, that this idea can be extended to other cases also. For example, any linear polarization can be made up by superposing suitable amounts at suitable phases of right and left circular polarizations [cases (c) and (g) of Fig. 33–2], since two equal vectors rotating in opposite directions add to give a single vector oscillating in a straight line (Fig. 33–8). If the phase of one is shifted relative to the other, the line is inclined. Thus all the pictures of Fig. 33–1 could be labeled “the superposition of equal amounts of right and left circularly polarized light at various relative phases.” As the left slips behind the right in phase, the direction of the linear polarization changes. Therefore optically active materials are, in a sense, birefringent. Their properties can be described by saying that they have different indexes for right- and left-hand circularly polarized light. Superposition of right and left circularly polarized light of different intensities produces elliptically polarized light. Circularly polarized light has another interesting property—it carries angular momentum (about the direction of propagation). To illustrate this, suppose that such light falls on an atom represented by a harmonic oscillator that can be displaced equally well in any direction in the plane $xy$. Then the $x$-displacement of the electron will respond to the $E_x$ component of the field, while the $y$-component responds, equally, to the equal $E_y$ component of the field but $90^\circ$ behind in phase. 
That is, the responding electron goes around in a circle, with angular velocity $\omega$, in response to the rotating electric field of the light (Fig. 33–9). Depending on the damping characteristics of the response of the oscillator, the direction of the displacement $\FLPa$ of the electron and the direction of the force $q_e\FLPE$ on it need not be the same, but they rotate around together. The $\FLPE$ may have a component at right angles to $\FLPa$, so work is done on the system and a torque $\tau$ is exerted. The work done per second is $\tau\omega$. Over a period of time $T$ the energy absorbed is $\tau\omega T$, while $\tau T$ is the angular momentum delivered to the matter absorbing the energy. We see therefore that a beam of right circularly polarized light containing a total energy $\energy$ carries an angular momentum (with vector directed along the direction of propagation) $\energy/\omega$. For when this beam is absorbed, that angular momentum is delivered to the absorber. Left-hand circular light carries angular momentum of the opposite sign, $-\energy/\omega$.
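The energy–angular-momentum bookkeeping above can be checked numerically. The following is a minimal sketch, not from the text, with all parameter values made up: it computes the steady-state response of an isotropic two-dimensional oscillator to a rotating field and verifies that the absorbed power is exactly $\omega$ times the delivered torque, so that the energy absorbed is $\omega$ times the angular momentum.

```python
# All parameter values here are hypothetical, chosen for illustration.
q, m, E0 = 1.0, 1.0, 1.0               # charge, mass, field strength
omega0, gamma, omega = 2.0, 0.3, 1.3   # natural frequency, damping, drive

# Work in the complex plane, zeta = x + i*y; a right-circular field is
# E0 e^{i omega t}, and the steady-state response is zeta = A e^{i omega t}:
A = (q * E0 / m) / (omega0**2 - omega**2 + 1j * gamma * omega)

# With plane vectors encoded as complex numbers, a.b = Re(conj(a) b) and
# (a x b)_z = Im(conj(a) b); both quantities below are time-independent
# because the motion is circular.
power  = (q * E0 * 1j * omega * A).real      # F.v: work done per second
torque = (q * E0 * A.conjugate()).imag       # (r x F)_z: torque on electron
print(power / torque)                        # -> omega = 1.3 (up to rounding)
```

Dividing the energy absorbed in a time $T$, $\tau\omega T$, by the angular momentum delivered, $\tau T$, gives $\omega$ no matter how the oscillator is tuned or damped, which is why the ratio comes out to the drive frequency here.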
34 Relativistic Effects in Radiation

34–1 Moving sources
In the present chapter we shall describe a number of miscellaneous effects in connection with radiation, and then we shall be finished with the classical theory of light propagation. In our analysis of light, we have gone rather far and into considerable detail. The only phenomenon of any consequence associated with electromagnetic radiation that we have not discussed is what happens if radiowaves are contained in a box with reflecting walls, the size of the box being comparable to a wavelength, or are transmitted down a long tube. The phenomena of so-called cavity resonators and waveguides we shall discuss later; we shall first use another physical example—sound—and then we shall return to this subject. Except for this, the present chapter is our last consideration of the classical theory of light. We can summarize all the effects that we shall now discuss by remarking that they have to do with the effects of moving sources. We no longer assume that the source is localized, with all its motion being at a relatively low speed near a fixed point. We recall that the fundamental laws of electrodynamics say that, at large distances from a moving charge, the electric field is given by the formula \begin{equation} \label{Eq:I:34:1} \FLPE=-\frac{q}{4\pi\epsO c^2}\, \frac{d^2\FLPe_{R'}}{dt^2}. \end{equation} The second derivative of the unit vector $\FLPe_{R'}$, which points in the apparent direction of the charge, is the determining feature of the electric field. This unit vector does not point toward the present position of the charge, of course, but rather in the direction that the charge would seem to be, if the information travels only at the finite speed $c$ from the charge to the observer. Associated with the electric field is a magnetic field, always at right angles to the electric field and at right angles to the apparent direction of the source, given by the formula \begin{equation} \label{Eq:I:34:2} \FLPB=-\FLPe_{R'}\times\FLPE/c. 
\end{equation} Until now we have considered only the case in which motions are nonrelativistic in speed, so that there is no appreciable motion in the direction of the source to be considered. Now we shall be more general and study the case where the motion is at an arbitrary velocity, and see what different effects may be expected in those circumstances. We shall let the motion be at an arbitrary speed, but of course we shall still assume that the detector is very far from the source. We already know from our discussion in Chapter 28 that the only things that count in $d^2\FLPe_{R'}/dt^2$ are the changes in the direction of $\FLPe_{R'}$. Let the coordinates of the charge be $(x,y,z)$, with $z$ measured along the direction of observation (Fig. 34–1). At a given moment in time, say the moment $\tau$, the three components of the position are $x(\tau)$, $y(\tau)$, and $z(\tau)$. The distance $R$ is very nearly equal to $R(\tau) = R_0 + z(\tau)$. Now the direction of the vector $\FLPe_{R'}$ depends mainly on $x$ and $y$, but hardly at all upon $z$: the transverse components of the unit vector are $x/R$ and $y/R$, and when we differentiate these components we get things like $R^2$ in the denominator: \begin{equation*} \ddt{(x/R)}{t}=\frac{dx/dt}{R}-\ddt{z}{t}\,\frac{x}{R^2}. \end{equation*} So, when we are far enough away the only terms we have to worry about are the variations of $x$ and $y$. Thus we take out the factor $R_0$ and get \begin{align} E_x&=-\frac{q}{4\pi\epsO c^2R_0}\,\frac{d^2x'}{dt^2},\notag\\[1ex] \label{Eq:I:34:3} E_y&=-\frac{q}{4\pi\epsO c^2R_0}\,\frac{d^2y'}{dt^2}, \end{align} where $R_0$ is the distance, more or less, to $q$; let us take it as the distance $OP$ to the origin of the coordinates $(x,y,z)$. Thus the electric field is a constant multiplied by a very simple thing, the second derivatives of the $x$- and $y$-coordinates. 
(We could put it more mathematically by calling $x$ and $y$ the transverse components of the position vector $\FLPr$ of the charge, but this would not add to the clarity.) Of course, we realize that the coordinates must be measured at the retarded time. Here we find that $z(\tau)$ does affect the retardation. What time is the retarded time? If the time of observation is called $t$ (the time at $P$) then the time $\tau$ to which this corresponds at $A$ is not the time $t$, but is delayed by the total distance that the light has to go, divided by the speed of light. In the first approximation, this delay is $R_0/c$, a constant (an uninteresting feature), but in the next approximation we must include the effects of the position in the $z$-direction at the time $\tau$, because if $q$ is a little farther back, there is a little more retardation. This is an effect that we have neglected before, and it is the only change needed in order to make our results valid for all speeds. What we must now do is to choose a certain value of $t$ and calculate the value of $\tau$ from it, and thus find out where $x$ and $y$ are at that $\tau$. These are then the retarded $x$ and $y$, which we call $x'$ and $y'$, whose second derivatives determine the field. Thus $\tau$ is determined by \begin{equation} t=\tau+\frac{R_0}{c}+\frac{z(\tau)}{c}\notag \end{equation} and \begin{equation} \label{Eq:I:34:4} x'(t)=x(\tau),\quad y'(t)=y(\tau). \end{equation} Now these are complicated equations, but it is easy enough to make a geometrical picture to describe their solution. This picture will give us a good qualitative feeling for how things work, but it still takes a lot of detailed mathematics to deduce the precise results of a complicated problem.
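As a concrete illustration, here is a small numerical sketch (units with $c = 1$ and an assumed circular trajectory, neither of which comes from the text) that solves the retardation condition $t = \tau + z(\tau)/c$ for $\tau$ by bisection. This works because the function $\tau + z(\tau)/c$ is monotonically increasing in $\tau$ whenever the line-of-sight speed of the charge is less than $c$, so the root is unique.

```python
import numpy as np

c = 1.0                               # units with c = 1 (a sketch assumption)
a, w = 1.0, 0.5                       # hypothetical circular motion, speed a*w < c
x = lambda tau: a * np.cos(w * tau)   # transverse coordinate of the charge
z = lambda tau: a * np.sin(w * tau)   # line-of-sight coordinate

def retarded_time(t, tol=1e-12):
    """Solve t = tau + z(tau)/c by bisection; the function tau + z(tau)/c
    is increasing because |dz/dtau| < c, so the root is unique."""
    lo, hi = t - a / c - 1.0, t + a / c + 1.0   # bracket, since |z| <= a
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + z(mid) / c < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t = 3.0
tau = retarded_time(t)
x_ret = x(tau)                    # the retarded x', whose d2/dt2 gives E_x
print(tau, tau + z(tau) / c)      # the second number reproduces t = 3.0
```

Sampling `retarded_time` over a grid of observation times $t$ and differentiating `x_ret` twice then gives the field, which is exactly the recipe of Eq. (34.4).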
34–2 Finding the “apparent” motion
The above equation has an interesting simplification. If we disregard the uninteresting constant delay $R_0/c$, which just means that we must change the origin of $t$ by a constant, then it says that \begin{equation} \label{Eq:I:34:5} ct=c\tau+z(\tau),\quad x'=x(\tau),\quad y'=y(\tau). \end{equation} Now we need to find $x'$ and $y'$ as functions of $t$, not $\tau$, and we can do this in the following way: Eq. (34.5) says that we should take the actual motion and add a constant (the speed of light) times $\tau$. What that turns out to mean is shown in Fig. 34–2. We take the actual motion of the charge (shown at left) and imagine that as it is going around it is being swept away from the point $P$ at the speed $c$ (there are no contractions from relativity or anything like that; this is just a mathematical addition of the $c\tau$). In this way we get a new motion, in which the line-of-sight coordinate is $ct$, as shown at the right. (The figure shows the result for a rather complicated motion in a plane, but of course the motion may not be in one plane—it may be even more complicated than motion in a plane.) The point is that the horizontal (i.e., line-of-sight) distance now is no longer the old $z$, but is $z + c\tau$, and therefore is $ct$. Thus we have found a picture of the curve, $x'$ (and $y'$) against $t$! All we have to do to find the field is to look at the acceleration of this curve, i.e., to differentiate it twice. So the final answer is: in order to find the electric field for a moving charge, take the motion of the charge and translate it back at the speed $c$ to “open it out”; then the curve, so drawn, is a curve of the $x'$ and $y'$ positions as a function of $t$. The acceleration of this curve gives the electric field as a function of $t$. 
Or, if we wish, we can now imagine that this whole “rigid” curve moves forward at the speed $c$ through the plane of sight, so that the point of intersection with the plane of sight has the coordinates $x'$ and $y'$. The acceleration of this point makes the electric field. This solution is just as exact as the formula we started with—it is simply a geometrical representation. If the motion is relatively slow, for instance if we have an oscillator just going up and down slowly, then when we shoot that motion away at the speed of light, we would get, of course, a simple cosine curve, and that gives a formula we have been looking at for a long time: it gives the field produced by an oscillating charge. A more interesting example is an electron moving rapidly, very nearly at the speed of light, in a circle. If we look in the plane of the circle, the retarded $x'(t)$ appears as shown in Fig. 34–3. What is this curve? If we imagine a radius vector from the center of the circle to the charge, and if we extend this radial line a little bit past the charge, just a shade if it is going fast, then we come to a point on the line that goes at the speed of light. Therefore, when we translate the motion back at the speed of light, that corresponds to having a wheel with a charge on it rolling backward (without slipping) at the speed $c$; thus we find a curve which is very close to a cycloid—it is called a curtate cycloid. If the charge is going very nearly at the speed of light, the “cusps” are very sharp indeed; if it went at exactly the speed of light, they would be actual cusps, infinitely sharp. “Infinitely sharp” is interesting; it means that near a cusp the second derivative is enormous. Once in each cycle we get a sharp pulse of electric field. This is not at all what we would get from a nonrelativistic motion, where each time the charge goes around there is an oscillation which is of about the same “strength” all the time. 
Instead, there are very sharp pulses of electric field spaced at time intervals $T_0$ apart, where $T_0$ is the period of revolution. These strong electric fields are emitted in a narrow cone in the direction of motion of the charge. When the charge is moving away from $P$, there is very little curvature and there is very little radiated field in the direction of $P$.
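The sweep-back construction is easy to carry out numerically. The sketch below (assumed units with $a = c = 1$, not from the text) builds the apparent curve $x'$ versus $ct$ for a charge moving in a circle and estimates the peak of $d^2x'/dt^2$. Raising the speed from $0.5c$ to $0.95c$ makes the pulses at the near-cusps of the curtate cycloid dramatically stronger, as described above.

```python
import numpy as np

def peak_accel(v, n=400001):
    """Peak |d^2 x'/dt^2| on the apparent curve for a charge circling at
    speed v, in units with a = c = 1, via the sweep-back construction."""
    w = v                                      # angular velocity: v = a*w, a = 1
    tau = np.linspace(0.0, 4 * np.pi / w, n)   # two revolutions
    ct = tau + np.sin(w * tau)                 # line-of-sight coordinate -> ct
    x = np.cos(w * tau)                        # retarded transverse coordinate x'
    d2x = np.gradient(np.gradient(x, ct), ct)  # field ~ second derivative
    return np.max(np.abs(d2x))

print(peak_accel(0.50))    # ~1: gentle wiggles, nearly nonrelativistic
print(peak_accel(0.95))    # ~360: sharp pulses near the cycloid cusps
```

A short calculation shows the peak is $\omega^2/(1 - v/c)^2$ in these units, so the numerical values ($1$ and about $361$) track the $(1 - v/c)^{-2}$ sharpening of the pulses.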
34–3 Synchrotron radiation
We have very fast electrons moving in circular paths in the synchrotron; they are travelling at very nearly the speed $c$, and it is possible to see the above radiation as actual light! Let us discuss this in more detail. In the synchrotron we have electrons which go around in circles in a uniform magnetic field. First, let us see why they go in circles. From Eq. (28.2), we know that the force on a particle in a magnetic field is given by \begin{equation} \label{Eq:I:34:6} \FLPF=q\FLPv\times\FLPB, \end{equation} and it is at right angles both to the field and to the velocity. As usual, the force is equal to the rate of change of momentum with time. If the field is directed upward out of the paper, the momentum of the particle and the force on it are as shown in Fig. 34–4. Since the force is at right angles to the velocity, the kinetic energy, and therefore the speed, remains constant. All the magnetic field does is to change the direction of motion. In a short time $\Delta t$, the momentum vector changes at right angles to itself by an amount $\Delta\FLPp = \FLPF\,\Delta t$, and therefore $\FLPp$ turns through an angle $\Delta\theta =$ $\Delta p/p =$ $qvB\,\Delta t/p$, since $\abs{\FLPF}=qvB$. But in this same time the particle has gone a distance $\Delta s = v\,\Delta t$. Evidently, the two lines $AB$ and $CD$ will intersect at a point $O$ such that $OA =$ $OC =$ $R$, where $\Delta s = R\,\Delta\theta$. Combining this with the previous expressions, we find $R\,\Delta\theta/\Delta t =$ $R\omega =$ $v =$ $qvBR/p$, from which we find \begin{equation} \label{Eq:I:34:7} p=qBR \end{equation} and \begin{equation} \label{Eq:I:34:8} \omega=qvB/p. \end{equation} Since this same argument can be applied during the next instant, the next, and so on, we conclude that the particle must be moving in a circle of radius $R$, with angular velocity $\omega$. 
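This argument is easy to check by direct integration. The following sketch (hypothetical unit values for $q$, $m$, and $B$; nonrelativistic, so $p = mv$) integrates $d\FLPp/dt = q\FLPv\times\FLPB$ for one revolution and confirms that the orbit is a circle of radius $p/qB$ and that the speed stays constant.

```python
import numpy as np

q, m, B = 1.0, 1.0, 2.0                    # hypothetical values; B along +z
Bvec = np.array([0.0, 0.0, B])
accel = lambda v: q * np.cross(v, Bvec) / m

T = 2 * np.pi * m / (q * B)                # one cyclotron period, omega = qB/m
dt, n = T / 20000, 20000
r, v = np.zeros(3), np.array([3.0, 0.0, 0.0])
p0 = m * np.linalg.norm(v)                 # initial momentum magnitude
pts = []
for _ in range(n):                         # RK4 for dr/dt = v, dv/dt = (q/m) v x B
    k1v, k1r = accel(v), v
    k2v, k2r = accel(v + 0.5 * dt * k1v), v + 0.5 * dt * k1v
    k3v, k3r = accel(v + 0.5 * dt * k2v), v + 0.5 * dt * k2v
    k4v, k4r = accel(v + dt * k3v), v + dt * k3v
    v = v + (dt / 6) * (k1v + 2 * k2v + 2 * k3v + k4v)
    r = r + (dt / 6) * (k1r + 2 * k2r + 2 * k3r + k4r)
    pts.append(r[:2].copy())

pts = np.array(pts)
radii = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
print(radii.mean(), p0 / (q * B))          # both ~1.5: R = p/(qB), Eq. (34.7)
print(np.linalg.norm(v))                   # still ~3.0: the field does no work
```

The same relation $p = qBR$ holds relativistically, with $p$ the relativistic momentum, which is why the argument in the text never needed to assume small speeds.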
The result that the momentum of the particle is equal to a charge times the radius times the magnetic field is a very important law that is used a great deal. It is important for practical purposes because if we have elementary particles which all have the same charge and we observe them in a magnetic field, we can measure the radii of curvature of their orbits and, knowing the magnetic field, thus determine the momenta of the particles. If we multiply both sides of Eq. (34.7) by $c$, and express $q$ in terms of the electronic charge, we can measure the momentum in units of the electron volt. In those units our formula is \begin{equation} \label{Eq:I:34:9} pc(\text{eV})=3\times10^8(q/q_e)BR, \end{equation} where $B$, $R$, and the speed of light are all expressed in the mks system, the latter being $3\times10^8$, numerically. The mks unit of magnetic field is called a weber per square meter. There is an older unit which is still in common use, called a gauss. One weber/m² is equal to $10^4$ gauss. To give an idea of how big magnetic fields are, the strongest magnetic field that one can usually make in iron is about $1.5\times10^4$ gauss; beyond that, the advantage of using iron disappears. Today, electromagnets wound with superconducting wire are able to produce steady fields of over $10^5$ gauss strength—that is, $10$ mks units. The field of the earth is a few tenths of a gauss at the equator. Returning to Eq. (34.9), we could imagine the synchrotron running at a billion electron volts, so $pc$ would be $10^9$ for a billion electron volts. (We shall come back to the energy in just a moment.) Then, if we had a $B$ corresponding to, say, $10{,}000$ gauss, which is a good substantial field, one mks unit, then we see that $R$ would have to be $3.3$ meters. The actual radius of the Caltech synchrotron is $3.7$ meters, the field is a little bigger, and the energy is $1.5$ billion, but it is the same idea. 
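The arithmetic of Eq. (34.9) for these numbers can be checked in a couple of lines:

```python
# Check of the radius quoted above, using Eq. (34.9): pc(eV) = 3e8 (q/q_e) B R.
c = 3.0e8                    # speed of light, mks (numerically)
pc_eV = 1.0e9                # "a billion electron volts"
B = 1.0                      # one mks unit of field, i.e. 10,000 gauss
R = pc_eV / (c * B)          # radius for a particle of electronic charge
print(R)                     # -> 3.33... meters, cf. the 3.7-m Caltech ring
```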
So now we have a feeling for why the synchrotron has the size it has. We have calculated the momentum, but we know that the total energy, including the rest energy, is given by $W=\sqrt{p^2c^2 + m^2c^4}$, and for an electron the rest energy corresponding to $mc^2$ is $0.511\times10^6$ eV, so when $pc$ is $10^9$ eV we can neglect $mc^2$, and so for all practical purposes $W = pc$ when the speeds are relativistic. It is practically the same to say the energy of an electron is a billion electron volts as to say the momentum times $c$ is a billion electron volts. If $W = 10^9$ eV, it is easy to show that the speed differs from the speed of light by but one part in eight million! We turn now to the radiation emitted by such a particle. A particle moving on a circle of radius $3.3$ meters, or $20$ meters circumference, goes around once in roughly the time it takes light to go $20$ meters. So the wavelength that should be emitted by such a particle would be $20$ meters—in the shortwave radio region. But because of the piling up effect that we have been discussing (Fig. 34–3), and because the distance by which we must extend the radius to reach the speed $c$ is only one part in eight million of the radius, the cusps of the curtate cycloid are enormously sharp compared with the distance between them. The acceleration, which involves a second derivative with respect to time, gets twice the “compression factor” of $8\times10^6$ because the time scale is reduced by eight million twice in the neighborhood of the cusp. Thus we might expect the effective wavelength to be much shorter, to the extent of $64$ times $10^{12}$ smaller than $20$ meters, and that corresponds to the x-ray region. (Actually, the cusp itself is not the entire determining factor; one must also include a certain region about the cusp. This changes the factor to the $3/2$ power instead of the square, but still leaves us above the optical region.) 
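The figures quoted in this paragraph are easy to reproduce:

```python
import math

mc2 = 0.511e6                  # electron rest energy in eV
W = 1.0e9                      # total energy in eV, so W = pc essentially
beta = math.sqrt(1.0 - (mc2 / W) ** 2)     # v/c, from W = mc^2/sqrt(1 - beta^2)
print(1.0 / (1.0 - beta))      # -> ~7.7e6: off c by "one part in eight million"

lam0 = 20.0                    # meters: wavelength for slow circulation
f = 8.0e6                      # the compression factor discussed above
print(lam0 / f**2)             # naive squared estimate, ~3e-13 m
print(lam0 / f**1.5)           # the 3/2-power estimate, ~9e-10 m, x-ray region
```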
Thus, even though a slowly moving electron would have radiated $20$-meter radiowaves, the relativistic effect cuts down the wavelength so much that we can see it! Clearly, the light should be polarized, with the electric field perpendicular to the uniform magnetic field. To further appreciate what we would observe, suppose that we were to take such light (to simplify things, because these pulses are so far apart in time, we shall just take one pulse) and direct it onto a diffraction grating, which is a lot of scattering wires. After this pulse comes away from the grating, what do we see? (We should see red light, blue light, and so on, if we see any light at all.) What do we see? The pulse strikes the grating head-on, and all the oscillators in the grating, together, are violently moved up and then back down again, just once. They then produce effects in various directions, as shown in Fig. 34–5. But the point $P$ is closer to one end of the grating than to the other, so at this point the electric field arrives first from wire $A$, next from $B$, and so on; finally, the pulse from the last wire arrives. In short, the sum of the reflections from all the successive wires is as shown in Fig. 34–6(a); it is an electric field which is a series of pulses, and it is very like a sine wave whose wavelength is the distance between the pulses, just as it would be for monochromatic light striking the grating! So, we get colored light all right. But, by the same argument, will we not get light from any kind of a “pulse”? No. Suppose that the curve were much smoother; then we would add all the scattered waves together, separated by a small time between them (Fig. 34–6b). Then we see that the field would not shake at all, it would be a very smooth curve, because each pulse does not vary much in the time interval between pulses. The electromagnetic radiation emitted by relativistic charged particles circulating in a magnetic field is called synchrotron radiation. 
It is so named for obvious reasons, but it is not limited specifically to synchrotrons, or even to earthbound laboratories. It is exciting and interesting that it also occurs in nature!
34–4 Cosmic synchrotron radiation
In the year 1054 the Chinese and Japanese civilizations were among the most advanced in the world; they were conscious of the external universe, and they recorded, most remarkably, an explosive bright star in that year. (It is amazing that none of the European monks, writing all the books of the middle ages, ever bothered to write that a star had exploded in the sky.) Today we may take a picture of that star, and what we see is shown in Fig. 34–7. On the outside is a big mass of red filaments, which is produced by the atoms of the thin gas “ringing” at their natural frequencies; this makes a bright line spectrum with different frequencies in it. The red happens in this case to be due to nitrogen. On the other hand, in the central region is a mysterious, fuzzy patch of light in a continuous distribution of frequency, i.e., there are no special frequencies associated with particular atoms. Yet this is not dust “lit up” by nearby stars, which is one way by which one can get a continuous spectrum. We can see stars through it, so it is transparent, but it is emitting light. In Fig. 34–8 we look at the same object, using light in a region of the spectrum which has no bright spectral line, so that we see only the central region. But in this case, also, polarizers have been put on the telescope, and the two views correspond to two orientations $90^\circ$ apart. We see that the pictures are different! That is to say, the light is polarized. The reason, presumably, is that there is a local magnetic field, and many very energetic electrons are going around in that magnetic field. We have just illustrated how the electrons could go around the field in a circle. 
We can add to this, of course, any uniform motion in the direction of the field, since the force, $q\FLPv\times\FLPB$, has no component in this direction and, as we have already remarked, the synchrotron radiation is evidently polarized in a direction at right angles to the projection of the magnetic field onto the plane of sight. Putting these two facts together, we see that in a region where one picture is bright and the other one is black, the light must have its electric field completely polarized in one direction. This means that there is a magnetic field at right angles to this direction, while in other regions, where there is a strong emission in the other picture, the magnetic field must be the other way. If we look carefully at Fig. 34–8, we may notice that there is, roughly speaking, a general set of “lines” that go one way in one picture and at right angles to this in the other. The pictures show a kind of fibrous structure. Presumably, the magnetic field lines will tend to extend relatively long distances in their own direction, and so, presumably, there are long regions of magnetic field with all the electrons spiralling one way, while in another region the field is the other way and the electrons are also spiralling that way. What keeps the electron energy so high for so long a time? After all, it is $900$ years since the explosion—how can they keep going so fast? How they maintain their energy and how this whole thing keeps going is still not thoroughly understood.
34–5 Bremsstrahlung
We shall next remark briefly on one other interesting effect of a very fast-moving particle that radiates energy. The idea is very similar to the one we have just discussed. Suppose that there are charged particles in a piece of matter and a very fast electron, say, comes by (Fig. 34–9). Then, because of the electric field around the atomic nucleus the electron is pulled, accelerated, so that the curve of its motion has a slight kink or bend in it. If the electron is travelling at very nearly the speed of light, what is the electric field produced in the direction $C$? Remember our rule: we take the actual motion, translate it backwards at speed $c$, and that gives us a curve whose curvature measures the electric field. It was coming toward us at the speed $v$, so we get a backward motion, with the whole picture compressed into a smaller distance in proportion as $c - v$ is smaller than $c$. So, if $1- v/c \ll 1$, there is a very sharp and rapid curvature at $B'$, and when we take the second derivative of that we get a very high field in the direction of the motion. So when very energetic electrons move through matter they spit radiation in a forward direction. This is called bremsstrahlung. As a matter of fact, the synchrotron is used, not so much to make high-energy electrons (actually if we could get them out of the machine more conveniently we would not say this) as to make very energetic photons—gamma rays—by passing the energetic electrons through a solid tungsten “target,” and letting them radiate photons from this bremsstrahlung effect.
34–6 The Doppler effect
Now we go on to consider some other examples of the effects of moving sources. Let us suppose that the source is a stationary atom which is oscillating at one of its natural frequencies, $\omega_0$. Then we know that the frequency of the light we would observe is $\omega_0$. But now let us take another example, in which we have a similar oscillator oscillating with a frequency $\omega_1$, and at the same time the whole atom, the whole oscillator, is moving along in a direction toward the observer at velocity $v$. Then the actual motion in space, of course, is as shown in Fig. 34–10(a). Now we play our usual game, we add $c\tau$; that is to say, we translate the whole curve backward and we find then that it oscillates as in Fig. 34–10(b). In a given amount of time $\tau$, when the oscillator would have gone a distance $v\tau$, on the $x'$ vs. $ct$ diagram it goes a distance $(c - v)\tau$. So all the oscillations of frequency $\omega_1$ in the time $\Delta\tau$ are now found in the interval $\Delta t=(1-v/c)\,\Delta\tau$; they are squashed together, and as this curve comes by us at speed $c$, we will see light of a higher frequency, higher by just the compression factor $(1 - v/c)$. Thus we observe \begin{equation} \label{Eq:I:34:10} \omega=\frac{\omega_1}{1-v/c}. \end{equation} We can, of course, analyze this situation in various other ways. Suppose that the atom were emitting, instead of sine waves, a series of pulses, pip, pip, pip, pip, at a certain frequency $\omega_1$. At what frequency would they be received by us? The first one that arrives has a certain delay, but the next one is delayed less because in the meantime the atom moves closer to the receiver. Therefore, the time between the “pips” is decreased by the motion. If we analyze the geometry of the situation, we find that the frequency of the pips is increased by the factor $1/(1 - v/c)$. 
Is $\omega = \omega_0/(1 - v/c)$, then, the frequency that would be observed if we took an ordinary atom, which had a natural frequency $\omega_0$, and moved it toward the receiver at speed $v$? No; as we well know, the natural frequency $\omega_1$ of a moving atom is not the same as that measured when it is standing still, because of the relativistic dilation in the rate of passage of time. Thus if $\omega_0$ were the true natural frequency, then the modified natural frequency $\omega_1$ would be \begin{equation} \label{Eq:I:34:11} \omega_1=\omega_0\sqrt{1-v^2/c^2}. \end{equation} Therefore the observed frequency $\omega$ is \begin{equation} \label{Eq:I:34:12} \omega=\frac{\omega_0\sqrt{1-v^2/c^2}}{1-v/c}. \end{equation} The shift in frequency observed in the above situation is called the Doppler effect: if something moves toward us the light it emits appears more violet, and if it moves away it appears more red. We shall now give two more derivations of this same interesting and important result. Suppose, now, that the source is standing still and is emitting waves at frequency $\omega_0$, while the observer is moving with speed $v$ toward the source. After a certain period of time $t$ the observer will have moved to a new position, a distance $vt$ from where he was at $t = 0$. How many radians of phase will he have seen go by? A certain number, $\omega_0t$, went past any fixed point, and in addition the observer has swept past some more by his own motion, namely a number $vtk_0$ (the number of radians per meter times the distance). So the total number of radians in the time $t$, or the observed frequency, would be $\omega_1 = \omega_0 + k_0v$. We have made this analysis from the point of view of a man at rest; we would like to know how it would look to the man who is moving. Here we have to worry again about the difference in clock rate for the two observers, and this time that means that we have to divide by $\sqrt{1 - v^2/c^2}$. 
So if $k_0$ is the wave number, the number of radians per meter in the direction of motion, and $\omega_0$ is the frequency, then the observed frequency for a moving man is \begin{equation} \label{Eq:I:34:13} \omega=\frac{\omega_0+k_0v}{\sqrt{1-v^2/c^2}}. \end{equation} For the case of light, we know that $k_0 = \omega_0/c$. So, in this particular problem, the equation would read \begin{equation} \label{Eq:I:34:14} \omega=\frac{\omega_0(1+v/c)}{\sqrt{1-v^2/c^2}}, \end{equation} which looks completely unlike formula (34.12)! Is the frequency that we would observe if we move toward a source different than the frequency that we would see if the source moved toward us? Of course not! The theory of relativity says that these two must be exactly equal. If we were expert enough mathematicians we would probably recognize that these two mathematical expressions are exactly equal! In fact, the necessary equality of the two expressions is one of the ways by which some people like to demonstrate that relativity requires a time dilation, because if we did not put those square-root factors in, they would no longer be equal. Since we know about relativity, let us analyze it in still a third way, which may appear a little more general. (It is really the same thing, since it makes no difference how we do it!) According to the relativity theory there is a relationship between position and time as observed by one man and position and time as seen by another who is moving relative to him. We wrote down those relationships long ago (Chapter 16). This is the Lorentz transformation and its inverse: \begin{equation} \begin{alignedat}{3} x'&=\frac{x+vt}{\sqrt{1-v^2/c^2}},&&\quad x&&=\frac{x'-vt'}{\sqrt{1-v^2/c^2}},\\[1ex] t'&=\frac{t+vx/c^2}{\sqrt{1-v^2/c^2}},&&\quad t&&=\frac{t'-vx'/c^2}{\sqrt{1-v^2/c^2}}. 
\end{alignedat} \label{Eq:I:34:15} \end{equation} If we were standing still on the ground, the form of a wave would be $\cos\,(\omega t - kx)$; all the nodes and maxima and minima would follow this form. But what would a man in motion, observing the same physical wave, see? Where the field is zero, the positions of all the nodes are the same (when the field is zero, everyone measures the field as zero); that is a relativistic invariant. So the form is the same for the other man too, except that we must transform it into his frame of reference: \begin{equation*} \cos\,(\omega t - kx)=\cos\biggl[ \omega\,\frac{t'-vx'/c^2}{\sqrt{1-v^2/c^2}}-k\, \frac{x'-vt'}{\sqrt{1-v^2/c^2}}\biggr]. \end{equation*} If we regroup the terms inside the brackets, we get \begin{alignat}{4} \cos\,(\omega t - kx) &=\cos \biggl[ \displaystyle\underbrace{\frac{\omega+kv}{\sqrt{1-v^2/c^2}}} \,&t'&-& \displaystyle\underbrace{\frac{k+v\omega/c^2}{\sqrt{1-v^2/c^2}}} \,&x'& \biggr]\notag\\ \label{Eq:I:34:16} &=\cos [ \kern{2.5em}\omega' \,&t'&-& k'\kern{2.3em} \,&x'& ]. \end{alignat} This is again a wave, a cosine wave, in which there is a certain frequency $\omega'$, a constant multiplying $t'$, and some other constant, $k'$, multiplying $x'$. We call $k'$ the wave number, or the number of waves per meter, for the other man. Therefore the other man will see a new frequency and a new wave number given by \begin{align} \label{Eq:I:34:17} \omega'&=\frac{\omega+kv}{\sqrt{1-v^2/c^2}},\\[1ex] \label{Eq:I:34:18} k'&=\frac{k+\omega v/c^2}{\sqrt{1-v^2/c^2}}. \end{align} If we look at (34.17), we see that it is the same as formula (34.13), which we obtained by a more physical argument.
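That the moving-source formula (34.12) and the moving-observer formula (34.14) are indeed equal, as relativity demands, is easy to confirm numerically:

```python
import math

# Both expressions should reduce to w0 * sqrt((1+beta)/(1-beta)).
w0 = 1.0                      # the rest-frame natural frequency
for beta in (0.1, 0.5, 0.9, 0.99):
    g = math.sqrt(1.0 - beta**2)
    w_source_moving   = w0 * g / (1.0 - beta)     # Eq. (34.12)
    w_observer_moving = w0 * (1.0 + beta) / g     # Eq. (34.14)
    print(w_source_moving - w_observer_moving)    # 0 to rounding error
```

Note that if the square-root time-dilation factors are deleted from both formulas, the two results no longer agree, which is the point made above about relativity requiring the dilation.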
Relativistic Effects in Radiation

34–7 The $\boldsymbol{\omega, k}$ four-vector
The relationships indicated in Eqs. (34.17) and (34.18) are very interesting, because these say that the new frequency $\omega'$ is a combination of the old frequency $\omega$ and the old wave number $k$, and that the new wave number is a combination of the old wave number and frequency. Now the wave number is the rate of change of phase with distance, and the frequency is the rate of change of phase with time, and in these expressions we see a close analogy with the Lorentz transformation of the position and time: if $\omega$ is thought of as being like $t$, and $k$ is thought of as being like $x$ divided by $c^2$, then the new $\omega'$ will be like $t'$, and the new $k'$ will be like $x'/c^2$. That is to say, under the Lorentz transformation $\omega$ and $k$ transform the same way as do $t$ and $x$. They constitute what we call a four-vector; when a quantity has four components transforming like time and space, it is a four-vector. Everything seems all right, then, except for one little thing: we said that a four-vector has to have four components; where are the other two components? We have seen that $\omega$ and $k$ are like time and space in one space direction, but not in all directions, and so we must next study the problem of the propagation of light in three space dimensions, not just in one direction, as we have been doing up until now. Suppose that we have a coordinate system, $x$, $y$, $z$, and a wave which is travelling along and whose wavefronts are as shown in Fig. 34–11. The wavelength of the wave is $\lambda$, but the direction of motion of the wave does not happen to be in the direction of one of the axes. What is the formula for such a wave? The answer is clearly $\cos\,(\omega t - ks)$, where $k = 2\pi/\lambda$ and $s$ is the distance along the direction of motion of the wave—the component of the spatial position in the direction of motion. 
Let us put it this way: if $\FLPr$ is the vector position of a point in space, then $s$ is $\FLPr\cdot \FLPe_k$, where $\FLPe_k$ is a unit vector in the direction of motion. That is, $s$ is just $r\cos\,(\FLPr, \FLPe_k)$, the component of distance in the direction of motion. Therefore our wave is $\cos\,(\omega t - k\FLPe_k\cdot\FLPr)$. Now it turns out to be very convenient to define a vector $\FLPk$, which is called the wave vector, which has a magnitude equal to the wave number, $2\pi/\lambda$, and is pointed in the direction of propagation of the waves: \begin{equation} \label{Eq:I:34:19} \FLPk=2\pi\FLPe_k/\lambda=k\FLPe_k. \end{equation} Using this vector, our wave can be written as $\cos\,(\omega t - \FLPk\cdot\FLPr)$, or as $\cos\,(\omega t - k_xx - k_yy - k_zz)$. What is the significance of a component of $\FLPk$, say $k_x$? Clearly, $k_x$ is the rate of change of phase with respect to $x$. Referring to Fig. 34–11, we see that the phase changes as we change $x$, just as if there were a wave along $x$, but of a longer wavelength. The “wavelength in the $x$-direction” is longer than a natural, true wavelength by the secant of the angle $\alpha$ between the actual direction of propagation and the $x$-axis: \begin{equation} \label{Eq:I:34:20} \lambda_x=\lambda/\cos\alpha. \end{equation} Therefore the rate of change of phase, which is proportional to the reciprocal of $\lambda_x$, is smaller by the factor $\cos\alpha$; that is just how $k_x$ would vary—it would be the magnitude of $\FLPk$, times the cosine of the angle between $\FLPk$ and the $x$-axis! That, then, is the nature of the wave vector that we use to represent a wave in three dimensions. The four quantities $\omega$, $k_x$, $k_y$, $k_z$ transform in relativity as a four-vector, where $\omega$ corresponds to the time, and $k_x$, $k_y$, $k_z$ correspond to the $x$-, $y$-, and $z$-components of the four-vector. 
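The geometry of Eq. (34.20) can be checked in a few lines; the numbers below are arbitrary, chosen only for illustration.

```python
import math

lam = 2.0                   # true wavelength (arbitrary units)
alpha = math.radians(30)    # angle between the propagation direction and the x-axis

k = 2 * math.pi / lam       # magnitude of the wave vector, Eq. (34.19)
kx = k * math.cos(alpha)    # x-component of k
lam_x = 2 * math.pi / kx    # "wavelength in the x-direction"

# Eq. (34.20): lambda_x = lambda / cos(alpha), longer by the secant of alpha
assert math.isclose(lam_x, lam / math.cos(alpha))
assert lam_x > lam
```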
In our previous discussion of special relativity (Chapter 17), we learned that there are ways of making relativistic dot products with four-vectors. If we use the position vector $x_\mu$, where $\mu$ stands for the four components (time and three space ones), and if we call the wave vector $k_\mu$, where the index $\mu$ again has four values, time and three space ones, then the dot product of $x_\mu$ and $k_\mu$ is written $\sum'k_\mu x_\mu$ (see Chapter 17). This dot product is an invariant, independent of the coordinate system; what is it equal to? By the definition of this dot product in four dimensions, it is \begin{equation} \label{Eq:I:34:21} \sideset{}{'}\sum k_\mu x_\mu = \omega t-k_xx-k_yy-k_zz. \end{equation} We know from our study of vectors that $\sum'k_\mu x_\mu$ is invariant under the Lorentz transformation, since $k_\mu$ is a four-vector. But this quantity is precisely what appears inside the cosine for a plane wave, and it ought to be invariant under a Lorentz transformation. We cannot have a formula with something that changes inside the cosine, since we know that the phase of the wave cannot change when we change the coordinate system.
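The invariance of the phase can also be verified numerically, using the transformations (34.15), (34.17), and (34.18). The sketch below works in units where $c = 1$, with an arbitrary event $(t, x)$ and a light wave for which $k = \omega/c$.

```python
import math

c = 1.0
v = 0.8
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

omega, k = 3.0, 3.0       # a light wave moving in +x, so k = omega/c
t, x = 1.7, 0.4           # an arbitrary event

# Lorentz transformation of the event, Eq. (34.15)
tp = gamma * (t + v * x / c**2)
xp = gamma * (x + v * t)

# transformed frequency and wave number, Eqs. (34.17) and (34.18)
omega_p = gamma * (omega + k * v)
k_p = gamma * (k + omega * v / c**2)

# the phase (the four-dimensional dot product of Eq. 34.21) is invariant
assert math.isclose(omega * t - k * x, omega_p * tp - k_p * xp)
# and for light, omega^2 - c^2 k^2 = 0 in every frame
assert abs(omega_p**2 - c**2 * k_p**2) < 1e-9
```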
34–8 Aberration
In deriving Eqs. (34.17) and (34.18), we have taken a simple example where $\FLPk$ happened to be in a direction of motion, but of course we can generalize it to other cases also. For example, suppose there is a source sending out light in a certain direction from the point of view of a man at rest, but we are moving along on the earth, say (Fig. 34–12). From which direction does the light appear to come? To find out, we will have to write down the four components of $k_\mu$ and apply the Lorentz transformation. The answer, however, can be found by the following argument: we have to point our telescope at an angle to see the light. Why? Because light is coming down at the speed $c$, and we are moving sidewise at the speed $v$, so the telescope has to be tilted forward so that as the light comes down it goes “straight” down the tube. It is very easy to see that the horizontal distance is $vt$ when the vertical distance is $ct$, and therefore, if $\theta'$ is the angle of tilt, $\tan\theta' = v/c$. How nice! How nice, indeed—except for one little thing: $\theta'$ is not the angle at which we would have to set the telescope relative to the earth, because we made our analysis from the point of view of a “fixed” observer. When we said the horizontal distance is $vt$, the man on the earth would have found a different distance, since he measured with a “squashed” ruler. It turns out that, because of that contraction effect, \begin{equation} \label{Eq:I:34:22} \tan\theta=\frac{v/c}{\sqrt{1-v^2/c^2}}, \end{equation} which is equivalent to \begin{equation} \label{Eq:I:34:23} \sin\theta=v/c. \end{equation} It will be instructive for the student to derive this result, using the Lorentz transformation. This effect, that a telescope has to be tilted, is called aberration, and it has been observed. How can we observe it? Who can say where a given star should be? Suppose we do have to look in the wrong direction to see a star; how do we know it is the wrong direction? 
Because the earth goes around the sun. Today we have to point the telescope one way; six months later we have to tilt the telescope the other way. That is how we can tell that there is such an effect.
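The equivalence of Eqs. (34.22) and (34.23) follows from the identity $\sin\theta=\tan\theta/\sqrt{1+\tan^2\theta}$. The sketch below checks it numerically and estimates the size of the effect for the earth; the orbital speed of about $30$ km/s is an assumed round number, not a figure from the text.

```python
import math

def tilt_from_contraction(v, c=1.0):
    # Eq. (34.22): tan(theta) = (v/c) / sqrt(1 - v^2/c^2)
    return math.atan((v / c) / math.sqrt(1 - (v / c) ** 2))

def tilt_from_sine(v, c=1.0):
    # Eq. (34.23): sin(theta) = v/c
    return math.asin(v / c)

for v in (0.1, 0.5, 0.9):
    assert math.isclose(tilt_from_contraction(v), tilt_from_sine(v))

# stellar aberration for the earth: v ~ 30 km/s (assumed), c ~ 3e8 m/s
theta = tilt_from_sine(3.0e4, c=3.0e8)   # about 1e-4 radian
arcsec = math.degrees(theta) * 3600      # roughly 20 arc-seconds
```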
34–9 The momentum of light
Now we turn to a different topic. We have never, in all our discussion of the past few chapters, said anything about the effects of the magnetic field that is associated with light. Ordinarily, the effects of the magnetic field are very small, but there is one interesting and important effect which is a consequence of the magnetic field. Suppose that light is coming from a source and is acting on a charge and driving that charge up and down. We will suppose that the electric field is in the $x$-direction, so the motion of the charge is also in the $x$-direction: it has a position $x$ and a velocity $v$, as shown in Fig. 34–13. The magnetic field is at right angles to the electric field. Now as the electric field acts on the charge and moves it up and down, what does the magnetic field do? The magnetic field acts on the charge (say an electron) only when it is moving; but the electron is moving, it is driven by the electric field, so the two of them work together: While the thing is going up and down it has a velocity and there is a force on it, $B$ times $v$ times $q$; but in which direction is this force? It is in the direction of the propagation of light. Therefore, when light is shining on a charge and it is oscillating in response to that light, there is a driving force in the direction of the light beam. This is called radiation pressure or light pressure. Let us determine how strong the radiation pressure is. Evidently it is $F= qvB$ or, since everything is oscillating, it is the time average of this, $\avg{F}$. From (34.2) the strength of the magnetic field is the same as the strength of the electric field divided by $c$, so we need to find the average of the electric field, times the velocity, times the charge, times $1/c$: $\avg{F} = q\avg{vE}/c$. But the charge $q$ times the field $E$ is the electric force on a charge, and the force on the charge times the velocity is the work $dW/dt$ being done on the charge! 
Therefore the force, the “pushing momentum,” that is delivered per second by the light, is equal to $1/c$ times the energy absorbed from the light per second! That is a general rule, since we did not say how strong the oscillator was, or whether some of the charges cancel out. In any circumstance where light is being absorbed, there is a pressure. The momentum that the light delivers is always equal to the energy that is absorbed, divided by $c$: \begin{equation} \label{Eq:I:34:24} \avg{F} = \frac{dW/dt}{c}. \end{equation} That light carries energy we already know. We now understand that it also carries momentum, and further, that the momentum carried is always $1/c$ times the energy. When light is emitted from a source there is a recoil effect: the same thing in reverse. If an atom is emitting an energy $W$ in some direction, then there is a recoil momentum $p = W/c$. If light is reflected normally from a mirror, we get twice the force. That is as far as we shall go using the classical theory of light. Of course we know that there is a quantum theory, and that in many respects light acts like a particle. The energy of a light-particle is a constant times the frequency: \begin{equation} \label{Eq:I:34:25} W=h\nu=\hbar\omega. \end{equation} We now appreciate that light also carries a momentum equal to the energy divided by $c$, so it is also true that these effective particles, these photons, carry a momentum \begin{equation} \label{Eq:I:34:26} p=W/c=\hbar\omega/c=\hbar k. \end{equation} The direction of the momentum is, of course, the direction of propagation of the light. So, to put it in vector form, \begin{equation} \label{Eq:I:34:27} W=\hbar\omega,\quad \FLPp=\hbar\FLPk. \end{equation} We also know, of course, that the energy and momentum of a particle should form a four-vector. We have just discovered that $\omega$ and $\FLPk$ form a four-vector. 
Therefore it is a good thing that (34.27) has the same constant in both cases; it means that the quantum theory and the theory of relativity are mutually consistent. Equation (34.27) can be written more elegantly as $p_\mu = \hbar k_\mu$, a relativistic equation, for a particle associated with a wave. Although we have discussed this only for photons, for which $k$ (the magnitude of $\FLPk$) equals $\omega/c$ and $p = W/c$, the relation is much more general. In quantum mechanics all particles, not only photons, exhibit wavelike properties, but the frequency and wave number of the waves are related to the energy and momentum of particles by (34.27) (called the de Broglie relations) even when $p$ is not equal to $W/c$. In the last chapter we saw that a beam of right or left circularly polarized light also carries angular momentum in an amount proportional to the energy $\energy$ of the wave. In the quantum picture, a beam of circularly polarized light is regarded as a stream of photons, each carrying an angular momentum $\pm\hbar$, along the direction of propagation. That is what becomes of polarization in the corpuscular point of view—the photons carry angular momentum like spinning rifle bullets. But this “bullet” picture is really as incomplete as the “wave” picture, and we shall have to discuss these ideas more fully in a later chapter on Quantum Behavior.
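A rough numerical illustration of Eqs. (34.24) and (34.26): the photon momentum for green light, and the radiation pressure of sunlight on an absorbing surface. The solar intensity used below is an assumed textbook value, not something given in this chapter.

```python
h = 6.626e-34     # Planck's constant, J*s
c = 2.998e8       # speed of light, m/s

# photon momentum, Eq. (34.26): p = W/c = h*nu/c = h/lambda
lam = 500e-9                  # green light
W = h * c / lam               # photon energy, about 4e-19 J
p = W / c                     # equal to h/lambda
assert abs(p - h / lam) < 1e-40

# radiation pressure, Eq. (34.24): force per unit area = intensity / c
S = 1361.0                    # assumed solar intensity at the earth, W/m^2
pressure_absorbed = S / c     # about 4.5e-6 N/m^2 for a black surface
pressure_mirror = 2 * S / c   # normal reflection from a mirror doubles the force
```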
Color Vision

35–1 The human eye
The phenomenon of colors depends partly on the physical world. We discuss the colors of soap films and so on as being produced by interference. But also, of course, it depends on the eye, or what happens behind the eye, in the brain. Physics characterizes the light that enters the eye, but after that, our sensations are the result of photochemical-neural processes and psychological responses. There are many interesting phenomena associated with vision which involve a mixture of physical phenomena and physiological processes, and the full appreciation of natural phenomena, as we see them, must go beyond physics in the usual sense. We make no apologies for making these excursions into other fields, because the separation of fields, as we have emphasized, is merely a human convenience, and an unnatural thing. Nature is not interested in our separations, and many of the interesting phenomena bridge the gaps between fields. In Chapter 3 we have already discussed the relation of physics to the other sciences in general terms, but now we are going to look in some detail at a specific field in which physics and other sciences are very, very closely interrelated. That area is vision. In particular, we shall discuss color vision. In the present chapter we shall discuss mainly the observable phenomena of human vision, and in the next chapter we shall consider the physiological aspects of vision, both in man and in other animals. It all begins with the eye; so, in order to understand what phenomena we see, some knowledge of the eye is required. In the next chapter we shall discuss in some detail how the various parts of the eye work, and how they are interconnected with the nervous system. For the present, we shall describe only briefly how the eye functions (Fig. 35–1). 
Light enters the eye through the cornea; we have already discussed how it is bent and is imaged on a layer called the retina in the back of the eye, so that different parts of the retina receive light from different parts of the visual field outside. The retina is not absolutely uniform: there is a place, a spot, in the center of our field of view which we use when we are trying to see things very carefully, and at which we have the greatest acuity of vision; it is called the fovea or macula. The side parts of the eye, as we can immediately appreciate from our experience in looking at things, are not as effective for seeing detail as is the center of the eye. There is also a spot in the retina where the nerves carrying all the information run out; that is a blind spot. There is no sensitive part of the retina here, and it is possible to demonstrate that if we close, say, the left eye and look straight at something, and then move a finger or another small object slowly out of the field of view it suddenly disappears somewhere. The only practical use of this fact that we know of is that some physiologist became quite a favorite in the court of a king of France by pointing this out to him; in the boring sessions that he had with his courtiers, the king could amuse himself by “cutting off their heads” by looking at one and watching another’s head disappear. Figure 35–2 shows a magnified view of the inside of the retina in somewhat schematic form. In different parts of the retina there are different kinds of structures. The objects that occur more densely near the periphery of the retina are called rods. Closer to the fovea, we find, besides these rod cells, also cone cells. We shall describe the structure of these cells later. As we get close to the fovea, the number of cones increases, and in the fovea itself there are in fact nothing but cone cells, packed very tightly, so tightly that the cone cells are much finer, or narrower here than anywhere else. 
So we must appreciate that we see with the cones right in the middle of the field of view, but as we go to the periphery we have the other cells, the rods. Now the interesting thing is that in the retina each of the cells which is sensitive to light is not connected by a fiber directly to the optic nerve, but is connected to many other cells, which are themselves connected to each other. There are several kinds of cells: there are cells that carry the information toward the optic nerve, but there are others that are mainly interconnected “horizontally.” There are essentially four kinds of cells, but we shall not go into these details now. The main thing we emphasize is that the light signal is already being “thought about.” That is to say, the information from the various cells does not immediately go to the brain, spot for spot, but in the retina a certain amount of the information has already been digested, by a combining of the information from several visual receptors. It is important to understand that some brain-function phenomena occur in the eye itself.
35–2 Color depends on intensity
One of the most striking phenomena of vision is the dark adaptation of the eye. If we go into the dark from a brightly lighted room, we cannot see very well for a while, but gradually things become more and more apparent, and eventually we can see something where we could see nothing before. If the intensity of the light is very low, the things that we see have no color. It is known that this dark-adapted vision is almost entirely due to the rods, while the vision in bright light is due to the cones. As a result, there are a number of phenomena that we can easily appreciate because of this transfer of function from the cones and rods together, to just the rods. There are many situations in which, if the light intensity were stronger, we could see color, and we would find these things quite beautiful. One example is that through a telescope we nearly always see “black and white” images of faint nebulae, but W. C. Miller of the Mt. Wilson and Palomar Observatories had the patience to make color pictures of some of these objects. Nobody has ever really seen these colors with the eye, but they are not artificial colors, it is merely that the light intensity is not strong enough for the cones in our eye to see them. Among the more spectacular such objects are the ring nebula and the Crab nebula. The former shows a beautiful blue inner part, with a bright red outer halo, and the latter shows a general bluish haze permeated by bright red-orange filaments. In the bright light, apparently, the rods are at very low sensitivity but, in the dark, as time goes on they pick up their ability to see light. The variation in light intensity for which one can adapt covers a range of over a million to one. Nature does not do all this with just one kind of cell, but she passes her job from bright-light-seeing cells, the color-seeing cells, the cones, to low-intensity, dark-adapted cells, the rods.
Among the interesting consequences of this shift is, first, that there is no color, and second, that there is a difference in the relative brightness of differently colored objects. It turns out that the rods see better toward the blue than the cones do, and the cones can see, for example, deep red light, while the rods find that absolutely impossible to see. So red light is black so far as the rods are concerned. Thus two pieces of colored paper, say blue and red, in which the red might be even brighter than the blue in good light, will, in the dark, appear completely reversed. It is a very striking effect. If we are in the dark and can find a magazine or something that has colors and, before we know for sure what the colors are, we judge the lighter and darker areas, and if we then carry the magazine into the light, we may see this very remarkable shift between which was the brightest color and which was not. The phenomenon is called the Purkinje effect. In Fig. 35–3, the dashed curve represents the sensitivity of the eye in the dark, i.e., using the rods, while the solid curve represents it in the light. We see that the peak sensitivity of the rods is in the green region and that of the cones is more in the yellow region. If there is a red-colored page (red is about $650$ m$\mu$) we can see it if it is brightly lighted, but in the dark it is almost invisible. Another effect of the fact that rods take over in the dark, and that there are no rods in the fovea, is that when we look straight at something in the dark, our vision is not quite as acute as when we look to one side. A faint star or nebula can sometimes be seen better by looking a little to one side than directly at it, because we do not have sensitive rods in the middle of the fovea. Another interesting effect of the fact that the number of cones decreases as we go farther to the side of the field of view is that even in a bright light color disappears as the object goes far to one side. 
The way to test that is to look in some particular fixed direction, let a friend walk in from one side with colored cards, and try to decide what color they are before they are right in front of you. One finds that he can see that the cards are there long before he can determine the color. When doing this, it is advisable to come in from the side opposite the blind spot, because it is otherwise rather confusing to almost see the color, then not see anything, then to see the color again. Another interesting phenomenon is that the periphery of the retina is very sensitive to motion. Although we cannot see very well from the corner of our eye, if a little bug moves and we do not expect anything to be moving over there, we are immediately sensitive to it. We are all “wired up” to look for something jiggling to the side of the field.
35–3 Measuring the color sensation
Now we go to the cone vision, to the brighter vision, and we come to the question which is most characteristic of cone vision, and that is color. As we know, white light can be split by a prism into a whole spectrum of wavelengths which appear to us to have different colors; that is what colors are, of course: appearances. Any source of light can be analyzed by a grating or a prism, and one can determine the spectral distribution, i.e., the “amount” of each wavelength. A certain light may have a lot of blue, considerable red, very little yellow, and so on. That is all very precise in the sense of physics, but the question is, what color will it appear to be? It is evident that the different colors depend somehow upon the spectral distribution of the light, but the problem is to find what characteristics of the spectral distribution produce the various sensations. For example, what do we have to do to get a green color? We all know that we can simply take a piece of the spectrum which is green. But is that the only way to get green, or orange, or any other color? Is there more than one spectral distribution which produces the same apparent visual effect? The answer is, definitely yes. There is a very limited number of visual effects, in fact just a three-dimensional manifold of them, as we shall shortly see, but there is an infinite number of different curves that we can draw for the light that comes from different sources. Now the question we have to discuss is, under what conditions do different distributions of light appear as exactly the same color to the eye? The most powerful psycho-physical technique in color judgment is to use the eye as a null instrument. That is, we do not try to define what constitutes a green sensation, or to measure in what circumstances we get a green sensation, because it turns out that this is extremely complicated. Instead, we study the conditions under which two stimuli are indistinguishable. 
Then we do not have to decide whether two people see the same sensation in different circumstances, but only whether, if for one person two sensations are the same, they are also the same for another. We do not have to decide whether, when one sees something green, what it feels like inside is the same as what it feels like inside someone else when he sees something green; we do not know anything about that. To illustrate the possibilities, we may use a series of four projector lamps which have filters on them, and whose brightnesses are continuously adjustable over a wide range: one has a red filter and makes a spot of red light on the screen, the next one has a green filter and makes a green spot, the third one has a blue filter, and the fourth one is a white circle with a black spot in the middle of it. Now if we turn on some red light, and next to it put some green, we see that in the area of overlap it produces a sensation which is not what we call reddish green, but a new color, yellow in this particular case. By changing the proportions of the red and the green, we can go through various shades of orange and so forth. If we have set it for a certain yellow, we can also obtain that same yellow, not by mixing these two colors but by mixing some other ones, perhaps a yellow filter with white light, or something like that, to get the same sensation. In other words, it is possible to make various colors in more than one way by mixing the lights from various filters. What we have just discovered may be expressed analytically as follows. A particular yellow, for example, can be represented by a certain symbol $Y$, which is the “sum” of certain amounts of red-filtered light ($R$) and green-filtered light ($G$). By using two numbers, say $r$ and $g$, to describe how bright the $R$ and $G$ are, we can write a formula for this yellow: \begin{equation} \label{Eq:I:35:1} Y = rR + gG. 
\end{equation} The question is, can we make all the different colors by adding together two or three lights of different, fixed colors? Let us see what can be done in that connection. We certainly cannot get all the different colors by mixing only red and green, because, for instance, blue never appears in such a mixture. However, by putting in some blue, the central region, where all three spots overlap, may be made to appear to be a fairly nice white. By mixing the various colors and looking at this central region, we find that we can get a considerable range of colors in that region by changing the proportions, and so it is not impossible that all the colors can be made by mixing these three colored lights. We shall discuss to what extent this is true; it is in fact essentially correct, and we shall shortly see how to define the proposition better. In order to illustrate our point, we move the spots on the screen so that they all fall on top of each other, and then we try to match a particular color which appears in the annular ring made by the fourth lamp. What we once thought was “white” coming from the fourth lamp now appears yellowish. We may try to match that by adjusting the red and green and blue as best we can by a kind of trial and error, and we find that we can approach rather closely this particular shade of “cream” color. So it is not hard to believe that we can make all colors. We shall try to make yellow in a moment, but before we do that, there is one color that might be very hard to make. People who give lectures on color make all the “bright” colors, but they never make brown, and it is hard to recall ever having seen brown light. As a matter of fact, this color is never used for any stage effect, one never sees a spotlight with brown light; so we think it might be impossible to make brown.
In order to find out whether it is possible to make brown, we point out that brown light is merely something that we are not used to seeing without its background. As a matter of fact, we can make it by mixing some red and yellow. To prove that we are looking at brown light, we merely increase the brightness of the annular background against which we see the very same light, and we see that that is, in fact, what we call brown! Brown is always a dark color next to a lighter background. We can easily change the character of the brown. For example, if we take some green out we get a reddish brown, apparently a chocolaty reddish brown, and if we put more green into it, in proportion, we get that horrible color which all the uniforms of the Army are made of, but the light from that color is not so horrible by itself; it is a yellowish green, but seen against a light background. Now we put a yellow filter in front of the fourth light and try to match that. (The intensity must of course be within the range of the various lamps; we cannot match something which is too bright, because we do not have enough power in the lamp.) But we can match the yellow; we use a green and red mixture, and put in a touch of blue to make it even more perfect. Perhaps we are ready to believe that, under good conditions, we can make a perfect match of any given color. Now let us discuss the laws of color mixture. In the first place, we found that different spectral distributions can produce the same color; next, we saw that “any” color can be made by adding together three special colors, red, blue, and green. The most interesting feature of color mixing is this: if we have a certain light, which we may call $X$, and if it appears indistinguishable from $Y$, to the eye (it may be a different spectral distribution, but it appears indistinguishable), we call these colors “equal,” in the sense that the eye sees them as equal, and we write \begin{equation} \label{Eq:I:35:2} X=Y.
\end{equation} Here is one of the great laws of color: if two spectral distributions are indistinguishable, and we add to each one a certain light, say $Z$ (if we write $X + Z$, this means that we shine both lights on the same patch), and then we take $Y$ and add the same amount of the same other light, $Z$, the new mixtures are also indistinguishable: \begin{equation} \label{Eq:I:35:3} X+Z=Y+Z. \end{equation} We have just matched our yellow; if we now shine pink light on the whole thing, it will still match. So adding any other light to the matched lights leaves a match. In other words, we can summarize all these color phenomena by saying that once we have a match between two colored lights, seen next to each other in the same circumstances, then this match will remain, and one light can be substituted for the other light in any other color mixing situation. In fact, it turns out, and it is very important and interesting, that this matching of the color of lights is not dependent upon the characteristics of the eye at the moment of observation: we know that if we look for a long time at a bright red surface, or a bright red light, and then look at a white paper, it looks greenish, and other colors are also distorted by our having looked so long at the bright red. If we now have a match between, say, two yellows, and we look at them and make them match, then we look at a bright red surface for a long time, and then turn back to the yellow, it may not look yellow any more; I do not know what color it will look, but it will not look yellow. Nevertheless the yellows will still look matched, and so, as the eye adapts to various levels of intensity, the color match still works, with the obvious exception of when we go into the region where the intensity of the light gets so low that we have shifted from cones to rods; then the color match is no longer a color match, because we are using a different system. 
The second principle of color mixing of lights is this: any color at all can be made from three different colors, in our case, red, green, and blue lights. By suitably mixing the three together we can make anything at all, as we demonstrated with our two examples. Further, these laws are very interesting mathematically. For those who are interested in the mathematics of the thing, it turns out as follows. Suppose that we take our three colors, which were red, green, and blue, but label them $A$, $B$, and $C$, and call them our primary colors. Then any color could be made by certain amounts of these three: say an amount $a$ of color $A$, an amount $b$ of color $B$, and an amount $c$ of color $C$ makes $X$: \begin{equation} \label{Eq:I:35:4} X=aA+bB+cC. \end{equation} Now suppose another color $Y$ is made from the same three colors: \begin{equation} \label{Eq:I:35:5} Y=a'A+b'B+c'C. \end{equation} Then it turns out that the mixture of the two lights (it is one of the consequences of the laws that we have already mentioned) is obtained by taking the sum of the components of $X$ and $Y$: \begin{equation} \label{Eq:I:35:6} Z=X+Y=(a+a')A+(b+b')B+(c+c')C. \end{equation} It is just like the mathematics of the addition of vectors, where $(a,b,c)$ are the components of one vector, and $(a',b',c')$ are those of another vector, and the new light $Z$ is then the “sum” of the vectors. This subject has always appealed to physicists and mathematicians. In fact, Schrödinger wrote a wonderful paper on color vision in which he developed this theory of vector analysis as applied to the mixing of colors.1 Now a question is, what are the correct primary colors to use? There is no such thing as “the” correct primary colors for the mixing of lights.
There may be, for practical purposes, three paints that are more useful than others for getting a greater variety of mixed pigments, but we are not discussing that matter now. Any three differently colored lights whatsoever2 can always be mixed in the correct proportion to produce any color whatsoever. Can we demonstrate this fantastic fact? Instead of using red, green, and blue, let us use red, blue, and yellow in our projector. Can we use red, blue, and yellow to make, say, green? By mixing these three colors in various proportions, we get quite an array of different colors, ranging over quite a spectrum. But as a matter of fact, after a lot of trial and error, we find that nothing ever looks like green. The question is, can we make green? The answer is yes. How? If we project some red onto the green, we can then make a match with a certain mixture of yellow and blue! So we have matched them, except that we had to cheat by putting the red on the other side. But since we have some mathematical sophistication, we can appreciate that what we really showed was not that $X$ could always be made, say, of red, blue, and yellow, but by putting the red on the other side we found that red plus $X$ could be made out of blue and yellow. Putting it on the other side of the equation, we can interpret that as a negative amount, so if we allow that the coefficients in equations like (35.4) can be both positive and negative, and if we interpret negative amounts to mean that we have to add those to the other side, then any color can be matched by any three, and there is no such thing as “the” fundamental primaries. We may ask whether there are three colors that come only with positive amounts for all mixings. The answer is no. Every set of three primaries requires negative amounts for some colors, and therefore there is no unique way to define a primary.
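The appearance of a negative coefficient can be seen by solving the matching equations as a small linear system. The numbers below are idealized, not measured: we express each light in an assumed $(r,g,b)$ reference basis, take an idealized yellow equal to red plus green, and ask how to match green from red, blue, and yellow.

```python
# Solve c1*p1 + c2*p2 + c3*p3 = target for the coefficients of Eq. (35.4),
# allowing them to be negative. Idealized, hypothetical primaries.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def match_coefficients(p1, p2, p3, target):
    """Cramer's rule: the primaries are the columns of the matrix."""
    m = [[p1[i], p2[i], p3[i]] for i in range(3)]
    d = det3(m)
    coeffs = []
    for col in range(3):
        mc = [row[:] for row in m]
        for i in range(3):
            mc[i][col] = target[i]
        coeffs.append(det3(mc) / d)
    return coeffs

red    = (1.0, 0.0, 0.0)   # idealized primaries in an (r, g, b) basis
blue   = (0.0, 0.0, 1.0)
yellow = (1.0, 1.0, 0.0)   # idealized yellow = red + green
green  = (0.0, 1.0, 0.0)

c = match_coefficients(red, blue, yellow, green)
print(c)  # red coefficient is -1: green plus red matches the yellow
          # (with these idealized numbers the blue amount happens to be zero)
```

The negative red coefficient is precisely the "red on the other side" of the demonstration above.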
In elementary books they are said to be red, green, and blue, but that is merely because with these a wider range of colors is available without minus signs for some of the combinations.
35 Color Vision

35–4 The chromaticity diagram
Now let us discuss the combination of colors on a mathematical level as a geometrical proposition. If any one color is represented by Eq. (35.4), we can plot it as a vector in space by plotting along three axes the amounts $a$, $b$, and $c$, and then a certain color is a point. If another color is $a'$, $b'$, $c'$, that color is located somewhere else. The sum of the two, as we know, is the color which comes from adding these as vectors. We can simplify this diagram and represent everything on a plane by the following observation: if we had a certain color light, and merely doubled $a$ and $b$ and $c$, that is, if we make them all stronger in the same ratio, it is the same color, but brighter. So if we agree to reduce everything to the same light intensity, then we can project everything onto a plane, and this has been done in Fig. 35–4. It follows that any color obtained by mixing a given two in some proportion will lie somewhere on a line drawn between the two points. For instance, a fifty-fifty mixture would appear halfway between them, and $1/4$ of one and $3/4$ of the other would appear $1/4$ of the way from one point to the other, and so on. If we use a blue, a green, and a red as primaries, we see that all the colors that we can make with positive coefficients are inside the dotted triangle, which contains almost all of the colors that we can ever see, because all the colors that we can ever see are enclosed in the oddly shaped area bounded by the curve. Where did this area come from? Once somebody made a very careful match of all the colors that we can see against three special ones. But we do not have to check all colors that we can see; we only have to check the pure spectral colors, the lines of the spectrum. Any light can be considered as a sum of various positive amounts of various pure spectral colors—pure from the physical standpoint. A given light will have a certain amount of red, yellow, blue, and so on—spectral colors.
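The projection onto a plane, and the fact that mixtures lie on the line between the two component points, can be checked numerically. A sketch with illustrative amounts: we normalize $(a,b,c)$ so the total is one and keep two of the normalized coordinates as the point in the plane.

```python
# Chromaticity-style projection: scale (a, b, c) so a+b+c = 1 and keep
# two coordinates. A mixture's point lands on the segment between the
# two component points, at a position weighted by total intensity.

def chromaticity(color):
    a, b, c = color
    s = a + b + c
    return (a / s, b / s)

def mix(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

P = (2.0, 1.0, 1.0)          # illustrative color vectors
Q = (1.0, 3.0, 4.0)
M = mix(P, Q)                # (3.0, 4.0, 5.0)

p, q, m = chromaticity(P), chromaticity(Q), chromaticity(M)

# m lies on the segment from p to q at parameter t = |Q| / (|P| + |Q|):
t = sum(Q) / (sum(P) + sum(Q))                     # 8/12 = 2/3
interp = tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))
print(m, interp)  # the two points coincide (up to rounding)
```

So a fifty-fifty mixture of two equally intense lights sits halfway between the two points, as stated above.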
So if we know how much of each of our three chosen primaries is needed to make each of these pure components, we can calculate how much of each is needed to make our given color. So, if we find out what the color coefficients of all the spectral colors are for any given three primary colors, then we can work out the whole color mixing table. An example of such experimental results for mixing three lights together is given in Fig. 35–5. This figure shows the amount of each of three different particular primaries, red, green and blue, which is required to make each of the spectral colors. Red is at the left end of the spectrum, yellow is next, and so on, all the way to blue. Notice that at some points minus signs are necessary. It is from such data that it is possible to locate the position of all of the colors on a chart, where the $x$- and the $y$-coordinates are related to the amounts of the different primaries that are used. That is the way that the curved boundary line has been found. It is the locus of the pure spectral colors. Now any other color can be made by adding spectral lines, of course, and so we find that anything that can be produced by connecting one part of this curve to another is a color that is available in nature. The straight line connects the extreme violet end of the spectrum with the extreme red end. It is the locus of the purples. Inside the boundary are colors that can be made with lights, and outside it are colors that cannot be made with lights, and nobody has ever seen them (except, possibly, in after-images!).
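The bookkeeping described here is a weighted sum: if we know the three primary coefficients for each pure spectral color (the curves of Fig. 35–5), a light that is a sum of spectral components is matched by the spectrum-weighted sum of those coefficients. The tables below are made-up illustrative numbers, not real matching data; note the negative entry, as in the measured curves.

```python
# Hypothetical coefficient table: wavelength (nm) -> (red, green, blue)
# amounts matching one unit of that pure spectral color.
matching = {
    450: (-0.1, 0.1, 1.0),
    550: ( 0.2, 1.0, 0.1),
    650: ( 1.0, 0.1, 0.0),
}

# Spectrum of the given light: amount of each pure spectral component.
spectrum = {450: 0.5, 550: 2.0, 650: 1.0}

def tristimulus(spectrum, matching):
    """Total (red, green, blue) needed to match the whole light."""
    totals = [0.0, 0.0, 0.0]
    for wavelength, amount in spectrum.items():
        for i, coeff in enumerate(matching[wavelength]):
            totals[i] += amount * coeff
    return tuple(totals)

T = tristimulus(spectrum, matching)
print(T)  # roughly (1.35, 2.15, 0.7) for these made-up numbers
```

With a finer table this becomes an integral of the spectral distribution against the three coefficient curves, which is how the whole color-mixing table is worked out.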
35–5 The mechanism of color vision
Now the next aspect of the matter is the question, why do colors behave in this way? The simplest theory, proposed by Young and Helmholtz, supposes that in the eye there are three different pigments which receive the light and that these have different absorption spectra, so that one pigment absorbs strongly, say, in the red, another absorbs strongly in the blue, another absorbs in the green. Then when we shine a light on them we will get different amounts of absorptions in the three regions, and these three pieces of information are somehow maneuvered in the brain or in the eye, or somewhere, to decide what the color is. It is easy to demonstrate that all of the rules of color mixing would be a consequence of this proposition. There has been considerable debate about the thing because the next problem, of course, is to find the absorption characteristics of each of the three pigments. It turns out, unfortunately, that because we can transform the color coordinates in any manner we want to, we can only find all kinds of linear combinations of absorption curves by the color-mixing experiments, but not the curves for the individual pigments. People have tried in various ways to obtain a specific curve which does describe some particular physical property of the eye. One such curve is called a brightness curve, demonstrated in Fig. 35–3. In this figure are two curves, one for eyes in the dark, the other for eyes in the light; the latter is the cone brightness curve. This is measured by finding what is the smallest amount of colored light we need in order to be able to just see it. This measures how sensitive the eye is in different spectral regions. There is another very interesting way to measure this. If we take two colors and make them appear in an area, by flickering back and forth from one to the other, we see a flicker if the frequency is too low. 
However, as the frequency increases, the flicker will ultimately disappear at a certain frequency that depends on the brightness of the light, let us say at $16$ repetitions per second. Now if we adjust the brightness or the intensity of one color against the other, there comes an intensity where the flicker at 16 cycles disappears. To get flicker with the brightness so adjusted, we have to go to a much lower frequency in order to see a flicker of the color. So, we get what we call a flicker of the brightness at a higher frequency and, at a lower frequency, a flicker of the color. It is possible to match two colors for “equal brightness” by this flicker technique. The results are almost, but not exactly, the same as those obtained by measuring the threshold sensitivity of the eye for seeing weak light by the cones. Most workers use the flicker system as a definition of the brightness curve. Now, if there are three color-sensitive pigments in the eye, the problem is to determine the shape of the absorption spectrum of each one. How? We know there are people who are color blind—eight percent of the male population, and one-half of one percent of the female population. Most of the people who are color blind or abnormal in color vision have a different degree of sensitivity than others to a variation of color, but they still need three colors to match. However, there are some who are called dichromats, for whom any color can be matched using only two primary colors. The obvious suggestion, then, is to say that they are missing one of the three pigments. If we can find three kinds of color-blind dichromats who have different color-mixing rules, one kind should be missing the red, another the green, and another the blue pigmentation. By measuring all these types we can determine the three curves! 
It turns out that there are three types of dichromatic color blindness; there are two common types and a third very rare type, and from these three it has been possible to deduce the pigment absorption spectra. Figure 35–6 shows the color mixing of a particular type of color-blind person called a deuteranope. For him, the loci of constant colors are not points, but certain lines, along each of which the color appears to him to be the same. If the theory that he is missing one of the three pieces of information is right, all these lines should intersect at a point. If we carefully measure on this graph, they do intersect perfectly. Obviously, therefore, this has been made by a mathematician and does not represent real data! As a matter of fact, if we look at the latest paper with real data, it turns out that in the graph of Fig. 35–6, the point of focus of all the lines is not exactly at the right place. Using the lines in the above figure, we cannot find reasonable spectra; we need negative and positive absorptions in different regions. But using the new data of Yustova, it turns out that each of the absorption curves is everywhere positive. Figure 35–7 shows a different kind of color blindness, that of the protanope, which has a focus near the red end of the boundary curve. Yustova gets approximately the same position in this case. Using the three different kinds of color blindness, the three pigment response curves have finally been determined, and are shown in Fig. 35–8. Finally? Perhaps. There is a question as to whether the three-pigment idea is right, whether color blindness results from lack of one pigment, and even whether the color-mix data on color blindness are right. Different workers get different results. This field is still very much under development.
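The test being described, whether all the confusion lines of a dichromat pass through one point, can be phrased as a small least-squares problem: find the point minimizing the summed squared distance to the lines. This is a sketch with three made-up lines, deliberately constructed to pass through an assumed focus at $(0.3, 0.4)$; it is not the deuteranope or protanope data.

```python
import math

def best_intersection(lines):
    """Least-squares point for lines (a, b, c) with a*x + b*y = c, a^2+b^2 = 1."""
    saa = sab = sbb = sac = sbc = 0.0
    for a, b, c in lines:
        saa += a*a; sab += a*b; sbb += b*b
        sac += a*c; sbc += b*c
    det = saa*sbb - sab*sab          # normal equations of the 2x2 system
    x = (sac*sbb - sab*sbc) / det
    y = (saa*sbc - sab*sac) / det
    return x, y

def line_through(p, q):
    """Unit-normal form (a, b, c) of the line through points p and q."""
    a, b = q[1] - p[1], p[0] - q[0]
    n = math.hypot(a, b)
    a, b = a / n, b / n
    return a, b, a*p[0] + b*p[1]

focus = (0.3, 0.4)   # hypothetical copunctal point
lines = [line_through(focus, q) for q in [(0.8, 0.1), (0.1, 0.9), (0.7, 0.7)]]

x, y = best_intersection(lines)
print(round(x, 6), round(y, 6))  # 0.3 0.4
```

With real measured lines the residual at the best point tells how nearly they intersect; that is the check which showed the idealized graph of Fig. 35–6 was too perfect.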
35–6 Physiochemistry of color vision
Now, what about checking these curves against actual pigments in the eye? The pigments that can be obtained from a retina consist mainly of a pigment called visual purple. The most remarkable features of this are, first, that it is in the eye of almost every vertebrate animal, and second, that its response curve fits beautifully with the sensitivity of the eye, as seen in Fig. 35–9, in which are plotted on the same scale the absorption of visual purple and the sensitivity of the dark-adapted eye. This pigment is evidently the pigment that we see with in the dark: visual purple is the pigment for the rods, and it has nothing to do with color vision. This fact was discovered in 1877. Even today it can be said that the color pigments of the cones have never been obtained in a test tube. In 1958 it could be said that the color pigments had never been seen at all. But since that time, two of them have been detected by Rushton by a very simple and beautiful technique. The trouble is, presumably, that since the eye is so weakly sensitive to bright light compared with light of low intensity, it needs a lot of visual purple to see with, but not much of the color pigments for seeing colors. Rushton’s idea is to leave the pigment in the eye, and measure it anyway. What he does is this. There is an instrument called an ophthalmoscope for sending light into the eye through the lens and then focusing the light that comes back out. With it one can measure how much is reflected. So one measures the reflection coefficient of light which has gone twice through the pigment (reflected by a back layer in the eyeball, and coming out through the pigment of the cone again). Nature is not always so beautifully designed. The cones are interestingly designed so that the light that comes into the cone bounces around and works its way down into the little sensitive points at the apex. 
The light goes right down into the sensitive point, bounces at the bottom and comes back out again, having traversed a considerable amount of the color-vision pigment; also, by looking at the fovea, where there are no rods, one is not confused by visual purple. But the color of the retina was seen a long time ago: it is a sort of orangey pink; then there are all the blood vessels, and the color of the material at the back, and so on. How do we know when we are looking at the pigment? Answer: First we take a color-blind person, who has fewer pigments and for whom it is therefore easier to make the analysis. Second, the various pigments, like visual purple, have an intensity change when they are bleached by light; when we shine light on them they change their concentration. So, while measuring the absorption spectrum of the eye, Rushton shone another beam into the same eye, which changes the concentration of the pigment; he measured the change in the spectrum, and the difference, of course, has nothing to do with the amount of blood or the color of the reflecting layers, and so on, but only with the pigment, and in this manner Rushton obtained a curve for the pigment of the protanope eye, which is given in Fig. 35–10. The second curve in Fig. 35–10 is a curve obtained with a normal eye. This was obtained by taking a normal eye and, having already determined what one pigment was, bleaching the other one in the red where the first one is insensitive. Red light has no effect on the protanope eye, but does in the normal eye, and thus one can obtain the curve for the missing pigment. The shape of one curve fits beautifully with Yustova’s green curve, but the red curve is a little bit displaced. So perhaps we are getting on the right track. Or perhaps not—the latest work with deuteranopes does not show any definite pigment missing. Color is not a question of the physics of the light itself.
Color is a sensation, and the sensation for different colors is different in different circumstances. For instance, if we have a pink light, made by superimposing crossing beams of white light and red light (all we can make with white and red is pink, obviously), we may show that white light may appear blue. If we place an object in the beams, it casts two shadows—one illuminated by the white light alone and the other by the red. For most people the “white” shadow of an object looks blue, but if we keep expanding this shadow until it covers the entire screen, we see that it suddenly appears white, not blue! We can get other effects of the same nature by mixing red, yellow, and white light. Red, yellow, and white light can produce only orangey yellows, and so on. So if we mix such lights roughly equally, we get only orange light. Nevertheless, by casting different kinds of shadows in the light, with various overlaps of colors, one gets quite a series of beautiful colors which are not in the light themselves (that is only orange), but in our sensations. We clearly see many different colors that are quite unlike the “physical” ones in the beam. It is very important to appreciate that a retina is already “thinking” about the light; it is comparing what it sees in one region with what it sees in another, although not consciously. What we know of how it does that is the subject of the next chapter.

Bibliography

Committee on Colorimetry, Optical Society of America, The Science of Color, Thomas Y. Crowell Company, New York, 1953.

Hecht, S., S. Shlaer, and M. H. Pirenne, “Energy, Quanta, and Vision,” Journal of General Physiology, 1942, 25, 819–840.

Morgan, Clifford, and Eliot Stellar, Physiological Psychology, 2nd ed., McGraw-Hill Book Company, Inc., 1950.

Nuberg, N. D., and E. N. Yustova, “Researches on Dichromatic Vision and the Spectral Sensitivity of the Receptors of Trichromats,” presented at Symposium No. 8, Visual Problems of Colour, Vol. II, National Physical Laboratory, Teddington, England, September 1957. Published by Her Majesty’s Stationery Office, London, 1958.

Rushton, W. A., “The Cone Pigments of the Human Fovea in Colour Blind and Normal,” presented at Symposium No. 8, Visual Problems of Colour, Vol. I, National Physical Laboratory, Teddington, England, September 1957. Published by Her Majesty’s Stationery Office, London, 1958.

Woodworth, Robert S., Experimental Psychology, Henry Holt and Company, New York, 1938. Revised edition, 1954, by Robert S. Woodworth and H. Schlosberg.
36 Mechanisms of Seeing

36–1 The sensation of color
In discussing the sense of sight, we have to realize that (outside of a gallery of modern art!) one does not see random spots of color or spots of light. When we look at an object we see a man or a thing; in other words, the brain interprets what we see. How it does that, no one knows, and it does it, of course, at a very high level. Although we evidently do learn to recognize what a man looks like after much experience, there are a number of features of vision which are more elementary but which also involve combining information from different parts of what we see. To help us understand how we make an interpretation of an entire image, it is worthwhile to study the earliest stages of the putting together of information from the different retinal cells. In the present chapter we shall concentrate mainly on that aspect of vision, although we shall also mention a number of side issues as we go along. An example of the fact that we have an accumulation, at a very elementary level, of information from several parts of the eye at the same time, beyond our voluntary control or ability to learn, was that blue shadow which was produced by white light when both white and red were shining on the same screen. This effect at least involves the knowledge that the background of the screen is pink, even though, when we are looking at the blue shadow, it is only “white” light coming into a particular spot in the eye; somewhere, pieces of information have been put together. The more complete and familiar the context is, the more the eye will make corrections for peculiarities. In fact, Land has shown that if we mix that apparent blue and the red in various proportions, by using two photographic transparencies with absorption in front of the red and the white in different proportions, it can be made to represent a real scene, with real objects, rather faithfully. 
In this case we get a lot of intermediate apparent colors too, analogous to what we would get by mixing red and blue-green; it seems to be an almost complete set of colors, but if we look very hard at them, they are not so very good. Even so, it is surprising how much can be obtained from just red and white. The more the scene looks like a real situation, the more one is able to compensate for the fact that all the light is actually nothing but pink! Another example is the appearance of “colors” in a black-and-white rotating disc, whose black and white areas are as shown in Fig. 36–1. When the disc is rotated, the variations of light and dark at any one radius are exactly the same; it is only the background that is different for the two kinds of “stripes.” Yet one of the “rings” appears colored with one color and the other with another.1 No one yet understands the reason for those colors, but it is clear that information is being put together at a very elementary level, in the eye itself, most likely. Almost all present-day theories of color vision agree that the color-mixing data indicate that there are only three pigments in the cones of the eye, and that it is the spectral absorption in those three pigments that fundamentally produces the color sense. But the total sensation that is associated with the absorption characteristics of the three pigments acting together is not necessarily the sum of the individual sensations. We all agree that yellow simply does not seem to be reddish green; in fact it might be a tremendous surprise to most people to discover that light is, in fact, a mixture of colors, because presumably the sensation of light is due to some other process than a simple mixture like a chord in music, where the three notes are there at the same time and if we listen hard we can hear them individually. We cannot look hard and see the red and the green. 
The earliest theories of vision said that there are three pigments and three kinds of cones, each kind containing one pigment; that a nerve runs from each cone to the brain, so that the three pieces of information are carried to the brain; and then in the brain, anything can happen. This, of course, is an incomplete idea: it does no good to discover that the information is carried along the optic nerve to the brain, because we have not even started to solve the problem. We must ask more basic questions: Does it make any difference where the information is put together? Is it important that it be carried right up into the brain in the optic nerve, or could the retina do some analysis first? We have seen a picture of the retina as an extremely complicated thing with lots of interconnections (Fig. 35–2) and it might make some analyses. As a matter of fact, people who study anatomy and the development of the eye have shown that the retina is, in fact, the brain: in the development of the embryo, a piece of the brain comes out in front, and long fibers grow back, connecting the eyes to the brain. The retina is organized in just the way the brain is organized and, as someone has beautifully put it, “The brain has developed a way to look out upon the world.” The eye is a piece of brain that is touching light, so to speak, on the outside. So it is not at all unlikely that some analysis of the color has already been made in the retina. This gives us a very interesting opportunity. None of the other senses involves such a large amount of calculation, so to speak, before the signal gets into a nerve that one can make measurements on. The calculations for all the rest of the senses usually happen in the brain itself, where it is very difficult to get at specific places to make measurements, because there are so many interconnections. 
Here, with the visual sense, we have the light, three layers of cells making calculations, and the results of the calculations being transmitted through the optic nerve. So we have the first chance to observe physiologically how, perhaps, the first layers of the brain work in their first steps. It is thus of double interest, not simply interesting for vision, but interesting to the whole problem of physiology. The fact that there are three pigments does not mean that there must be three kinds of sensations. One of the other theories of color vision has it that there are really opposing color schemes (Fig. 36–2). That is, one of the nerve fibers carries a lot of impulses if there is yellow being seen, and less than usual for blue. Another nerve fiber carries green and red information in the same way, and another, white and black. In other words, in this theory someone has already started to make a guess as to the system of wiring, the method of calculation. The problems we are trying to solve by guessing at these first calculations are questions about the apparent colors that are seen on a pink background, what happens when the eye is adapted to different colors, and also the so-called psychological phenomena. The psychological phenomena are of the nature, for instance, that white does not “feel” like red and yellow and blue, and this theory was advanced because the psychologists say that there are four apparent pure colors: “There are four stimuli which have a remarkable capacity to evoke psychologically simple blue, yellow, green, and red hues respectively. Unlike sienna, magenta, purple, or most of the discriminable colors, these simple hues are unmixed in the sense that none partakes of the nature of the other; specifically, blue is not yellowish, reddish, or greenish, and so on; these are psychologically primary hues.” That is a psychological fact, so-called. 
To find out from what evidence this psychological fact was deduced, we must search very hard indeed through all the literature: In the modern literature all we find on the subject are repeats of the same statement, or of one by a German psychologist, who uses as one of his authorities Leonardo da Vinci, who, of course, we all know was a great artist. He says, “Leonardo thought there were five colors.” Then, looking still further, we find, in a still older book, the evidence for the subject. The book says something like this: “Purple is reddish-blue, orange is reddish-yellow, but can red be seen as purplish-orange? Are not red and yellow more unitary than purple or orange? The average person, asked to state which colors are unitary, names red, yellow, and blue, these three, and some observers add a fourth, green. Psychologists are accustomed to accept the four as salient hues.” So that is the situation in the psychological analysis of this matter: if everybody says there are three, and somebody says there are four, and they want it to be four, it will be four. That shows the difficulty with psychological researches. It is clear that we have such feelings, but it is very difficult to obtain much information about them. So the other direction to go is the physiological direction, to find out experimentally what actually happens in the brain, the eye, the retina, or wherever, and perhaps to discover that some combinations of impulses from various cells move along certain nerve fibers. Incidentally, primary pigments do not have to be in separate cells; one could have cells in which are mixtures of the various pigments, cells with the red and the green pigments, cells with all three (the information of all three is then white information), and so on. There are many ways of hooking the system up, and we have to find out which way nature has used. 
It would be hoped, ultimately, that when we understand the physiological connections we will have a little bit of understanding of some of those aspects of the psychology, so we look in that direction.
36–2 The physiology of the eye
We begin by talking not only about color vision, but about vision in general, just to remind ourselves about the interconnections in the retina, shown in Fig. 35–2. The retina is really like the surface of the brain. Although the actual picture through a microscope is a little more complicated looking than this somewhat schematized drawing, by careful analysis one can see all these interconnections. There is no question that one part of the surface of the retina is connected to other parts, and that the information that comes out on the long axons, which produce the optic nerve, is a combination of information from many cells. There are three layers of cells in the succession of function: the retinal cells, which are the ones that the light affects; intermediate cells, which take information from a single retinal cell or a few of them and give it out again to several cells in a third layer; and the cells of that third layer, which carry the information to the brain. There are all kinds of cross connections between cells in the layers. We now turn to some aspects of the structure and performance of the eye (see Fig. 35–1). The focusing of the light is accomplished mainly by the cornea, by the fact that it has a curved surface which “bends” the light. This is why we cannot see clearly under water: there we do not have enough difference between the index of the cornea, which is $1.37$, and that of the water, which is $1.33$. Behind the cornea is water, practically, with an index of $1.33$, and behind that is a lens which has a very interesting structure: it is a series of layers, like an onion, except that it is all transparent, and it has an index of $1.40$ in the middle and $1.38$ at the outside. (It would be nice if we could make optical glass in which we could adjust the index throughout, for then we would not have to curve it as much as we do when we have a uniform index.) Furthermore, the shape of the cornea is not that of a sphere.
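The index figures quoted here are enough for a rough estimate of why the eye cannot focus under water. The power of a single refracting surface is $(n_2-n_1)/R$ diopters with $R$ in meters; the corneal radius of curvature of about 8 mm below is an assumed round number, not a figure from the text.

```python
# Rough estimate of the cornea's refracting power in air versus in water.
# Power of one spherical refracting surface: (n2 - n1) / R, in diopters.

R = 0.008                    # corneal radius of curvature in meters (assumed)
n_cornea = 1.37              # index of the cornea (from the text)
n_air, n_water = 1.00, 1.33  # indices of air and water

power_in_air = (n_cornea - n_air) / R      # roughly 46 diopters
power_in_water = (n_cornea - n_water) / R  # roughly 5 diopters

print(power_in_air, power_in_water)
```

Under water the index difference, and hence almost all of the cornea's focusing power, is lost, which is why everything looks blurred without a mask.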
A spherical lens has a certain amount of spherical aberration. The cornea is “flatter” at the outside than is a sphere, in just such a manner that the spherical aberration is less for the cornea than it would be if we put a spherical lens in there! The light is focused by the cornea-lens system onto the retina. As we look at things that are closer and farther away, the lens tightens and loosens and changes the focus to adjust for the different distances. To adjust for the total amount of light there is the iris, which is what we call the color of the eye, brown or blue, depending on who it is; as the amount of light increases and decreases, the iris moves in and out. Let us now look at the neural machinery for controlling the accommodation of the lens, the motion of the eye, the muscles which turn the eye in the socket, and the iris, shown schematically in Fig. 36–3. Of all the information that comes out of the optic nerve $A$, the great majority is divided into one of two bundles (which we will talk about later) and thence to the brain. But there are a few fibers, of interest to us now, which do not run directly to the visual cortex of the brain where we “see” the images, but instead go into the mid-brain $H$. These are the fibers which measure the average light and make adjustment for the iris; or, if the image looks foggy, they try to correct the lens; or, if there is a double image, they try to adjust the eye for binocular vision. At any rate, they go through the mid-brain and feed back into the eye. At $K$ are the muscles which run the accommodation of the lens, and at $L$ another one that runs into the iris. The iris has two muscle systems. One is a circular muscle $L$ which, when it is excited, pulls in and closes down the iris; it acts very rapidly and the nerves are directly connected from the brain through short axons into the iris. 
The opposite muscles are radial muscles, so that, when the things get dark and the circular muscle relaxes, these radial muscles pull out. Here we have, as in many places in the body, a pair of muscles which work in opposite directions, and in almost every such case the nerve systems which control the two are very delicately adjusted, so that when signals are sent in to tighten one, signals are automatically sent in to loosen the other. The iris is a peculiar exception: the nerves which make the iris contract are the ones we have already described, but the nerves which make the iris expand come out from no one knows exactly where, go down into the spinal cord back of the chest, into the thoracic sections, out of the spinal cord, up through the neck ganglia, and all the way around and back up into the head in order to run the other end of the iris. In fact, the signal goes through a completely different nervous system, not the central nervous system at all, but the sympathetic nervous system, so it is a very strange way of making things go. We have already emphasized another strange thing about the eye, that the light-sensitive cells are on the wrong side, so that the light has to go through several layers of other cells before it gets to the receptors—it is built inside out! So some of the features are wonderful and some are apparently stupid. Figure 36–4 shows the connections of the eye to the part of the brain which is most directly concerned with the visual process. The optic nerve fibers run into a certain area just beyond $D$, called the lateral geniculate, whereupon they run out to a section of the brain called the visual cortex. Notice that some of the fibers from each eye are sent over to the other side of the brain, so the picture formed is incomplete. The optic nerves from the left side of the right eye run across the optic chiasma $B$, while the ones on the left side of the left eye come around and go this same way. 
So the left side of the brain receives all the information which comes from the left side of the eyeball of each eye, i.e., on the right side of the visual field, while the right side of the brain sees the left side of the visual field. This is the manner in which the information from each of the two eyes is put together in order to tell how far away things are. This is the system of binocular vision. The connections between the retina and the visual cortex are interesting. If a spot in the retina is excised or destroyed in any way, then the whole fiber will die, and we can thereby find out where it is connected. It turns out that, essentially, the connections are one to one—for each spot in the retina there is one spot in the visual cortex—and spots that are very close together in the retina are very close together in the visual cortex. So the visual cortex still represents the spatial arrangement of the rods and cones, but of course much distorted. Things which are in the center of the field, which occupy a very small part of the retina, are expanded over many, many cells in the visual cortex. It is clear that it is useful to have things which are originally close together, still close together. The most remarkable aspect of the matter, however, is the following. The place where one would think it would be most important to have things close together would be right in the middle of the visual field. Believe it or not, the up-and-down line in our visual field as we look at something is of such a nature that the information from all the points on the right side of that line is going into the left side of the brain, and information from the points on the left side is going into the right side of the brain, and the way this area is made, there is a cut right down through the middle, so that the things that are very close together right in the middle are very far apart in the brain! 
Somehow, the information has to go from one side of the brain to the other through some other channels, which is quite surprising. The question of how this network ever gets “wired” together is very interesting. The problem of how much is already wired and how much is learned is an old one. It used to be thought long ago that perhaps it does not have to be wired carefully at all, it is only just roughly interconnected, and then, by experience, the young child learns that when a thing is “up there” it produces some sensation in the brain. (Doctors always tell us what the young child “feels,” but how do they know what a child feels at the age of one?) The child, at the age of one, supposedly sees that an object is “up there,” gets a certain sensation, and learns to reach “there,” because when he reaches “here,” it does not work. That approach probably is not correct, because we already see that in many cases there are these special detailed interconnections. More illuminating are some most remarkable experiments done with a salamander. (Incidentally, with the salamander there is a direct crossover connection, without the optic chiasma, because the eyes are on each side of the head and have no common area. Salamanders do not have binocular vision.) The experiment is this. We can cut the optic nerve in a salamander and the nerve will grow out from the eyes again. Thousands and thousands of cell fibers will thus re-establish themselves. Now, in the optic nerve the fibers do not stay adjacent to each other—it is like a great, sloppily made telephone cable, all the fibers twisting and turning, but when it gets to the brain they are all sorted out again. When we cut the optic nerve of the salamander, the interesting question is, will it ever get straightened out? The answer is remarkable: yes. If we cut the optic nerve of the salamander and it grows back, the salamander has good visual acuity again. 
However, if we cut the optic nerve and turn the eye upside down and let it grow back again, it has good visual acuity all right, but it has a terrible error: when the salamander sees a fly “up here,” it jumps at it “down there,” and it never learns. Therefore there is some mysterious way by which the thousands and thousands of fibers find their right places in the brain. This problem of how much is wired in, and how much is not, is an important problem in the theory of the development of creatures. The answer is not known, but is being studied intensively. The same experiment in the case of a goldfish shows that there is a terrible knot, like a great scar or complication, in the optic nerve where we cut it, but in spite of all this the fibers grow back to their right places in the brain. In order to do this, as they grow into the old channels of the optic nerve they must make several decisions about the direction in which they should grow. How do they do this? There seem to be chemical clues that different fibers respond to differently. Think of the enormous number of growing fibers, each of which is an individual differing in some way from its neighbors; in responding to whatever the chemical clues are, it responds in a unique enough way to find its proper place for ultimate connection in the brain! This is an interesting—a fantastic—thing. It is one of the great recently discovered phenomena of biology and is undoubtedly connected to many older unsolved problems of growth, organization, and development of organisms, and particularly of embryos. One other interesting phenomenon has to do with the motion of the eye. The eyes must be moved in order to make the two images coincide in different circumstances. 
These motions are of different kinds: one is to follow something, which requires that both eyes must go in the same direction, right or left, and the other is to point them toward the same place at various distances away, which requires that they must move oppositely. The nerves going into the muscles of the eye are already wired up for just such purposes. There is one set of nerves which will pull the muscles on the inside of one eye and the outside of the other, and relax the opposite muscles, so that the two eyes move together. There is another center where an excitation will cause the eyes to move in toward each other from parallel. Either eye can be turned out to the corner if the other eye moves toward the nose, but it is impossible consciously or unconsciously to turn both eyes out at the same time, not because there are no muscles, but because there is no way to send a signal to turn both eyes out, unless we have had an accident or there is something the matter, for instance if a nerve has been cut. Although the muscles of one eye can certainly steer that eye about, not even a Yogi is able to move both eyes out freely under voluntary control, because there does not seem to be any way to do it. We are already wired to a certain extent. This is an important point, because most of the earlier books on anatomy and psychology, and so on, do not appreciate or do not emphasize the fact that we are so completely wired already—they say that everything is just learned.
36–3 The rod cells
Let us now examine in more detail what happens in the rod cells. Figure 36–5 shows an electron micrograph of the middle of a rod cell (the rod cell keeps going up out of the field). There are layer after layer of plane structures, shown magnified at the right, which contain the substance rhodopsin (visual purple), the dye, or pigment, which produces the effects of vision in the rods. The rhodopsin, which is the pigment, is a big protein which contains a special group called retinene, which can be taken off the protein, and which is, undoubtedly, the main cause of the absorption of light. We do not understand the reason for the planes, but it is very likely that there is some reason for holding all the rhodopsin molecules parallel. The chemistry of the thing has been worked out to a large extent, but there might be some physics to it. It may be that all of the molecules are arranged in some kind of a row so that when one is excited an electron which is generated, say, may run all the way down to some place at the end to get the signal out, or something. This subject is very important, and has not been worked out. It is a field in which both biochemistry and solid state physics, or something like it, will ultimately be used. This kind of a structure, with layers, appears in other circumstances where light is important, for example in the chloroplast in plants, where the light causes photosynthesis. If we magnify those, we find the same thing with almost the same kind of layers, but there we have chlorophyll, of course, instead of retinene. The chemical form of retinene is shown in Fig. 36–6. It has a series of alternate double bonds along the side chain, which is characteristic of nearly all strongly absorbing organic substances, like chlorophyll, blood, and so on. This substance is impossible for human beings to manufacture in their own cells—we have to eat it. 
So we eat it in the form of a special substance, which is exactly the same as retinene except that there is a hydrogen tied on the right end; it is called vitamin A, and if we do not eat enough of it, we do not get a supply of retinene, and the eye becomes what we call night blind, because there is then not enough pigment in the rhodopsin to see with the rods at night. The reason why such a series of double bonds absorbs light very strongly is also known. We may just give a hint: The alternating series of double bonds is called a conjugated double bond; a double bond means that there is an extra electron there, and this extra electron is easily shifted to the right or left. When light strikes this molecule, the electron of each double bond is shifted over by one step. All the electrons in the whole chain shift, like a string of dominoes falling over, and though each one moves only a little distance (we would expect that, in a single atom, we could move the electron only a little distance), the net effect is the same as though the one at the end was moved over to the other end! It is the same as though one electron went the whole distance back and forth, and so, in this manner, we get a much stronger absorption under the influence of the electric field, than if we could only move the electron a distance which is associated with one atom. So, since it is easy to move the electrons back and forth, retinene absorbs light very strongly; that is the machinery of the physical-chemical end of it.
36–4 The compound (insect) eye
Let us now return to biology. The human eye is not the only kind of eye. In the vertebrates, almost all eyes are essentially like the human eye. However, in the lower animals there are many other kinds of eyes: eye spots, various eye cups, and other less sensitive things, which we have no time to discuss. But there is one other highly developed eye among the invertebrates, the compound eye of the insect. (Most insects having large compound eyes also have various additional simpler eyes as well.) A bee is an insect whose vision has been studied very carefully. It is easy to study the properties of the vision of bees because they are attracted to honey, and we can make experiments in which we identify the honey by putting it on blue paper or red paper, and see which one they come to. By this method some very interesting things have been discovered about the vision of the bee. In the first place, in trying to measure how acutely bees could see the color difference between two pieces of “white” paper, some researchers found they were not very good, and others found they were fantastically good. Even if the two pieces of white paper were almost exactly the same, the bees could still tell the difference. The experimenters used zinc white for one piece of paper and lead white for the other, and although these look exactly the same to us, the bee could easily distinguish them, because they reflect a different amount in the ultraviolet. In this way it was discovered that the bee’s eye is sensitive over a wider range of the spectrum than is our own. Our eye works from $7000$ angstroms to $4000$ angstroms, from red to violet, but the bee’s can see down to $3000$ angstroms into the ultraviolet! This makes for a number of different interesting effects. In the first place, bees can distinguish between many flowers which to us look alike. 
Of course, we must realize that the colors of flowers are not designed for our eyes, but for the bee; they are signals to attract the bees to a specific flower. We all know that there are many “white” flowers. Apparently white is not very interesting to the bees, because it turns out that all of the white flowers have different proportions of reflection in the ultraviolet; they do not reflect one hundred percent of the ultraviolet as would a true white. All the light is not coming back, the ultraviolet is missing, and that is a color, just as, for us, if the blue is missing, it comes out yellow. So, all the flowers are colored for the bees. However, we also know that red cannot be seen by bees. Thus we might expect that all red flowers should look black to the bee. Not so! A careful study of red flowers shows, first, that even with our own eye we can see that a great majority of red flowers have a bluish tinge because they are mainly reflecting an additional amount in the blue, which is the part that the bee sees. Furthermore, experiments also show that flowers vary in their reflection of the ultraviolet over different parts of the petals, and so on. So if we could see the flowers as bees see them they would be even more beautiful and varied! It has been shown, however, that there are a few red flowers which do not reflect in the blue or in the ultraviolet, and would, therefore, appear black to the bee! This was of quite some concern to the people who worry about this matter, because black does not seem like an interesting color, since it is hard to tell from a dirty old shadow. It actually turned out that these flowers were not visited by bees; these are the flowers that are visited by hummingbirds, and hummingbirds can see the red! Another interesting aspect of the vision of the bee is that bees can apparently tell the direction of the sun by looking at a patch of blue sky, without seeing the sun itself. We cannot easily do this. 
If we look out the window at the sky and see that it is blue, in which direction is the sun? The bee can tell, because the bee is quite sensitive to the polarization of light, and the scattered light of the sky is polarized. There is still some debate about how this sensitivity operates. Whether it is because the reflections of the light are different in different circumstances, or the bee’s eye is directly sensitive, is not yet known. It is also said that the bee can notice flicker up to $200$ oscillations per second, while we see it only up to $20$. The motions of bees in the hives are very quick; the feet move and the wings vibrate, but it is very hard for us to see these motions with our eye. However, if we could see more rapidly we would be able to see the motion. It is probably very important to the bee that its eye has such a rapid response. Now let us discuss the visual acuity we could expect from the bee. The eye of a bee is a compound eye, and it is made of a large number of special cells called ommatidia, which are arranged conically on the surface of a sphere (roughly) on the outside of the bee’s head. Figure 36–7 shows a picture of one such ommatidium. At the top there is a transparent area, a kind of “lens,” but actually it is more like a filter or light pipe to make the light come down along the narrow fiber, which is where the absorption presumably occurs. Out of the other end of it comes the nerve fiber. The central fiber is surrounded on its sides by six cells which, in fact, have secreted the fiber. That is enough description for our purposes; the point is that it is a conical thing and many can fit next to each other all over the surface of the eye of the bee. Now let us discuss the resolution of the eye of the bee. If we draw lines (Fig. 
36–8) to represent the ommatidia on the surface, which we suppose is a sphere of radius $r$, we may actually calculate how wide each ommatidium is by using our brains, and assuming that evolution is as clever as we are! If we have a very large ommatidium we do not have much resolution. That is, one cell gets a piece of information from one direction, and the adjacent cell gets a piece of information from another direction, and so on, and the bee cannot see things in between very well. So the uncertainty of visual acuity in the eye will surely correspond to an angle, the angle of the end of the ommatidium relative to the center of curvature of the eye. (The eye cells, of course, exist only at the surface of the sphere; inside that is the head of the bee.) This angle, from one ommatidium to the next, is, of course, the diameter of the ommatidia divided by the radius of the eye surface: \begin{equation} \label{Eq:I:36:1} \Delta\theta_g=\delta/r. \end{equation} So, we may say, “The finer we make the $\delta$, the more the visual acuity. So why doesn’t the bee just use very, very fine ommatidia?” Answer: We know enough physics to realize that if we are trying to get light down into a narrow slot, we cannot see accurately in a given direction because of the diffraction effect. The light that comes from several directions can enter and, due to diffraction, we will get light coming in at angle $\Delta\theta_d$ such that \begin{equation} \label{Eq:I:36:2} \Delta\theta_d=\lambda/\delta. \end{equation} Now we see that if we make the $\delta$ too small, then each ommatidium does not look in only one direction, because of diffraction! If we make them too big, each one sees in a definite direction, but there are not enough of them to get a good view of the scene. So we adjust the distance $\delta$ in order to make minimal the total effect of these two. If we add the two together, and find the place where the sum has a minimum (Fig. 
36–9), we find that \begin{equation} \label{Eq:I:36:3} \ddt{(\Delta\theta_g+\Delta\theta_d)}{\delta}=0 =\frac{1}{r}-\frac{\lambda}{\delta^2}, \end{equation} which gives us a distance \begin{equation} \label{Eq:I:36:4} \delta=\sqrt{\lambda r}. \end{equation} If we guess that $r$ is about $3$ millimeters, take the light that the bee sees as $4000$ angstroms, and put the two together and take the square root, we find \begin{align} \delta &=(3\times10^{-3}\times4\times10^{-7})^{1/2}\text{ m}\notag\\[1ex] \label{Eq:I:36:5} &=3.5\times10^{-5}\text{ m}=35\text{ $\mu$m}. \end{align} The book says the diameter is $30$ $\mu$m, so that is rather good agreement! So, apparently, it really works, and we can understand what determines the size of the bee’s eye! It is also easy to put the above number back in and find out how good the bee’s eye actually is in angular resolution; it is very poor relative to our own. We can see things that are thirty times smaller in apparent size than the bee; the bee has a rather fuzzy out-of-focus image relative to what we can see. Nevertheless it is all right, and it is the best they can do. We might ask why the bees do not develop a good eye like our own, with a lens and so on. There are several interesting reasons. In the first place, the bee is too small; if it had an eye like ours, but on its scale, the opening would be about $30$ $\mu$m in size and diffraction would be so important that it would not be able to see very well anyway. The eye is not good if it is too small. Secondly, if it were as big as the bee’s head, then the eye would occupy the whole head of the bee. The beauty of the compound eye is that it takes up no space, it is just a very thin layer on the surface of the bee. So when we argue that they should have done it our way, we must remember that they had their own problems!
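The little optimization above is easy to check numerically. The following sketch (using the illustrative values from the text: $r \approx 3$ mm and $\lambda \approx 4000$ angstroms) evaluates the two competing blur angles, $\Delta\theta_g = \delta/r$ and $\Delta\theta_d = \lambda/\delta$, and confirms that their sum is smallest near $\delta = \sqrt{\lambda r}$:

```python
import math

r = 3e-3     # radius of the bee's eye surface, ~3 mm (from the text)
lam = 4e-7   # wavelength the bee sees, ~4000 angstroms = 400 nm

# Optimal ommatidium diameter from d/d(delta) [delta/r + lam/delta] = 0
delta_opt = math.sqrt(lam * r)
print(f"optimal delta = {delta_opt * 1e6:.1f} micrometers")  # about 35 um

def total_blur(delta):
    """Geometric blur plus diffraction blur, in radians."""
    return delta / r + lam / delta

# Verify by brute force over candidate diameters from 5 to 199 micrometers
candidates = [d * 1e-6 for d in range(5, 200)]
best = min(candidates, key=total_blur)
print(f"numerical minimum near {best * 1e6:.0f} micrometers")
```

The brute-force search lands on the same answer as the calculus, which is the "good agreement" with the measured $30$ $\mu$m diameter quoted in the text.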
36–5 Other eyes
Besides the bees, many other animals can see color. Fish, butterflies, birds, and reptiles can see color, but it is believed that most mammals cannot. The primates can see color. The birds certainly see color, and that accounts for the colors of birds. There would be no point in having such brilliantly colored males if the females could not notice it! That is, the evolution of the sexual “whatever it is” that the birds have is a result of the female being able to see color. So next time we look at a peacock and think of what a brilliant display of gorgeous color it is, and how delicate all the colors are, and what a wonderful aesthetic sense it takes to appreciate all that, we should not compliment the peacock, but should compliment the visual acuity and aesthetic sense of the peahen, because that is what has generated the beautiful scene! All invertebrates have poorly developed eyes or compound eyes, but all the vertebrates have eyes very similar to our own, with one exception. If we consider the highest form of animal, we usually say, “Here we are!,” but if we take a less prejudiced point of view and restrict ourselves to the invertebrates, so that we cannot include ourselves, and ask what is the highest invertebrate animal, most zoologists agree that the octopus is the highest animal! It is very interesting that, besides the development of its brain and its reactions and so on, which are rather good for an invertebrate, it has also developed, independently, a different eye. It is not a compound eye or an eye spot—it has a cornea, it has lids, it has an iris, it has a lens, it has two regions of water, it has a retina behind. It is essentially the same as the eye of the vertebrates! It is a remarkable example of a coincidence in evolution where nature has twice discovered the same solution to a problem, with one slight improvement. 
In the octopus it also turns out, amazingly, that the retina is a piece of the brain that has come out in the same way in its embryonic development as is true for vertebrates, but the interesting thing which is different is that the cells which are sensitive to light are on the inside, and the cells which do the calculation are in back of them, rather than “inside out,” as in our eye. So we see, at least, that there is no good reason for its being inside out. The other time nature tried it, she got it straightened out! (See Fig. 36–10.) The biggest eyes in the world are those of the giant squid; they have been found up to $15$ inches in diameter!
36–6 Neurology of vision
One of the main points of our subject is the interconnection of information from one part of the eye to the other. Let us consider the compound eye of the horseshoe crab, on which considerable experimentation has been done. First of all, we must appreciate what kind of information can come along nerves. A nerve carries a kind of disturbance which has an electrical effect that is easy to detect, a kind of wavelike disturbance which runs down the nerve and produces an effect at the other end: a long piece of the nerve cell, called the axon, carries the information along, and a certain kind of impulse, called a “spike,” goes along if it is excited at one end. When one spike goes down the nerve, another cannot immediately follow. All the spikes are of the same size, so it is not that we get higher spikes when the thing is more strongly excited, but that we get more spikes per second. The size of the spike is determined by the fiber. It is important to appreciate this in order to see what happens next. Figure 36–11(a) shows the compound eye of the horseshoe crab; it is not very much of an eye, it has only about a thousand ommatidia. Figure 36–11(b) is a cross section through the system; one can see the ommatidia, with the nerve fibers that run out of them and go into the brain. But note that even in a horseshoe crab there are little interconnections. They are much less elaborate than in the human eye, and it gives us a chance to study a simpler example. Let us now look at the experiments which have been done by putting fine electrodes into the optic nerve of the horseshoe crab, and shining light on only one of the ommatidia, which is easy to do with lenses. If we turn a light on at some instant $t_0$, and measure the electrical pulses that come out, we find that there is a slight delay and then a rapid series of discharges which gradually slow down to a uniform rate, as shown in Fig. 36–12(a). When the light goes out, the discharge stops. 
Now it is very interesting that if, while our amplifier is connected to this same nerve fiber, we shine light on a different ommatidium nothing happens; no signal. Now we do another experiment: we shine the light on the original ommatidium and get the same response, but if we now turn light on another one nearby as well, the pulses are interrupted briefly and then run at a much lower rate (Fig. 36–12b). The rate of one is inhibited by the impulses which are coming out of the other! In other words, each nerve fiber carries the information from one ommatidium, but the amount that it carries is inhibited by the signals from the others. So, for example, if the whole eye is more or less uniformly illuminated, the information coming from any one ommatidium will be relatively weak, because it is inhibited by so many. In fact the inhibition is additive—if we shine light on several nearby ommatidia the inhibition is very great. The inhibition is greater when the ommatidia are closer, and if the ommatidia are far enough away from one another, inhibition is practically zero. So it is additive and depends on the distance; here is a first example of information from different parts of the eye being combined in the eye itself. We can see, perhaps, if we think about it awhile, that this is a device to enhance contrast at the edges of objects, because if a part of the scene is light and a part is black, then the ommatidia in the lighted area give impulses that are inhibited by all the other light in the neighborhood, so it is relatively weak. On the other hand, an ommatidium at the boundary which is given a “white” impulse is also inhibited by others in the neighborhood, but there are not as many of them, since some are black; the net signal is therefore stronger. The result would be a curve, something like that of Fig. 36–13. The crab will see an enhancement of the contour. 
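The contrast enhancement at a boundary can be reproduced with a toy model of this lateral inhibition. In the sketch below (the inhibition strength and reach are invented for illustration), each ommatidium's output is its own excitation minus a distance-weighted sum of its neighbors', so units just inside the bright side of an edge are inhibited less than units deep in the bright region, and the response overshoots there, as in Fig. 36–13:

```python
# A row of ommatidia viewing a scene that is bright on the left
# and dark on the right.

def lateral_inhibition(excitation, k=0.1, reach=3):
    """Each unit's output = own excitation - k * sum of neighbors',
    with a neighbor's contribution falling off as 1/distance."""
    n = len(excitation)
    out = []
    for i in range(n):
        inhibition = sum(
            excitation[j] / abs(i - j)  # weaker when farther away
            for j in range(max(0, i - reach), min(n, i + reach + 1))
            if j != i
        )
        out.append(excitation[i] - k * inhibition)
    return out

scene = [1.0] * 6 + [0.2] * 6  # light region, then dark region
response = lateral_inhibition(scene)
print([round(x, 2) for x in response])
```

Printing the response shows a bump just on the bright side of the edge and a dip just on the dark side: the crab's eye has drawn an outline around the boundary all by itself.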
The fact that there is an enhancement of contours has long been known; in fact it is a remarkable thing that has been commented on by psychologists many times. In order to draw an object, we have only to draw its outline. How used we are to looking at pictures that have only the outline! What is the outline? The outline is only the edge difference between light and dark or one color and another. It is not something definite. It is not, believe it or not, that every object has a line around it! There is no such line. It is only in our own psychological makeup that there is a line; we are beginning to understand the reasons why the “line” is enough of a clue to get the whole thing. Presumably our own eye works in some similar manner—much more complicated, but similar. Finally, we shall briefly describe the more elaborate work, the beautiful, advanced work that has been done on the frog. Doing a corresponding experiment on a frog, by putting very fine, beautifully built needlelike probes into the optic nerve of a frog, one can obtain the signals that are going along one particular axon and, just as in the case of the horseshoe crab, we find that the information does not depend on just one spot in the eye, but is a sum of information over several spots. The most recent picture of the operation of the frog’s eye is the following. One can find four different kinds of optic nerve fibers, in the sense that there are four different kinds of responses. These experiments were not done by shining on-and-off impulses of light, because that is not what a frog sees. A frog just sits there and his eyes never move, unless the lily pad is flopping back and forth, and in that case his eyes wobble just right so that the image stays put. He does not turn his eyes. 
If anything moves in his field of vision, like a little bug (he has to be able to see something small moving in the fixed background), it turns out that there are four different kinds of fibers which discharge, whose properties are summarized in Table 36–1. Sustained edge detection, nonerasable, means that if we bring an object with an edge into the field of view of the frog, then there are a lot of impulses in this particular fiber while the object is moving, but they die down to a sustained impulse that continues as long as the edge is there, even if it is standing still. If we turn out the light, the impulses stop. If we turn it on again while the edge is still in view, they start again. They are not erasable. Another kind of fiber is very similar, except that if the edge is straight it does not work. It must be a convex edge with dark behind it! How complicated must be the system of interconnections in the retina of the eye of the frog in order for it to understand that a convex surface has moved in! Furthermore, although this fiber does sustain somewhat, it does not sustain as long as the other, and if we turn out the light and turn it on again it does not build up again. It depends on the moving in of the convex surface. The eye sees it move in and remembers that it is there, but if we merely turn out the light for a moment, it simply forgets it and no longer sees it. Another example is change-in-contrast detection. If there is an edge moving in or out there are pulses, but if the thing stands still there are no pulses at all. Then there is a dimming detector. If the light intensity is going down it creates pulses, but if it stays down or stays up, the impulse stops; it only works while the light is dimming. Then, finally, there are a few fibers which are dark detectors—a most amazing thing—they fire all the time! If we increase the light, they fire less rapidly, but all the time. If we decrease the light, they fire more rapidly, all the time. 
In the dark they fire like mad, perpetually saying, “It is dark! It is dark! It is dark!” Now these responses seem to be rather complicated to classify, and we might wonder whether perhaps the experiments are being misinterpreted. But it is very interesting that these same classes are very clearly separated in the anatomy of the frog! By other measurements, after these responses had been classified (afterwards, that is what is important about this), it was discovered that the speed of the signals on the different fibers was not the same, so here was another, independent way to check which kind of a fiber we have found! Another interesting question is from how big an area is one particular fiber making its calculations? The answer is different for the different classes. Figure 36–14 shows the surface of the so-called tectum of a frog, where the nerves come into the brain from the optic nerve. All the nerve fibers coming in from the optic nerve make connections in various layers of the tectum. This layered structure is analogous to the retina; that is partly why we know that the brain and retina are very similar. Now, by taking an electrode and moving it down in succession through the layers, we can find out which kinds of optic nerves end where, and the beautiful and wonderful result is that the different kinds of fibers end in different layers! The first ones end in number $1$ type, the second in number $2$, the threes and fives end in the same place, and deepest of all is number four. (What a coincidence, they got the numbers almost in the right order! No, that is why they numbered them that way, the first paper had the numbers in a different order!) We may briefly summarize what we have just learned this way: There are three pigments, presumably. 
There may be many different kinds of receptor cells containing the three pigments in different proportions, but there are many cross connections which may permit additions and subtractions through addition and reinforcement in the nervous system. So before we really understand color vision, we will have to understand the final sensation. This subject is still an open one, but these researches with microelectrodes and so on will perhaps ultimately give us more information on how we see color.

Bibliography

Committee on Colorimetry, Optical Society of America, The Science of Color, Thomas Y. Crowell Company, New York, 1953.

“Mechanisms of Vision,” 2nd Supplement to Journal of General Physiology, Vol. 43, No. 6, Part 2, July 1960, Rockefeller Institute Press.

Specific articles:

DeRobertis, E., “Some Observations on the Ultrastructure and Morphogenesis of Photoreceptors,” pp. 1–15.

Hurvich, L. M. and D. Jameson, “Perceived Color, Induction Effects, and Opponent-Response Mechanisms,” pp. 63–80.

Rosenblith, W. A., ed., Sensory Communication, Massachusetts Institute of Technology Press, Cambridge, Mass., 1961.

“Sight, Sense of,” Encyclopaedia Britannica, Vol. 20, 1957, pp. 628–635.
37 Quantum Behavior

37–1 Atomic mechanics
In the last few chapters we have treated the essential ideas necessary for an understanding of most of the important phenomena of light—or electromagnetic radiation in general. (We have left a few special topics for next year: specifically, the theory of the index of dense materials, and total internal reflection.) What we have dealt with is called the “classical theory” of electric waves, which turns out to be a completely adequate description of nature for a large number of effects. We have not had to worry yet about the fact that light energy comes in lumps or “photons.” We would like to take up as our next subject the problem of the behavior of relatively large pieces of matter—their mechanical and thermal properties, for instance. In discussing these, we will find that the “classical” (or older) theory fails almost immediately, because matter is really made up of atomic-sized particles. Still, we will deal only with the classical part, because that is the only part that we can understand using the classical mechanics we have been learning. But we shall not be very successful. We shall find that in the case of matter, unlike the case of light, we shall be in difficulty relatively soon. We could, of course, continuously skirt away from the atomic effects, but we shall instead interpose here a short excursion in which we will describe the basic ideas of the quantum properties of matter, i.e., the quantum ideas of atomic physics, so that you will have some feeling for what it is we are leaving out. For we will have to leave out some important subjects that we cannot avoid coming close to. So we will give now the introduction to the subject of quantum mechanics, but will not be able actually to get into the subject until much later. “Quantum mechanics” is the description of the behavior of matter in all its details and, in particular, of the happenings on an atomic scale. Things on a very small scale behave like nothing that you have any direct experience about.
They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen. Newton thought that light was made up of particles, but then it was discovered, as we have seen here, that it behaves like a wave. Later, however (in the beginning of the twentieth century) it was found that light did indeed sometimes behave like a particle. Historically, the electron, for example, was thought to behave like a particle, and then it was found that in many respects it behaved like a wave. So it really behaves like neither. Now we have given up. We say: “It is like neither.” There is one lucky break, however—electrons behave just like light. The quantum behavior of atomic objects (electrons, protons, neutrons, photons, and so on) is the same for all, they are all “particle waves,” or whatever you want to call them. So what we learn about the properties of electrons (which we shall use for our examples) will apply also to all “particles,” including photons of light. The gradual accumulation of information about atomic and small-scale behavior during the first quarter of the 20th century, which gave some indications about how small things do behave, produced an increasing confusion which was finally resolved in 1926 and 1927 by Schrödinger, Heisenberg, and Born. They finally obtained a consistent description of the behavior of matter on a small scale. We take up the main features of that description in this chapter. Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to and it appears peculiar and mysterious to everyone, both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct, human experience and of human intuition applies to large objects. 
We know how large objects will act, but things on a small scale just do not act that way. So we have to learn about them in a sort of abstract or imaginative fashion and not by connection with our direct experience. In this chapter we shall tackle immediately the basic element of the mysterious behavior in its most strange form. We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot explain the mystery in the sense of “explaining” how it works. We will tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics.
37–2 An experiment with bullets
To try to understand the quantum behavior of electrons, we shall compare and contrast their behavior, in a particular experimental setup, with the more familiar behavior of particles like bullets, and with the behavior of waves like water waves. We consider first the behavior of bullets in the experimental setup shown diagrammatically in Fig. 37–1. We have a machine gun that shoots a stream of bullets. It is not a very good gun, in that it sprays the bullets (randomly) over a fairly large angular spread, as indicated in the figure. In front of the gun we have a wall (made of armor plate) that has in it two holes just about big enough to let a bullet through. Beyond the wall is a backstop (say a thick wall of wood) which will “absorb” the bullets when they hit it. In front of the backstop we have an object which we shall call a “detector” of bullets. It might be a box containing sand. Any bullet that enters the detector will be stopped and accumulated. When we wish, we can empty the box and count the number of bullets that have been caught. The detector can be moved back and forth (in what we will call the $x$-direction). With this apparatus, we can find out experimentally the answer to the question: “What is the probability that a bullet which passes through the holes in the wall will arrive at the backstop at the distance $x$ from the center?” First, you should realize that we should talk about probability, because we cannot say definitely where any particular bullet will go. A bullet which happens to hit one of the holes may bounce off the edges of the hole, and may end up anywhere at all. By “probability” we mean the chance that the bullet will arrive at the detector, which we can measure by counting the number which arrive at the detector in a certain time and then taking the ratio of this number to the total number that hit the backstop during that time. 
Or, if we assume that the gun always shoots at the same rate during the measurements, the probability we want is just proportional to the number that reach the detector in some standard time interval. For our present purposes we would like to imagine a somewhat idealized experiment in which the bullets are not real bullets, but are indestructible bullets—they cannot break in half. In our experiment we find that bullets always arrive in lumps, and when we find something in the detector, it is always one whole bullet. If the rate at which the machine gun fires is made very low, we find that at any given moment either nothing arrives, or one and only one—exactly one—bullet arrives at the backstop. Also, the size of the lump certainly does not depend on the rate of firing of the gun. We shall say: “Bullets always arrive in identical lumps.” What we measure with our detector is the probability of arrival of a lump. And we measure the probability as a function of $x$. The result of such measurements with this apparatus (we have not yet done the experiment, so we are really imagining the result) is plotted in the graph drawn in part (c) of Fig. 37–1. In the graph we plot the probability to the right and $x$ vertically, so that the $x$-scale fits the diagram of the apparatus. We call the probability $P_{12}$ because the bullets may have come either through hole $1$ or through hole $2$. You will not be surprised that $P_{12}$ is large near the middle of the graph but gets small if $x$ is very large. You may wonder, however, why $P_{12}$ has its maximum value at $x = 0$. We can understand this fact if we do our experiment again after covering up hole $2$, and once more while covering up hole $1$. When hole $2$ is covered, bullets can pass only through hole $1$, and we get the curve marked $P_1$ in part (b) of the figure. As you would expect, the maximum of $P_1$ occurs at the value of $x$ which is on a straight line with the gun and hole $1$.
When hole $1$ is closed, we get the symmetric curve $P_2$ drawn in the figure. $P_2$ is the probability distribution for bullets that pass through hole $2$. Comparing parts (b) and (c) of Fig. 37–1, we find the important result that \begin{equation} \label{Eq:I:37:1} P_{12}=P_1+P_2. \end{equation} The probabilities just add together. The effect with both holes open is the sum of the effects with each hole open alone. We shall call this result an observation of “no interference,” for a reason that you will see later. So much for bullets. They come in lumps, and their probability of arrival shows no interference.
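Since each classical bullet goes through exactly one hole, the rule $P_{12}=P_1+P_2$ can be checked with a few lines of simulation. The sketch below is only illustrative: the aim points, the scatter at the hole edges, and the bin layout are invented numbers, not anything specified by the experiment.

```python
import random

random.seed(1)

def simulate_bullets(n):
    """Fire n bullets; each one passes through exactly one hole and lands
    with some random scatter around the line through that hole.
    Returns the landing positions grouped by the hole each bullet used."""
    hole_center = {1: +1.0, 2: -1.0}   # assumed aim points on the backstop
    hits = {1: [], 2: []}
    for _ in range(n):
        h = random.choice((1, 2))
        hits[h].append(random.gauss(hole_center[h], 0.8))
    return hits

def histogram(xs, bins=40, lo=-4.0, hi=4.0):
    counts = [0] * bins
    for x in xs:
        if lo <= x < hi:
            counts[int((x - lo) / (hi - lo) * bins)] += 1
    return counts

hits = simulate_bullets(100_000)
n1 = histogram(hits[1])               # hole 2 covered: the curve P1
n2 = histogram(hits[2])               # hole 1 covered: the curve P2
n12 = histogram(hits[1] + hits[2])    # both holes open: the curve P12

# Each bullet belongs to exactly one of the two classes, so the both-holes
# histogram is exactly the sum of the single-hole histograms: no interference.
assert all(a + b == c for a, b, c in zip(n1, n2, n12))
```

The assertion is structural, not statistical: for lumps that take definite paths, the probabilities add bin by bin.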
37–3 An experiment with waves
Now we wish to consider an experiment with water waves. The apparatus is shown diagrammatically in Fig. 37–2. We have a shallow trough of water. A small object labeled the “wave source” is jiggled up and down by a motor and makes circular waves. To the right of the source we have again a wall with two holes, and beyond that is a second wall, which, to keep things simple, is an “absorber,” so that there is no reflection of the waves that arrive there. This can be done by building a gradual sand “beach.” In front of the beach we place a detector which can be moved back and forth in the $x$-direction, as before. The detector is now a device which measures the “intensity” of the wave motion. You can imagine a gadget which measures the height of the wave motion, but whose scale is calibrated in proportion to the square of the actual height, so that the reading is proportional to the intensity of the wave. Our detector reads, then, in proportion to the energy being carried by the wave—or rather, the rate at which energy is carried to the detector. With our wave apparatus, the first thing to notice is that the intensity can have any size. If the source just moves a very small amount, then there is just a little bit of wave motion at the detector. When there is more motion at the source, there is more intensity at the detector. The intensity of the wave can have any value at all. We would not say that there was any “lumpiness” in the wave intensity. Now let us measure the wave intensity for various values of $x$ (keeping the wave source operating always in the same way). We get the interesting-looking curve marked $I_{12}$ in part (c) of the figure. We have already worked out how such patterns can come about when we studied the interference of electric waves. In this case we would observe that the original wave is diffracted at the holes, and new circular waves spread out from each hole. 
If we cover one hole at a time and measure the intensity distribution at the absorber we find the rather simple intensity curves shown in part (b) of the figure. $I_1$ is the intensity of the wave from hole $1$ (which we find by measuring when hole $2$ is blocked off) and $I_2$ is the intensity of the wave from hole $2$ (seen when hole $1$ is blocked). The intensity $I_{12}$ observed when both holes are open is certainly not the sum of $I_1$ and $I_2$. We say that there is “interference” of the two waves. At some places (where the curve $I_{12}$ has its maxima) the waves are “in phase” and the wave peaks add together to give a large amplitude and, therefore, a large intensity. We say that the two waves are “interfering constructively” at such places. There will be such constructive interference wherever the distance from the detector to one hole is a whole number of wavelengths larger (or shorter) than the distance from the detector to the other hole. At those places where the two waves arrive at the detector with a phase difference of $\pi$ (where they are “out of phase”) the resulting wave motion at the detector will be the difference of the two amplitudes. The waves “interfere destructively,” and we get a low value for the wave intensity. We expect such low values wherever the distance between hole $1$ and the detector is different from the distance between hole $2$ and the detector by an odd number of half-wavelengths. The low values of $I_{12}$ in Fig. 37–2 correspond to the places where the two waves interfere destructively. You will remember that the quantitative relationship between $I_1$, $I_2$, and $I_{12}$ can be expressed in the following way: The instantaneous height of the water wave at the detector for the wave from hole $1$ can be written as (the real part of) $\hat{h}_1e^{i\omega t}$, where the “amplitude” $\hat{h}_1$ is, in general, a complex number. 
The intensity is proportional to the mean squared height or, when we use the complex numbers, to $\abs{\hat{h}_1}^2$. Similarly, for hole $2$ the height is $\hat{h}_2e^{i\omega t}$ and the intensity is proportional to $\abs{\hat{h}_2}^2$. When both holes are open, the wave heights add to give the height $(\hat{h}_1 + \hat{h}_2)e^{i\omega t}$ and the intensity $\abs{\hat{h}_1 + \hat{h}_2}^2$. Omitting the constant of proportionality for our present purposes, the proper relations for interfering waves are \begin{equation} \label{Eq:I:37:2} I_1=\abs{\hat{h}_1}^2,\quad I_2=\abs{\hat{h}_2}^2,\quad I_{12}=\abs{\hat{h}_1+\hat{h}_2}^2. \end{equation} You will notice that the result is quite different from that obtained with bullets (Eq. 37.1). If we expand $\abs{\hat{h}_1 + \hat{h}_2}^2$ we see that \begin{equation} \label{Eq:I:37:3} \abs{\hat{h}_1 + \hat{h}_2}^2=\abs{\hat{h}_1}^2+\abs{\hat{h}_2}^2+ 2\abs{\hat{h}_1}\abs{\hat{h}_2}\cos\delta, \end{equation} where $\delta$ is the phase difference between $\hat{h}_1$ and $\hat{h}_2$. In terms of the intensities, we could write \begin{equation} \label{Eq:I:37:4} I_{12}=I_1+I_2+2\sqrt{I_1I_2}\cos\delta. \end{equation} The last term in (37.4) is the “interference term.” So much for water waves. The intensity can have any value, and it shows interference.
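Relations (37.2) through (37.4) are easy to verify numerically. In this sketch the two real amplitudes ($1.0$ and $0.7$) are arbitrary illustrative values:

```python
import cmath
import math

def intensities(delta, a1=1.0, a2=0.7):
    """Wave intensities at a detector where the two hole-waves arrive
    with real amplitudes a1, a2 and phase difference delta (Eq. 37.2)."""
    h1 = a1                              # reference wave from hole 1
    h2 = a2 * cmath.exp(1j * delta)      # wave from hole 2, shifted by delta
    return abs(h1) ** 2, abs(h2) ** 2, abs(h1 + h2) ** 2

# Check Eq. (37.4), I12 = I1 + I2 + 2*sqrt(I1*I2)*cos(delta), at many phases:
for k in range(100):
    delta = 2 * math.pi * k / 100
    I1, I2, I12 = intensities(delta)
    assert math.isclose(I12, I1 + I2 + 2 * math.sqrt(I1 * I2) * math.cos(delta))

# In phase the amplitudes add, I12 = (a1 + a2)^2; out of phase they subtract:
assert math.isclose(intensities(0.0)[2], (1.0 + 0.7) ** 2)
assert math.isclose(intensities(math.pi)[2], (1.0 - 0.7) ** 2)
```

Note that $I_{12}$ swings between $(\sqrt{I_1}+\sqrt{I_2})^2$ and $(\sqrt{I_1}-\sqrt{I_2})^2$, which is exactly the pattern of maxima and minima in Fig. 37–2.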
37–4 An experiment with electrons
Now we imagine a similar experiment with electrons. It is shown diagrammatically in Fig. 37–3. We make an electron gun which consists of a tungsten wire heated by an electric current and surrounded by a metal box with a hole in it. If the wire is at a negative voltage with respect to the box, electrons emitted by the wire will be accelerated toward the walls and some will pass through the hole. All the electrons which come out of the gun will have (nearly) the same energy. In front of the gun is again a wall (just a thin metal plate) with two holes in it. Beyond the wall is another plate which will serve as a “backstop.” In front of the backstop we place a movable detector. The detector might be a Geiger counter or, perhaps better, an electron multiplier, which is connected to a loudspeaker. We should say right away that you should not try to set up this experiment (as you could have done with the two we have already described). This experiment has never been done in just this way. The trouble is that the apparatus would have to be made on an impossibly small scale to show the effects we are interested in. We are doing a “thought experiment,” which we have chosen because it is easy to think about. We know the results that would be obtained because there are many experiments that have been done, in which the scale and the proportions have been chosen to show the effects we shall describe. The first thing we notice with our electron experiment is that we hear sharp “clicks” from the detector (that is, from the loudspeaker). And all “clicks” are the same. There are no “half-clicks.” We would also notice that the “clicks” come very erratically. Something like: click ….. click-click … click …….. click …. click-click …… click …, etc., just as you have, no doubt, heard a Geiger counter operating.
If we count the clicks which arrive in a sufficiently long time—say for many minutes—and then count again for another equal period, we find that the two numbers are very nearly the same. So we can speak of the average rate at which the clicks are heard (so-and-so-many clicks per minute on the average). As we move the detector around, the rate at which the clicks appear is faster or slower, but the size (loudness) of each click is always the same. If we lower the temperature of the wire in the gun the rate of clicking slows down, but still each click sounds the same. We would notice also that if we put two separate detectors at the backstop, one or the other would click, but never both at once. (Except that once in a while, if there were two clicks very close together in time, our ear might not sense the separation.) We conclude, therefore, that whatever arrives at the backstop arrives in “lumps.” All the “lumps” are the same size: only whole “lumps” arrive, and they arrive one at a time at the backstop. We shall say: “Electrons always arrive in identical lumps.” Just as for our experiment with bullets, we can now proceed to find experimentally the answer to the question: “What is the relative probability that an electron ‘lump’ will arrive at the backstop at various distances $x$ from the center?” As before, we obtain the relative probability by observing the rate of clicks, holding the operation of the gun constant. The probability that lumps will arrive at a particular $x$ is proportional to the average rate of clicks at that $x$. The result of our experiment is the interesting curve marked $P_{12}$ in part (c) of Fig. 37–3. Yes! That is the way electrons go.
37–5 The interference of electron waves
Now let us try to analyze the curve of Fig. 37–3 to see whether we can understand the behavior of the electrons. The first thing we would say is that since they come in lumps, each lump, which we may as well call an electron, has come either through hole $1$ or through hole $2$. Let us write this in the form of a “Proposition”: Proposition A: Each electron either goes through hole $1$ or it goes through hole $2$. Assuming Proposition A, all electrons that arrive at the backstop can be divided into two classes: (1) those that come through hole $1$, and (2) those that come through hole $2$. So our observed curve must be the sum of the effects of the electrons which come through hole $1$ and the electrons which come through hole $2$. Let us check this idea by experiment. First, we will make a measurement for those electrons that come through hole $1$. We block off hole $2$ and make our counts of the clicks from the detector. From the clicking rate, we get $P_1$. The result of the measurement is shown by the curve marked $P_1$ in part (b) of Fig. 37–3. The result seems quite reasonable. In a similar way, we measure $P_2$, the probability distribution for the electrons that come through hole $2$. The result of this measurement is also drawn in the figure. The result $P_{12}$ obtained with both holes open is clearly not the sum of $P_1$ and $P_2$, the probabilities for each hole alone. In analogy with our water-wave experiment, we say: “There is interference.” \begin{equation} \label{Eq:I:37:5} \text{For electrons:}\quad P_{12}\neq P_1+P_2. \end{equation} How can such an interference come about? Perhaps we should say: “Well, that means, presumably, that it is not true that the lumps go either through hole $1$ or hole $2$, because if they did, the probabilities should add. Perhaps they go in a more complicated way. They split in half and …” But no! 
They cannot, they always arrive in lumps … “Well, perhaps some of them go through $1$, and then they go around through $2$, and then around a few more times, or by some other complicated path … then by closing hole $2$, we changed the chance that an electron that started out through hole $1$ would finally get to the backstop …” But notice! There are some points at which very few electrons arrive when both holes are open, but which receive many electrons if we close one hole, so closing one hole increased the number from the other. Notice, however, that at the center of the pattern, $P_{12}$ is more than twice as large as $P_1 + P_2$. It is as though closing one hole decreased the number of electrons which come through the other hole. It seems hard to explain both effects by proposing that the electrons travel in complicated paths. It is all quite mysterious. And the more you look at it the more mysterious it seems. Many ideas have been concocted to try to explain the curve for $P_{12}$ in terms of individual electrons going around in complicated ways through the holes. None of them has succeeded. None of them can get the right curve for $P_{12}$ in terms of $P_1$ and $P_2$. Yet, surprisingly enough, the mathematics for relating $P_1$ and $P_2$ to $P_{12}$ is extremely simple. For $P_{12}$ is just like the curve $I_{12}$ of Fig. 37–2, and that was simple. What is going on at the backstop can be described by two complex numbers that we can call $\hat{\phi}_1$ and $\hat{\phi}_2$ (they are functions of $x$, of course). The absolute square of $\hat{\phi}_1$ gives the effect with only hole $1$ open. That is, $P_1 = \abs{\hat{\phi}_1}^2$. The effect with only hole $2$ open is given by $\hat{\phi}_2$ in the same way. That is, $P_2 = \abs{\hat{\phi}_2}^2$. And the combined effect of the two holes is just $P_{12} = \abs{\hat{\phi}_1 + \hat{\phi}_2}^2$. The mathematics is the same as that we had for the water waves! 
(It is hard to see how one could get such a simple result from a complicated game of electrons going back and forth through the plate on some strange trajectory.) We conclude the following: The electrons arrive in lumps, like particles, and the probability of arrival of these lumps is distributed like the distribution of intensity of a wave. It is in this sense that an electron behaves “sometimes like a particle and sometimes like a wave.” Incidentally, when we were dealing with classical waves we defined the intensity as the mean over time of the square of the wave amplitude, and we used complex numbers as a mathematical trick to simplify the analysis. But in quantum mechanics it turns out that the amplitudes must be represented by complex numbers. The real parts alone will not do. That is a technical point, for the moment, because the formulas look just the same. Since the probability of arrival through both holes is given so simply, although it is not equal to $(P_1 + P_2)$, that is really all there is to say. But there are a large number of subtleties involved in the fact that nature does work this way. We would like to illustrate some of these subtleties for you now. First, since the number that arrives at a particular point is not equal to the number that arrives through $1$ plus the number that arrives through $2$, as we would have concluded from Proposition A, undoubtedly we should conclude that Proposition A is false. It is not true that the electrons go either through hole $1$ or hole $2$. But that conclusion can be tested by another experiment.
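The amplitude arithmetic can be illustrated with a toy model in which each hole radiates a simple spherical wave toward the backstop. All the numbers below (wavelength, distances, hole separation) are invented for the sketch; nothing is tuned to a real electron:

```python
import cmath
import math

wavelength = 1.0                 # illustrative units
k = 2 * math.pi / wavelength
L, d = 100.0, 5.0                # wall-to-backstop distance, hole separation

def amplitude(x, hole_y):
    """Toy amplitude for reaching backstop position x from one hole:
    a spherical wave e^{ikr}/r."""
    r = math.hypot(L, x - hole_y)
    return cmath.exp(1j * k * r) / r

def patterns(x):
    phi1 = amplitude(x, +d / 2)
    phi2 = amplitude(x, -d / 2)
    P1, P2 = abs(phi1) ** 2, abs(phi2) ** 2
    return P1, P2, abs(phi1 + phi2) ** 2     # P12 = |phi1 + phi2|^2

# At the center the two path lengths are equal, the amplitudes add in
# phase, and P12 is twice as large as P1 + P2:
P1, P2, P12 = patterns(0.0)
assert math.isclose(P12, 2 * (P1 + P2))

# A little off center the waves arrive out of phase and nearly cancel,
# even though P1 and P2 separately are perfectly ordinary there:
P1, P2, P12 = patterns(10.0)
assert P12 < 0.01 * (P1 + P2)
```

The two assertions capture both puzzles in the text: closing one hole would lower the count at the center (where $P_{12}>P_1+P_2$) and raise it at the minima (where $P_{12}<P_1+P_2$).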
37–6 Watching the electrons
We shall now try the following experiment. To our electron apparatus we add a very strong light source, placed behind the wall and between the two holes, as shown in Fig. 37–4. We know that electric charges scatter light. So when an electron passes, however it does pass, on its way to the detector, it will scatter some light to our eye, and we can see where the electron goes. If, for instance, an electron were to take the path via hole $2$ that is sketched in Fig. 37–4, we should see a flash of light coming from the vicinity of the place marked $A$ in the figure. If an electron passes through hole $1$ we would expect to see a flash from the vicinity of the upper hole. If it should happen that we get light from both places at the same time, because the electron divides in half … Let us just do the experiment! Here is what we see: every time that we hear a “click” from our electron detector (at the backstop), we also see a flash of light either near hole $1$ or near hole $2$, but never both at once! And we observe the same result no matter where we put the detector. From this observation we conclude that when we look at the electrons we find that the electrons go either through one hole or the other. Experimentally, Proposition A is necessarily true. What, then, is wrong with our argument against Proposition A? Why isn’t $P_{12}$ just equal to $P_1 + P_2$? Back to experiment! Let us keep track of the electrons and find out what they are doing. For each position ($x$-location) of the detector we will count the electrons that arrive and also keep track of which hole they went through, by watching for the flashes. We can keep track of things this way: whenever we hear a “click” we will put a count in Column $1$ if we see the flash near hole $1$, and if we see the flash near hole $2$, we will record a count in Column $2$. Every electron which arrives is recorded in one of two classes: those which come through $1$ and those which come through $2$. 
From the number recorded in Column $1$ we get the probability $P_1'$ that an electron will arrive at the detector via hole $1$; and from the number recorded in Column $2$ we get $P_2'$, the probability that an electron will arrive at the detector via hole $2$. If we now repeat such a measurement for many values of $x$, we get the curves for $P_1'$ and $P_2'$ shown in part (b) of Fig. 37–4. Well, that is not too surprising! We get for $P_1'$ something quite similar to what we got before for $P_1$ by blocking off hole $2$; and $P_2'$ is similar to what we got by blocking hole $1$. So there is not any complicated business like going through both holes. When we watch them, the electrons come through just as we would expect them to come through. Those which we see come through hole $1$ are distributed in the same way whether hole $2$ is open or closed. But wait! What do we have now for the total probability, the probability that an electron will arrive at the detector by any route? We already have that information. We just pretend that we never looked at the light flashes, and we lump together the detector clicks which we have separated into the two columns. We must just add the numbers. For the probability that an electron will arrive at the backstop by passing through either hole, we do find $P_{12}' = P_1' + P_2'$. That is, although we succeeded in watching which hole our electrons come through, we no longer get the old interference curve $P_{12}$, but a new one, $P_{12}'$, showing no interference! If we turn out the light, $P_{12}$ is restored. We must conclude that when we look at the electrons the distribution of them on the screen is different than when we do not look. Perhaps it is turning on our light source that disturbs things? It must be that the electrons are very delicate, and the light, when it scatters off the electrons, gives them a jolt that changes their motion.
We know that the electric field of the light acting on a charge will exert a force on it. So perhaps we should expect the motion to be changed. Anyway, the light exerts a big influence on the electrons. By trying to “watch” the electrons we have changed their motions. That is, the jolt given to the electron when the photon is scattered by it is such as to change the electron’s motion enough so that if it might have gone to where $P_{12}$ was at a maximum it will instead land where $P_{12}$ was a minimum; that is why we no longer see the wavy interference effects. You may be thinking: “Don’t use such a bright source! Turn the brightness down! The light waves will then be weaker and will not disturb the electrons so much. Surely, by making the light dimmer and dimmer, eventually the wave will be weak enough that it will have a negligible effect.” O.K. Let’s try it. The first thing we observe is that the flashes of light scattered from the electrons as they pass by do not get weaker. They are always the same-sized flashes. The only thing that happens as the light is made dimmer is that sometimes we hear a “click” from the detector but see no flash at all. The electron has gone by without being “seen.” What we are observing is that light also acts like electrons: we knew that it was “wavy,” but now we find that it is also “lumpy.” It always arrives—or is scattered—in lumps that we call “photons.” As we turn down the intensity of the light source we do not change the size of the photons, only the rate at which they are emitted. That explains why, when our source is dim, some electrons get by without being seen. There did not happen to be a photon around at the time the electron went through. This is all a little discouraging. If it is true that whenever we “see” the electron we see the same-sized flash, then those electrons we see are always the disturbed ones. Let us try the experiment with a dim light anyway.
Now whenever we hear a click in the detector we will keep a count in three columns: in Column (1) those electrons seen by hole $1$, in Column (2) those electrons seen by hole $2$, and in Column (3) those electrons not seen at all. When we work up our data (computing the probabilities) we find these results: Those “seen by hole $1$” have a distribution like $P_1'$; those “seen by hole $2$” have a distribution like $P_2'$ (so that those “seen by either hole $1$ or $2$” have a distribution like $P_{12}'$); and those “not seen at all” have a “wavy” distribution just like $P_{12}$ of Fig. 37–3! If the electrons are not seen, we have interference! That is understandable. When we do not see the electron, no photon disturbs it, and when we do see it, a photon has disturbed it. There is always the same amount of disturbance because the light photons all produce the same-sized effects and the effect of the photons being scattered is enough to smear out any interference effect. Is there not some way we can see the electrons without disturbing them? We learned in an earlier chapter that the momentum carried by a “photon” is inversely proportional to its wavelength ($p = h/\lambda$). Certainly the jolt given to the electron when the photon is scattered toward our eye depends on the momentum that photon carries. Aha! If we want to disturb the electrons only slightly we should not have lowered the intensity of the light, we should have lowered its frequency (the same as increasing its wavelength). Let us use light of a redder color. We could even use infrared light, or radiowaves (like radar), and “see” where the electron went with the help of some equipment that can “see” light of these longer wavelengths. If we use “gentler” light perhaps we can avoid disturbing the electrons so much. Let us try the experiment with longer waves. We shall keep repeating our experiment, each time with light of a longer wavelength. At first, nothing seems to change. The results are the same. 
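To see why a longer wavelength means a gentler jolt, put numbers into the relation $p = h/\lambda$ quoted above. The particular wavelengths in this sketch are merely illustrative choices:

```python
import math

h = 6.626e-34   # Planck's constant, in joule-seconds

def photon_momentum(wavelength):
    """Momentum carried by a single photon: p = h / lambda."""
    return h / wavelength

# Doubling the wavelength halves the momentum the photon can deliver:
assert math.isclose(photon_momentum(1000e-9), photon_momentum(500e-9) / 2)

# A 3 cm radar-like photon gives a jolt tens of thousands of times
# smaller than a photon of blue light (450 nm):
assert photon_momentum(0.03) < photon_momentum(450e-9) / 10_000
```

So dimming the light only reduces how often an electron is hit; changing the color changes how hard each hit is.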
Then a terrible thing happens. You remember that when we discussed the microscope we pointed out that, due to the wave nature of the light, there is a limitation on how close two spots can be and still be seen as two separate spots. This distance is of the order of the wavelength of light. So now, when we make the wavelength longer than the distance between our holes, we see a big fuzzy flash when the light is scattered by the electrons. We can no longer tell which hole the electron went through! We just know it went somewhere! And it is just with light of this color that we find that the jolts given to the electron are small enough so that $P_{12}'$ begins to look like $P_{12}$—that we begin to get some interference effect. And it is only for wavelengths much longer than the separation of the two holes (when we have no chance at all of telling where the electron went) that the disturbance due to the light gets sufficiently small that we again get the curve $P_{12}$ shown in Fig. 37–3. In our experiment we find that it is impossible to arrange the light in such a way that one can tell which hole the electron went through, and at the same time not disturb the pattern. It was suggested by Heisenberg that the then new laws of nature could only be consistent if there were some basic limitation on our experimental capabilities not previously recognized. He proposed, as a general principle, his uncertainty principle, which we can state in terms of our experiment as follows: “It is impossible to design an apparatus to determine which hole the electron passes through, that will not at the same time disturb the electrons enough to destroy the interference pattern.” If an apparatus is capable of determining which hole the electron goes through, it cannot be so delicate that it does not disturb the pattern in an essential way. No one has ever found (or even thought of) a way around the uncertainty principle. 
So we must assume that it describes a basic characteristic of nature. The complete theory of quantum mechanics which we now use to describe atoms and, in fact, all matter depends on the correctness of the uncertainty principle. Since quantum mechanics is such a successful theory, our belief in the uncertainty principle is reinforced. But if a way to “beat” the uncertainty principle were ever discovered, quantum mechanics would give inconsistent results and would have to be discarded as a valid theory of nature. “Well,” you say, “what about Proposition A? It is true, or is it not true, that the electron either goes through hole $1$ or it goes through hole $2$?” The only answer that can be given is that we have found from experiment that there is a certain special way that we have to think in order that we do not get into inconsistencies. What we must say (to avoid making wrong predictions) is the following. If one looks at the holes or, more accurately, if one has a piece of apparatus which is capable of determining whether the electrons go through hole $1$ or hole $2$, then one can say that it goes either through hole $1$ or hole $2$. But, when one does not try to tell which way the electron goes, when there is nothing in the experiment to disturb the electrons, then one may not say that an electron goes either through hole $1$ or hole $2$. If one does say that, and starts to make any deductions from the statement, he will make errors in the analysis. This is the logical tightrope on which we must walk if we wish to describe nature successfully. If the motion of all matter—as well as electrons—must be described in terms of waves, what about the bullets in our first experiment? Why didn’t we see an interference pattern there? It turns out that for the bullets the wavelengths were so tiny that the interference patterns became very fine. So fine, in fact, that with any detector of finite size one could not distinguish the separate maxima and minima. 
What we saw was only a kind of average, which is the classical curve. In Fig. 37–5 we have tried to indicate schematically what happens with large-scale objects. Part (a) of the figure shows the probability distribution one might predict for bullets, using quantum mechanics. The rapid wiggles are supposed to represent the interference pattern one gets for waves of very short wavelength. Any physical detector, however, straddles several wiggles of the probability curve, so that the measurements show the smooth curve drawn in part (b) of the figure.
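The trade-off explored in these experiments can be put in rough numbers. Below is a sketch with illustrative values of my own choosing (the one-micron hole separation and the wavelengths are not from the text): the kick a scattered photon can deliver is of order $p = h/\lambda$, while telling the holes apart requires $\lambda$ smaller than their separation.

```python
# Illustrative numbers (assumed, not from the text): the photon kick p = h/lambda
# shrinks as the wavelength grows, but resolving the holes needs lambda < d.
h = 6.626e-34  # Planck's constant, J*s

def photon_momentum(wavelength_m):
    """Momentum carried by a photon of the given wavelength: p = h/lambda."""
    return h / wavelength_m

d = 1e-6  # hypothetical hole separation: 1 micron

for wavelength in (5e-7, 1e-6, 1e-4):  # green light, near infrared, far infrared
    p = photon_momentum(wavelength)
    resolves = wavelength < d           # can the flash still be localized to one hole?
    print(f"lambda = {wavelength:.0e} m: kick = {p:.2e} kg*m/s, resolves holes: {resolves}")
```

Gentler light (smaller kick) and which-hole information (resolution) pull in opposite directions, which is the content of the argument above.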
37–7 First principles of quantum mechanics
We will now write a summary of the main conclusions of our experiments. We will, however, put the results in a form which makes them true for a general class of such experiments. We can write our summary more simply if we first define an “ideal experiment” as one in which there are no uncertain external influences, i.e., no jiggling or other things going on that we cannot take into account. We would be quite precise if we said: “An ideal experiment is one in which all of the initial and final conditions of the experiment are completely specified.” What we will call “an event” is, in general, just a specific set of initial and final conditions. (For example: “an electron leaves the gun, arrives at the detector, and nothing else happens.”) Now for our summary.

Summary

(1) The probability of an event in an ideal experiment is given by the square of the absolute value of a complex number $\phi$ which is called the probability amplitude:
\begin{align*}
P&=\text{probability},\\
\phi&=\text{probability amplitude},\\
P&=|\phi|^2.
\end{align*}
(2) When an event can occur in several alternative ways, the probability amplitude for the event is the sum of the probability amplitudes for each way considered separately. There is interference:
\begin{align*}
\phi&=\phi_1+\phi_2,\\
P&=|\phi_1+\phi_2|^2.
\end{align*}
(3) If an experiment is performed which is capable of determining whether one or another alternative is actually taken, the probability of the event is the sum of the probabilities for each alternative. The interference is lost:
\begin{equation*}
P=P_1+P_2.
\end{equation*}

One might still like to ask: “How does it work? What is the machinery behind the law?” No one has found any machinery behind the law. No one can “explain” any more than we have just “explained.” No one will give you any deeper representation of the situation. We have no ideas about a more basic mechanism from which these results can be deduced. We would like to emphasize a very important difference between classical and quantum mechanics. We have been talking about the probability that an electron will arrive in a given circumstance. We have implied that in our experimental arrangement (or even in the best possible one) it would be impossible to predict exactly what would happen. We can only predict the odds! This would mean, if it were true, that physics has given up on the problem of trying to predict exactly what will happen in a definite circumstance. Yes! physics has given up. We do not know how to predict what would happen in a given circumstance, and we believe now that it is impossible, that the only thing that can be predicted is the probability of different events. It must be recognized that this is a retrenchment in our earlier ideal of understanding nature. 
It may be a backward step, but no one has seen a way to avoid it. We make now a few remarks on a suggestion that has sometimes been made to try to avoid the description we have given: “Perhaps the electron has some kind of internal works—some inner variables—that we do not yet know about. Perhaps that is why we cannot predict what will happen. If we could look more closely at the electron we would be able to tell where it would end up.” So far as we know, that is impossible. We would still be in difficulty. Suppose we were to assume that inside the electron there is some kind of machinery that determines where it is going to end up. That machine must also determine which hole it is going to go through on its way. But we must not forget that what is inside the electron should not be dependent on what we do, and in particular upon whether we open or close one of the holes. So if an electron, before it starts, has already made up its mind (a) which hole it is going to use, and (b) where it is going to land, we should find $P_1$ for those electrons that have chosen hole $1$, $P_2$ for those that have chosen hole $2$, and necessarily the sum $P_1+P_2$ for those that arrive through the two holes. There seems to be no way around this. But we have verified experimentally that that is not the case. And no one has figured a way out of this puzzle. So at the present time we must limit ourselves to computing probabilities. We say “at the present time,” but we suspect very strongly that it is something that will be with us forever—that it is impossible to beat that puzzle—that this is the way nature really is.
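The difference between combining probabilities and combining the underlying complex amplitudes, which is what separates the interfering and non-interfering cases above, can be checked with a few lines of arithmetic. The amplitude values here are made up purely for illustration.

```python
# Two made-up complex amplitudes for "via hole 1" and "via hole 2".
import cmath

phi1 = 0.6 * cmath.exp(1j * 0.0)
phi2 = 0.6 * cmath.exp(1j * 2.5)   # same magnitude, different phase

# Nothing determines the hole: amplitudes add first, then square.
P_interfering = abs(phi1 + phi2) ** 2
# The hole is determined: the separate probabilities add.
P_separate = abs(phi1) ** 2 + abs(phi2) ** 2

print(P_interfering, P_separate)
# They differ by the interference (cross) term 2*Re(phi1 * conj(phi2)).
```

Varying the phase of `phi2` sweeps `P_interfering` from constructive to destructive while `P_separate` never changes, just as the wiggles of $P_{12}$ differ from the smooth $P_1+P_2$.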
37–8 The uncertainty principle
This is the way Heisenberg stated the uncertainty principle originally: If you make the measurement on any object, and you can determine the $x$-component of its momentum with an uncertainty $\Delta p$, you cannot, at the same time, know its $x$-position more accurately than $\Delta x\geq\hbar/2\Delta p$. The uncertainties in the position and momentum at any instant must have their product greater than or equal to half the reduced Planck constant. This is a special case of the uncertainty principle that was stated above more generally. The more general statement was that one cannot design equipment in any way to determine which of two alternatives is taken, without, at the same time, destroying the pattern of interference. Let us show for one particular case that the kind of relation given by Heisenberg must be true in order to keep from getting into trouble. We imagine a modification of the experiment of Fig. 37–3, in which the wall with the holes consists of a plate mounted on rollers so that it can move freely up and down (in the $x$-direction), as shown in Fig. 37–6. By watching the motion of the plate carefully we can try to tell which hole an electron goes through. Imagine what happens when the detector is placed at $x = 0$. We would expect that an electron which passes through hole $1$ must be deflected downward by the plate to reach the detector. Since the vertical component of the electron momentum is changed, the plate must recoil with an equal momentum in the opposite direction. The plate will get an upward kick. If the electron goes through the lower hole, the plate should feel a downward kick. It is clear that for every position of the detector, the momentum received by the plate will have a different value for a traversal via hole $1$ than for a traversal via hole $2$. So! Without disturbing the electrons at all, but just by watching the plate, we can tell which path the electron used. 
Now in order to do this it is necessary to know what the momentum of the screen is, before the electron goes through. So when we measure the momentum after the electron goes by, we can figure out how much the plate’s momentum has changed. But remember, according to the uncertainty principle we cannot at the same time know the position of the plate with an arbitrary accuracy. But if we do not know exactly where the plate is we cannot say precisely where the two holes are. They will be in a different place for every electron that goes through. This means that the center of our interference pattern will have a different location for each electron. The wiggles of the interference pattern will be smeared out. We shall show quantitatively in the next chapter that if we determine the momentum of the plate sufficiently accurately to determine from the recoil measurement which hole was used, then the uncertainty in the $x$-position of the plate will, according to the uncertainty principle, be enough to shift the pattern observed at the detector up and down in the $x$-direction about the distance from a maximum to its nearest minimum. Such a random shift is just enough to smear out the pattern so that no interference is observed. The uncertainty principle “protects” quantum mechanics. Heisenberg recognized that if it were possible to measure the momentum and the position simultaneously with a greater accuracy, the quantum mechanics would collapse. So he proposed that it must be impossible. Then people sat down and tried to figure out ways of doing it, and nobody could figure out a way to measure the position and the momentum of anything—a screen, an electron, a billiard ball, anything—with any greater accuracy. Quantum mechanics maintains its perilous but accurate existence.
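The order-of-magnitude bookkeeping behind this argument can be sketched with assumed numbers (the electron momentum, hole separation, and distance below are hypothetical). Telling the holes apart means resolving a plate recoil of about $\delta p \approx p\,d/\ell$; the uncertainty relation then makes the plate position uncertain by roughly $h/\delta p$, which is just the fringe spacing $\lambda\ell/d$.

```python
# Order-of-magnitude check with assumed numbers (all hypothetical).
h = 6.626e-34   # Planck's constant, J*s
p = 1e-24       # electron momentum, kg*m/s
d = 1e-6        # separation of the two holes, m
l = 1.0         # distance from holes to backstop, m

lam = h / p                 # de Broglie wavelength of the electron
delta_p = p * d / l         # recoil difference between the hole-1 and hole-2 paths
delta_x = h / delta_p       # plate-position uncertainty if delta_p is resolved
fringe = lam * l / d        # spacing of the interference maxima at the backstop

print(delta_x, fringe)      # the same in this estimate: the pattern is smeared out
```

In this crude estimate the smearing comes out exactly equal to the fringe spacing; the honest statement is only that the two are of the same order, which is enough to wash out the wiggles.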
38–1 Probability wave amplitudes
In this chapter we shall discuss the relationship of the wave and particle viewpoints. We already know, from the last chapter, that neither the wave viewpoint nor the particle viewpoint is correct. Usually we have tried to present things accurately, or at least precisely enough that they will not have to be changed when we learn more—it may be extended, but it will not be changed! But when we try to talk about the wave picture or the particle picture, both are approximate, and both will change. Therefore what we learn in this chapter will not be accurate in a certain sense; it is a kind of half-intuitive argument that will be made more precise later, but certain things will be changed a little bit when we interpret them correctly in quantum mechanics. The reason for doing such a thing, of course, is that we are not going to go directly into quantum mechanics, but we want to have at least some idea of the kinds of effects that we will find. Furthermore, all our experiences are with waves and with particles, and so it is rather handy to use the wave and particle ideas to get some understanding of what happens in given circumstances before we know the complete mathematics of the quantum-mechanical amplitudes. We shall try to illustrate the weakest places as we go along, but most of it is very nearly correct—it is just a matter of interpretation. First of all, we know that the new way of representing the world in quantum mechanics—the new framework—is to give an amplitude for every event that can occur, and if the event involves the reception of one particle then we can give the amplitude to find that one particle at different places and at different times. The probability of finding the particle is then proportional to the absolute square of the amplitude. In general, the amplitude to find a particle in different places at different times varies with position and time. 
In a special case the amplitude varies sinusoidally in space and time like $e^{i(\omega t - \FLPk\cdot\FLPr)}$ (do not forget that these amplitudes are complex numbers, not real numbers) and involves a definite frequency $\omega$ and wave number $\FLPk$. Then it turns out that this corresponds to a classical limiting situation where we would have believed that we have a particle whose energy $E$ was known and is related to the frequency by \begin{equation} \label{Eq:I:38:1} E=\hbar\omega, \end{equation} and whose momentum $\FLPp$ is also known and is related to the wave number by \begin{equation} \label{Eq:I:38:2} \FLPp=\hbar\FLPk. \end{equation} This means that the idea of a particle is limited. The idea of a particle—its location, its momentum, etc.—which we use so much, is in certain ways unsatisfactory. For instance, if an amplitude to find a particle at different places is given by $e^{i(\omega t - \FLPk\cdot\FLPr)}$, whose absolute square is a constant, that would mean that the probability of finding a particle is the same at all points. That means we do not know where it is—it can be anywhere—there is a great uncertainty in its location. On the other hand, if the position of a particle is more or less well known and we can predict it fairly accurately, then the probability of finding it in different places must be confined to a certain region, whose length we call $\Delta x$. Outside this region, the probability is zero. Now this probability is the absolute square of an amplitude, and if the absolute square is zero, the amplitude is also zero, so that we have a wave train whose length is $\Delta x$ (Fig. 38–1), and the wavelength (the distance between nodes of the waves in the train) of that wave train is what corresponds to the particle momentum. Here we encounter a strange thing about waves; a very simple thing which has nothing to do with quantum mechanics strictly. 
It is something that anybody who works with waves, even if he knows no quantum mechanics, knows: namely, we cannot define a unique wavelength for a short wave train. Such a wave train does not have a definite wavelength; there is an indefiniteness in the wave number that is related to the finite length of the train, and thus there is an indefiniteness in the momentum.
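Equations (38.1) and (38.2) can be exercised numerically. As an illustration (the 100-eV electron is my own choice, not from the text), here is the momentum, wave number, and de Broglie wavelength of a free electron:

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J*s
m_e = 9.109e-31        # electron mass, kg

E = 100 * 1.602e-19            # assumed kinetic energy: 100 eV, in joules
p = math.sqrt(2 * m_e * E)     # nonrelativistic momentum
k = p / hbar                   # wave number, from p = hbar * k (Eq. 38.2)
lam = 2 * math.pi / k          # corresponding wavelength

print(f"p = {p:.3e} kg*m/s, lambda = {lam * 1e9:.4f} nm")
```

The wavelength comes out near an angstrom, comparable to atomic spacings, which is why electrons of this energy diffract strongly from crystals.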
38–2 Measurement of position and momentum
Let us consider two examples of this idea—to see the reason why there is an uncertainty in the position and/or the momentum, if quantum mechanics is right. We have also seen before that if there were not such a thing—if it were possible to measure the position and the momentum of anything simultaneously—we would have a paradox; it is fortunate that we do not have such a paradox, and the fact that such an uncertainty comes naturally from the wave picture shows that everything is mutually consistent. Here is one example which shows the relationship between the position and the momentum in a circumstance that is easy to understand. Suppose we have a single slit, and particles are coming from very far away with a certain energy—so that they are all coming essentially horizontally (Fig. 38–2). We are going to concentrate on the vertical components of momentum. All of these particles have a certain horizontal momentum $p_0$, say, in a classical sense. So, in the classical sense, the vertical momentum $p_y$, before the particle goes through the hole, is definitely known. The particle is moving neither up nor down, because it came from a source that is far away—and so the vertical momentum is of course zero. But now let us suppose that it goes through a hole whose width is $B$. Then after it has come out through the hole, we know the position vertically—the $y$ position—with considerable accuracy—namely $\pm B$. That is, the uncertainty in position, $\Delta y$, is of order $B$. Now we might also want to say, since we know the momentum is absolutely horizontal, that $\Delta p_y$ is zero; but that is wrong. We once knew the momentum was horizontal, but we do not know it any more. Before the particles passed through the hole, we did not know their vertical positions. Now that we have found the vertical position by having the particle come through the hole, we have lost our information on the vertical momentum! Why? 
According to the wave theory, there is a spreading out, or diffraction, of the waves after they go through the slit, just as for light. Therefore there is a certain probability that particles coming out of the slit are not coming exactly straight. The pattern is spread out by the diffraction effect, and the angle of spread, which we can define as the angle of the first minimum, is a measure of the uncertainty in the final angle. How does the pattern become spread? To say it is spread means that there is some chance for the particle to be moving up or down, that is, to have a component of momentum up or down. We say chance and particle because we can detect this diffraction pattern with a particle counter, and when the counter receives the particle, say at $C$ in Fig. 38–2, it receives the entire particle, so that, in a classical sense, the particle has a vertical momentum, in order to get from the slit up to $C$. To get a rough idea of the spread of the momentum, the vertical momentum $p_y$ has a spread which is equal to $p_0\,\Delta\theta$, where $p_0$ is the horizontal momentum. And how big is $\Delta\theta$ in the spread-out pattern? We know that the first minimum occurs at an angle $\Delta\theta$ such that the waves from one edge of the slit have to travel one wavelength farther than the waves from the other side—we worked that out before (Chapter 30). Therefore $\Delta\theta$ is $\lambda/B$, and so $\Delta p_y$ in this experiment is $p_0\lambda/B$. Note that if we make $B$ smaller and make a more accurate measurement of the position of the particle, the diffraction pattern gets wider. Remember, when we closed the slits on the experiment with the microwaves, we had more intensity farther out. So the narrower we make the slit, the wider the pattern gets, and the more is the likelihood that we would find that the particle has sidewise momentum. Thus the uncertainty in the vertical momentum is inversely proportional to the uncertainty of $y$. 
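That inverse proportionality can be tabulated directly from $\Delta\theta = \lambda/B$ and $\Delta p_y = p_0\,\Delta\theta$ (the momentum value below is assumed for illustration):

```python
# Narrowing the slit widens the momentum spread; their product stays fixed.
h = 6.626e-34   # Planck's constant, J*s
p0 = 1.7e-24    # assumed horizontal momentum, kg*m/s
lam = h / p0    # de Broglie wavelength of the particles

for B in (1e-6, 1e-7, 1e-8):        # slit widths, m
    delta_theta = lam / B           # angle of the first diffraction minimum
    delta_py = p0 * delta_theta     # spread in vertical momentum
    print(f"B = {B:.0e} m: spread = {delta_py:.2e}, product B*spread = {B * delta_py:.3e}")
```

Every row shows the same product, $p_0\lambda$, no matter how the slit width is chosen.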
In fact, we see that the product of the two is equal to $p_0\lambda$. But $\lambda$ is the wavelength and $p_0$ is the momentum, and in accordance with quantum mechanics, the wavelength times the momentum is Planck’s constant $h$. So we obtain the rule that the uncertainties in the vertical momentum and in the vertical position have a product of the order $h$: \begin{equation} \label{Eq:I:38:3} \Delta y\,\Delta p_y\geq\hbar/2. \end{equation} We cannot prepare a system in which we know the vertical position of a particle and can predict how it will move vertically with greater certainty than given by (38.3). That is, the uncertainty in the vertical momentum must exceed $\hbar/2\Delta y$, where $\Delta y$ is the uncertainty in our knowledge of the position. Sometimes people say quantum mechanics is all wrong. When the particle arrived from the left, its vertical momentum was zero. And now that it has gone through the slit, its position is known. Both position and momentum seem to be known with arbitrary accuracy. It is quite true that we can receive a particle, and on reception determine what its position is and what its momentum would have had to have been to have gotten there. That is true, but that is not what the uncertainty relation (38.3) refers to. Equation (38.3) refers to the predictability of a situation, not remarks about the past. It does no good to say “I knew what the momentum was before it went through the slit, and now I know the position,” because now the momentum knowledge is lost. The fact that it went through the slit no longer permits us to predict the vertical momentum. We are talking about a predictive theory, not just measurements after the fact. So we must talk about what we can predict. Now let us take the thing the other way around. Let us take another example of the same phenomenon, a little more quantitatively. In the previous example we measured the momentum by a classical method. 
Namely, we considered the direction and the velocity and the angles, etc., so we got the momentum by classical analysis. But since momentum is related to wave number, there exists in nature still another way to measure the momentum of a particle—photon or otherwise—which has no classical analog, because it uses Eq. (38.2). We measure the wavelengths of the waves. Let us try to measure momentum in this way. Suppose we have a grating with a large number of lines (Fig. 38–3), and send a beam of particles at the grating. We have often discussed this problem: if the particles have a definite momentum, then we get a very sharp pattern in a certain direction, because of the interference. And we have also talked about how accurately we can determine that momentum, that is to say, what the resolving power of such a grating is. Rather than derive it again, we refer to Chapter 30, where we found that the relative uncertainty in the wavelength that can be measured with a given grating is $1/Nm$, where $N$ is the number of lines on the grating and $m$ is the order of the diffraction pattern. That is, \begin{equation} \label{Eq:I:38:4} \Delta\lambda/\lambda=1/Nm. \end{equation} Now formula (38.4) can be rewritten as \begin{equation} \label{Eq:I:38:5} \Delta\lambda/\lambda^2=1/Nm\lambda=1/L, \end{equation} where $L$ is the distance shown in Fig. 38–3. This distance is the difference between the total distance that the particle or wave or whatever it is has to travel if it is reflected from the bottom of the grating, and the distance that it has to travel if it is reflected from the top of the grating. That is, the waves which form the diffraction pattern are waves which come from different parts of the grating. 
The first ones that arrive come from the bottom end of the grating, from the beginning of the wave train, and the rest of them come from later parts of the wave train, coming from different parts of the grating, until the last one finally arrives, and that involves a point in the wave train a distance $L$ behind the first point. So in order that we shall have a sharp line in our spectrum corresponding to a definite momentum, with an uncertainty given by (38.4), we have to have a wave train of at least length $L$. If the wave train is too short we are not using the entire grating. The waves which form the spectrum are being reflected from only a very short sector of the grating if the wave train is too short, and the grating will not work right—we will find a big angular spread. In order to get a narrower one, we need to use the whole grating, so that at least at some moment the whole wave train is scattering simultaneously from all parts of the grating. Thus the wave train must be of length $L$ in order to have an uncertainty in the wavelength less than that given by (38.5). Incidentally, \begin{equation} \label{Eq:I:38:6} \Delta\lambda/\lambda^2=\Delta(1/\lambda)=\Delta k/2\pi. \end{equation} Therefore \begin{equation} \label{Eq:I:38:7} \Delta k = 2\pi/L, \end{equation} where $L$ is the length of the wave train. This means that if we have a wave train whose length is less than $L$, the uncertainty in the wave number must exceed $2\pi/L$. Or the uncertainty in a wave number times the length of the wave train—we will call that for a moment $\Delta x$—exceeds $2\pi$. We call it $\Delta x$ because that is the uncertainty in the location of the particle. If the wave train exists only in a finite length, then that is where we could find the particle, within an uncertainty $\Delta x$. 
Now this property of waves, that the length of the wave train times the uncertainty of the wave number associated with it is at least $2\pi$, is a property that is known to everyone who studies them. It has nothing to do with quantum mechanics. It is simply that if we have a finite train, we cannot count the waves in it very precisely. Let us try another way to see the reason for that. Suppose that we have a finite train of length $L$; then because of the way it has to decrease at the ends, as in Fig. 38–1, the number of waves in the length $L$ is uncertain by something like $\pm1$. But the number of waves in $L$ is $kL/2\pi$. Thus $k$ is uncertain, and we again get the result (38.7), a property merely of waves. The same thing works whether the waves are in space and $k$ is the number of radians per centimeter and $L$ is the length of the train, or the waves are in time and $\omega$ is the number of radians per second and $T$ is the “length” in time that the wave train comes in. That is, if we have a wave train lasting only for a certain finite time $T$, then the uncertainty in the frequency is given by \begin{equation} \label{Eq:I:38:8} \Delta\omega=2\pi/T. \end{equation} We have tried to emphasize that these are properties of waves alone, and they are well known, for example, in the theory of sound. The point is that in quantum mechanics we interpret the wave number as being a measure of the momentum of a particle, with the rule that $p = \hbar k$, so that relation (38.7) tells us that $\Delta p\approx h/\Delta x$. This, then, is a limitation of the classical idea of momentum. (Naturally, it has to be limited in some ways if we are going to represent particles by waves!) It is nice that we have found a rule that gives us some idea of when there is a failure of classical ideas.
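The statement that a train of length $L$ carries a wave-number spread of about $2\pi/L$ can be demonstrated numerically with a Fourier transform; no quantum mechanics is involved. The train lengths and wave number below are arbitrary choices of mine.

```python
import numpy as np

def k_width(L, k0=50.0):
    """Half-power width, in wave number, of the spectrum of a cosine train of length L."""
    x = np.linspace(-40.0, 40.0, 2**16)
    train = np.cos(k0 * x) * (np.abs(x) < L / 2)     # finite train, zero elsewhere
    spec = np.abs(np.fft.rfft(train))
    k = 2 * np.pi * np.fft.rfftfreq(x.size, d=x[1] - x[0])
    above = k[spec > spec.max() / 2]                 # main lobe of the spectrum
    return above.max() - above.min()

w_short, w_long = k_width(2.0), k_width(8.0)
print(w_short, w_long)   # quadrupling L cuts the spread by about four
```

The measured widths scale like $1/L$, which is relation (38.7) up to a numerical factor that depends on how "width" is defined.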
38–3 Crystal diffraction
Next let us consider the reflection of particle waves from a crystal. A crystal is a thick thing which has a whole lot of similar atoms—we will include some complications later—in a nice array. The question is how to set the array so that we get a strong reflected maximum in a given direction for a given beam of, say, light (x-rays), electrons, neutrons, or anything else. In order to obtain a strong reflection, the scattering from all of the atoms must be in phase. There cannot be equal numbers in phase and out of phase, or the waves will cancel out. The way to arrange things is to find the regions of constant phase, as we have already explained; they are planes which make equal angles with the initial and final directions (Fig. 38–4). If we consider two parallel planes, as in Fig. 38–4, the waves scattered from the two planes will be in phase provided the difference in distance travelled by a wavefront is an integral number of wavelengths. This difference can be seen to be $2d\sin\theta$, where $d$ is the perpendicular distance between the planes. Thus the condition for coherent reflection is \begin{equation} \label{Eq:I:38:9} 2d\sin\theta=n\lambda\quad (n=1,2,\dotsc). \end{equation} If, for example, the crystal is such that the atoms happen to lie on planes obeying condition (38.9) with $n = 1$, then there will be a strong reflection. If, on the other hand, there are other atoms of the same nature (equal in density) halfway between, then the intermediate planes will also scatter equally strongly and will interfere with the others and produce no effect. So $d$ in (38.9) must refer to adjacent planes; we cannot take a plane five layers farther back and use this formula! As a matter of interest, actual crystals are not usually as simple as a single kind of atom repeated in a certain way. Instead, if we make a two-dimensional analog, they are much like wallpaper, in which there is some kind of figure which repeats all over the wallpaper. 
By “figure” we mean, in the case of atoms, some arrangement—calcium and a carbon and three oxygens, etc., for calcium carbonate, and so on—which may involve a relatively large number of atoms. But whatever it is, the figure is repeated in a pattern. This basic figure is called a unit cell. The basic pattern of repetition defines what we call the lattice type; the lattice type can be immediately determined by looking at the reflections and seeing what their symmetry is. In other words, where we find any reflections at all determines the lattice type, but in order to determine what is in each of the elements of the lattice one must take into account the intensity of the scattering at the various directions. Which directions scatter depends on the type of lattice, but how strongly each scatters is determined by what is inside each unit cell, and in that way the structure of crystals is worked out. Two photographs of x-ray diffraction patterns are shown in Figs. 38–5 and 38–6; they illustrate scattering from rock salt and myoglobin, respectively. Incidentally, an interesting thing happens if the spacings of the nearest planes are less than $\lambda/2$. In this case (38.9) has no solution for $n$. Thus if $\lambda$ is bigger than twice the distance between adjacent planes then there is no side diffraction pattern, and the light—or whatever it is—will go right through the material without bouncing off or getting lost. So in the case of light, where $\lambda$ is much bigger than the spacing, of course it does go through and there is no pattern of reflection from the planes of the crystal. This fact also has an interesting consequence in the case of piles which make neutrons (these are obviously particles, for anybody’s money!). If we take these neutrons and let them into a long block of graphite, the neutrons diffuse and work their way along (Fig. 38–7). 
They diffuse because they are bounced by the atoms, but strictly, in the wave theory, they are bounced by the atoms because of diffraction from the crystal planes. It turns out that if we take a very long piece of graphite, the neutrons that come out the far end are all of long wavelength! In fact, if one plots the intensity as a function of wavelength, we get nothing except for wavelengths longer than a certain minimum (Fig. 38–8). In other words, we can get very slow neutrons that way. Only the slowest neutrons come through; they are not diffracted or scattered by the crystal planes of the graphite, but keep going right through like light through glass, and are not scattered out the sides. There are many other demonstrations of the reality of neutron waves and waves of other particles.
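The cutoff can be estimated from the Bragg condition. Here is a sketch with nominal values (the graphite plane spacing of about $3.35$ angstroms and the constants are standard figures supplied by me, not by the text): once $\lambda > 2d$, no plane can satisfy $2d\sin\theta = n\lambda$, so only neutrons slower than the corresponding energy pass through unscattered.

```python
import math

h = 6.626e-34      # Planck's constant, J*s
m_n = 1.675e-27    # neutron mass, kg
d = 3.35e-10       # largest interplanar spacing in graphite, m (nominal value)

lam_min = 2 * d                  # wavelengths longer than this cannot Bragg-scatter
p_max = h / lam_min              # the fastest neutrons that pass unscattered
E_max = p_max**2 / (2 * m_n)     # corresponding kinetic energy, joules

print(f"cutoff wavelength = {lam_min * 1e10:.1f} angstrom, "
      f"energy = {E_max / 1.602e-19 * 1000:.1f} meV")
```

Only neutrons of a few millielectronvolts survive the trip, which is the long-wavelength tail sketched in Fig. 38–8.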
38–4 The size of an atom
We now consider another application of the uncertainty relation, Eq. (38.3). It must not be taken too seriously; the idea is right but the analysis is not very accurate. The idea has to do with the determination of the size of atoms, and the fact that, classically, the electrons would radiate light and spiral in until they settle down right on top of the nucleus. But that cannot be right quantum-mechanically because then we would know where each electron was and how fast it was moving. Suppose we have a hydrogen atom, and measure the position of the electron; we must not be able to predict exactly where the electron will be, or the momentum spread will then turn out to be infinite. Every time we look at the electron, it is somewhere, but it has an amplitude to be in different places so there is a probability of it being found in different places. These places cannot all be at the nucleus; we shall suppose there is a spread in position of order $a$. That is, the distance of the electron from the nucleus is usually about $a$. We shall determine $a$ by minimizing the total energy of the atom. The spread in momentum is roughly $\hbar/a$ because of the uncertainty relation, so that if we try to measure the momentum of the electron in some manner, such as by scattering x-rays off it and looking for the Doppler effect from a moving scatterer, we would expect not to get zero every time—the electron is not standing still—but the momenta must be of the order $p \approx \hbar/a$. Then the kinetic energy is roughly $\tfrac{1}{2}mv^2 = p^2/2m = \hbar^2/2ma^2$. (In a sense, this is a kind of dimensional analysis to find out in what way the kinetic energy depends upon the reduced Planck constant, upon $m$, and upon the size of the atom. We need not trust our answer to within factors like $2$, $\pi$, etc. We have not even defined $a$ very precisely.) 
Now the potential energy is minus $e^2$ over the distance from the center, say $-e^2/a$, where, we remember, $e^2$ is the charge of an electron squared, divided by $4\pi\epsilon_0$. Now the point is that the potential energy is reduced if $a$ gets smaller, but the smaller $a$ is, the higher the momentum required, because of the uncertainty principle, and therefore the higher the kinetic energy. The total energy is \begin{equation} \label{Eq:I:38:10} E=\hbar^2/2ma^2-e^2/a. \end{equation} We do not know what $a$ is, but we know that the atom is going to arrange itself to make some kind of compromise so that the energy is as little as possible. In order to minimize $E$, we differentiate with respect to $a$, set the derivative equal to zero, and solve for $a$. The derivative of $E$ is \begin{equation} \label{Eq:I:38:11} dE/da=-\hbar^2/ma^3+e^2/a^2, \end{equation} and setting $dE/da = 0$ gives for $a$ the value \begin{align} a_0=\hbar^2/me^2 &=0.528\text{ angstrom},\notag\\[.5ex] \label{Eq:I:38:12} &=0.528\times10^{-10}\text{ meter}. \end{align} This particular distance is called the Bohr radius, and we have thus learned that atomic dimensions are of the order of angstroms, which is right: This is pretty good—in fact, it is amazing, since until now we have had no basis for understanding the size of atoms! Atoms are completely impossible from the classical point of view, since the electrons would spiral into the nucleus. Now if we put the value (38.12) for $a_0$ into (38.10) to find the energy, it comes out \begin{equation} \label{Eq:I:38:13} E_0=-e^2/2a_0=-me^4/2\hbar^2=-13.6\text{ eV}. \end{equation} What does a negative energy mean? It means that the electron has less energy when it is in the atom than when it is free. It means it is bound. It means it takes energy to kick the electron out; it takes energy of the order of $13.6$ eV to ionize a hydrogen atom. 
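The arithmetic above can be carried through numerically. The sketch below (assuming standard SI values for the constants, and writing `e2` for $e^2/4\pi\epsilon_0$ in the text's convention) reproduces the Bohr radius and the Rydberg energy, and confirms by a brute-force scan that $E(a)$ of Eq. (38.10) really is minimized near $a_0$:

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J.s
M_E = 9.1093837015e-31    # electron mass, kg
Q_E = 1.602176634e-19     # elementary charge, C
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
EV = 1.602176634e-19      # joules per electron volt

# "e squared" in the text's convention: charge squared over 4*pi*eps0, in J.m
e2 = Q_E**2 / (4 * math.pi * EPS0)

def E(a):
    """Total energy of Eq. (38.10): kinetic term minus Coulomb term."""
    return HBAR**2 / (2 * M_E * a**2) - e2 / a

# Closed-form minimum from setting dE/da = 0, Eq. (38.12)
a0 = HBAR**2 / (M_E * e2)
E0 = E(a0)
print(f"a0 = {a0:.4e} m")          # about 0.53 angstrom
print(f"E0 = {E0 / EV:.2f} eV")    # about -13.6 eV, one Rydberg

# Brute-force check: scan a from 0.5*a0 to 1.5*a0 and confirm the minimum is at a0
best_E, best_a = min((E(a0 * (0.5 + 0.01 * k)), a0 * (0.5 + 0.01 * k))
                     for k in range(101))
assert abs(best_a - a0) < 0.01 * a0
```

Note that the numerical constants give $a_0 \approx 0.529$ angstrom, a hair larger than the rounded $0.528$ quoted above; either way the point stands that the estimate lands on the correct atomic scale.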
We have no reason to think that it is not two or three times this—or half of this—or $(1/\pi)$ times this, because we have used such a sloppy argument. However, we have cheated, we have used all the constants in such a way that it happens to come out the right number! This number, $13.6$ electron volts, is called a Rydberg of energy; it is the ionization energy of hydrogen. So we now understand why we do not fall through the floor. As we walk, our shoes with their masses of atoms push against the floor with its mass of atoms. In order to squash the atoms closer together, the electrons would be confined to a smaller space and, by the uncertainty principle, their momenta would have to be higher on the average, and that means high energy; the resistance to atomic compression is a quantum-mechanical effect and not a classical effect. Classically, we would expect that if we were to draw all the electrons and protons closer together, the energy would be reduced still further, and the best arrangement of positive and negative charges in classical physics is all on top of each other. This was well known in classical physics and was a puzzle because of the existence of the atom. Of course, the early scientists invented some ways out of the trouble—but never mind, we have the right way out, now! (Maybe.) Incidentally, although we have no reason to understand it at the moment, in a situation where there are many electrons it turns out that they try to keep away from each other. If one electron is occupying a certain space, then another does not occupy the same space. More precisely, there are two spin cases, so that two can sit on top of each other, one spinning one way and one the other way. But after that we cannot put any more there. We have to put others in another place, and that is the real reason that matter has strength. If we could put all the electrons in the same place it would condense even more than it does. 
It is the fact that the electrons cannot all get on top of each other that makes tables and everything else solid. Obviously, in order to understand the properties of matter, we will have to use quantum mechanics and not be satisfied with classical mechanics.