Volume 2, Chapter 39: Elastic Materials

39–4 Nonelastic behavior

In all that has been said so far, we have assumed that stress is proportional to strain; in general, that is not true. Figure 39–8 shows a typical stress-strain curve for a ductile material. For small strains, the stress is proportional to the strain. Eventually, however, after a certain point, the relationship between stress and strain begins to deviate from a straight line. For many materials—the ones we would call “brittle”—the object breaks for strains only a little above the point where the curve starts to bend over. In general, there are other complications in the stress-strain relationship. For example, if you strain an object, the stresses may be high at first, but decrease slowly with time. Also if you go to high stresses, but still not to the “breaking” point, when you lower the strain the stress will return along a different curve. There is a small hysteresis effect (like the one we saw between $B$ and $H$ in magnetic materials).

The stress at which a material will break varies widely from one material to another. Some materials will break when the maximum tensile stress reaches a certain value. Other materials will fail when the maximum shear stress reaches a certain value. Chalk is an example of a material which is much weaker in tension than in shear. If you pull on the ends of a piece of blackboard chalk, the chalk will break perpendicular to the direction of the applied stress, as shown in Fig. 39–9(a). It breaks perpendicular to the applied force because it is only a bunch of particles packed together which are easily pulled apart. The material is, however, much harder to shear, because the particles get in each other’s way.

Now you will remember that when we had a rod in torsion there was a shear all around it. Also, we showed that a shear was equivalent to a combination of a tension and compression at $45^\circ$. 
For these reasons, if you twist a piece of blackboard chalk, it will break along a complicated surface which starts out at $45^\circ$ to the axis. A photograph of a piece of chalk broken in this way is shown in Fig. 39–9(b). The chalk breaks where the material is in maximum tension.

Other materials behave in strange and complicated ways. The more complicated the materials are, the more interesting their behavior. If we take a sheet of “Saran-Wrap” and crumple it up into a ball and throw it on the table, it slowly unfolds itself and returns toward its original flat form. At first sight, we might be tempted to think that it is inertia which prevents it from returning to its original form. However, a simple calculation shows that the inertia is several orders of magnitude too small to account for the effect. There appear to be two important competing effects: “something” inside the material “remembers” the shape it had initially and “tries” to get back there, but something else “prefers” the new shape and “resists” the return to the old shape. We will not attempt to describe the mechanism at play in the Saran plastic, but you can get an idea of how such an effect might come about from the following model. Suppose you imagine a material made of long, flexible, but strong, fibers mixed together with some hollow cells filled with a viscous liquid. Imagine also that there are narrow pathways from one cell to the next so the liquid can leak slowly from a cell to its neighbor. When we crumple a sheet of this stuff, we distort the long fibers, squeezing the liquid out of the cells in one place and forcing it into other cells which are being stretched. When we let go, the long fibers try to return to their original shape. But to do this, they have to force the liquid back to its original location—which will happen relatively slowly because of the viscosity. The forces we apply in crumpling the sheet are much larger than the forces exerted by the fibers. 
We can crumple the sheet quickly, but it will return more slowly. It is undoubtedly a combination of large stiff molecules and smaller, movable ones in the Saran-Wrap that is responsible for its behavior. This idea also fits with the fact that the material returns more quickly to its original shape when it’s warmed up than when it’s cold—the heat increases the mobility (decreases the viscosity) of the smaller molecules.

Although we have been discussing how Hooke’s law breaks down, the remarkable thing is perhaps not that Hooke’s law breaks down for large strains but that it should be so generally true. We can get some idea of why this might be by looking at the strain energy in a material. To say that the stress is proportional to the strain is the same thing as saying that the strain energy varies as the square of the strain. Suppose we have a rod and we twist it through a small angle $\theta$. If Hooke’s law holds, the strain energy should be proportional to the square of $\theta$. Suppose we were to assume that the energy were some arbitrary function of the angle; we could write it as a Taylor expansion about zero angle \begin{equation} \label{Eq:II:39:40} U(\theta)=U(0)+U'(0)\,\theta+\tfrac{1}{2}U''(0)\,\theta^2+ \tfrac{1}{6}U'''(0)\,\theta^3+\dotsb \end{equation}
The torque $\tau$ is the derivative of $U$ with respect to angle; we would have \begin{equation} \label{Eq:II:39:41} \tau(\theta)=U'(0)+U''(0)\,\theta+\!\tfrac{1}{2}U'''(0)\,\theta^2\!+\dotsb \end{equation} Now if we measure our angles from the equilibrium position, the first term is zero. So the first remaining term is proportional to $\theta$; and for small enough angles, it will dominate the term in $\theta^2$. [Actually, materials are sufficiently symmetric internally so that $\tau(\theta)=-\tau(-\theta)$; the term in $\theta^2$ will be zero, and the departures from linearity would come only from the $\theta^3$ term. There is, however, no reason why this should be true for compressions and tensions.] The thing we have not explained is why materials usually break soon after the higher-order terms become significant.
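As a quick check of the expansion (39.41), here is a small sketch using Python's sympy. The energy function $U(\theta)=1-\cos\theta$ is a made-up illustration chosen only because it has its minimum at $\theta=0$ and is even in $\theta$, mimicking a symmetric material:

```python
import sympy as sp

theta = sp.symbols('theta')

# Hypothetical strain-energy function with its minimum at theta = 0;
# it is even in theta, like the symmetric materials discussed above.
U = 1 - sp.cos(theta)

tau = sp.diff(U, theta)                        # torque = dU/dtheta
expansion = sp.series(tau, theta, 0, 4).removeO()
print(expansion)   # linear in theta for small angles; no theta**2 term
```

The expansion comes out as $\theta-\theta^3/6$: the torque is linear for small angles, the $\theta^2$ term vanishes because this $U$ is even, and the first departure from Hooke's law is the $\theta^3$ term, just as the bracketed remark says.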
39–5 Calculating the elastic constants

As our last topic on elasticity we would like to show how one could try to calculate the elastic constants of a material, starting with some knowledge of the properties of the atoms which make up the material. We will take only the simple case of an ionic cubic crystal like sodium chloride. When a crystal is strained, its volume or its shape is changed. Such changes result in an increase in the potential energy of the crystal. To calculate the change in strain energy, we have to know where each atom goes. In complicated crystals, the atoms will rearrange themselves in the lattice in very complicated ways to make the total energy as small as possible. This makes the computation of the strain energy rather difficult. In the case of a simple cubic crystal, however, it is easy to see what will happen. The distortions inside the crystal will be geometrically similar to the distortions of the outside boundaries of the crystal.

We can calculate the elastic constants for a cubic crystal in the following way. First, we assume some force law between each pair of atoms in the crystal. Then, we calculate the change in the internal energy of the crystal when it is distorted from its equilibrium shape. This gives us a relation between the energy and the strains which is quadratic in all the strains. Comparing the energy obtained this way with Eq. (39.13), we can identify the coefficient of each term with the elastic constants $C_{ijkl}$. For our example we will assume a simple force law: that the force between neighboring atoms is a central force, by which we mean that it acts along the line between the two atoms. We would expect the forces in ionic crystals to be like this, since they are primarily Coulomb forces. (The forces of covalent bonds are usually more complicated, since they can exert a sideways push on a nearby atom; we will leave out this complication.) 
We are also going to include only the forces between each atom and its nearest and next-nearest neighbors. In other words, we will make an approximation which neglects all forces beyond the next-nearest neighbor. The forces we will include are shown for the $xy$-plane in Fig. 39–10(a). The corresponding forces in the $yz$- and $zx$-planes also have to be included. Since we are only interested in the elastic coefficients which apply to small strains, and therefore only want the terms in the energy which vary quadratically with the strains, we can imagine that the force between each atom pair varies linearly with the displacements. We can then imagine that each pair of atoms is joined by a linear spring, as drawn in Fig. 39–10(b). All of the springs between a sodium atom and a chlorine atom should have the same spring constant, say $k_1$. The springs between two sodiums and between two chlorines could have different constants, but we will make our discussion simpler by taking them equal; we call them $k_2$. (We could come back later and make them different after we have seen how the calculations go.) Now we assume that the crystal is distorted by a homogeneous strain described by the strain tensor $e_{ij}$. In general, it will have components involving $x$, $y$, and $z$; but we will consider now only a strain with the three components $e_{xx}$, $e_{xy}$, and $e_{yy}$ so that it will be easy to visualize. If we pick one atom as our origin, the displacement of every other atom is given by equations like Eq. (39.9): \begin{equation} \begin{aligned} u_x&=e_{xx}x+e_{xy}y,\\ u_y&=e_{xy}x+e_{yy}y. \end{aligned} \label{Eq:II:39:42} \end{equation} Suppose we call the atom at $x=y=0$ “atom $1$” and number its neighbors in the $xy$-plane as shown in Fig. 39–11. Calling the lattice constant $a$, we get the $x$ and $y$ displacements $u_x$ and $u_y$ listed in Table 39–1. 
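Equation (39.42) is easy to make concrete. The sketch below (with a made-up lattice constant and made-up small strain components) evaluates the displacement of a few neighbor atoms; for instance, the atom at $(a,0)$ is displaced by $(e_{xx}a,\,e_{xy}a)$:

```python
# Displacement field of a homogeneous strain, Eq. (39.42):
#   u_x = e_xx * x + e_xy * y
#   u_y = e_xy * x + e_yy * y
def displacement(x, y, e_xx, e_xy, e_yy):
    return (e_xx * x + e_xy * y, e_xy * x + e_yy * y)

a = 1.0                                            # lattice constant (illustrative units)
strain = dict(e_xx=0.01, e_xy=0.005, e_yy=-0.02)   # made-up small strains

# A few of the neighbors of atom 1 (which sits at the origin) in the xy-plane:
for (x, y) in [(a, 0), (a, a), (0, a), (-a, a)]:
    print((x, y), displacement(x, y, **strain))
```

This is exactly the bookkeeping that produces the table of displacements for the numbered neighbors of atom $1$.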
Now we can calculate the energy stored in the springs, which is $k/2$ times the square of the extension for each spring. For example, the energy in the horizontal spring between atom $1$ and atom $2$ is \begin{equation} \label{Eq:II:39:43} \frac{k_1(e_{xx}a)^2}{2}. \end{equation} Note that to first order, the $y$-displacement of atom $2$ does not change the length of the spring between atom $1$ and atom $2$. To get the strain energy in a diagonal spring, such as that to atom $3$, however, we need to calculate the change in length due to both the horizontal and vertical displacements. For small displacements from the original cube, we can write the change in the distance to atom $3$ as the sum of the components of $u_x$ and $u_y$ in the diagonal direction, namely as \begin{equation*} \frac{1}{\sqrt{2}}\,(u_x+u_y). \end{equation*} Using the values of $u_x$ and $u_y$ from the table, we get the energy \begin{equation} \label{Eq:II:39:44} \frac{k_2}{2}\!\biggl(\!\frac{u_x\!+u_y}{\sqrt{2}}\!\biggr)^2\!\!\!=\! \frac{k_2a^2}{4}(e_{xx}\!+e_{yx}\!+e_{xy}\!+e_{yy})^2. \end{equation} For the total energy for all the springs in the $xy$-plane, we need the sum of eight terms like (39.43) and (39.44). Calling this energy $U_0$, we get \begin{align} U_0=\,&\frac{a^2}{2}\bigg\{\!k_1e_{xx}^2\!+\! \frac{k_2}{2}(e_{xx}\!+e_{yx}\!+e_{xy}\!+e_{yy})^2\notag\\[-2pt] &+k_1e_{yy}^2\!+\! \frac{k_2}{2}(e_{xx}\!-e_{yx}\!-e_{xy}\!+e_{yy})^2\notag\\ &+k_1e_{xx}^2\!+\! \frac{k_2}{2}(e_{xx}\!+e_{yx}\!+e_{xy}\!+e_{yy})^2\notag\\ \label{Eq:II:39:45} &+k_1e_{yy}^2\!+\! \frac{k_2}{2}(e_{xx}\!-e_{yx}\!-e_{xy}\!+e_{yy})^2\!\biggr\}. \end{align} To get the total energy of all the springs connected to atom $1$, we must make one addition to the energy in Eq. (39.45). Even though we have only $x$- and $y$-components of the strain, there are still some energies associated with the next-nearest neighbors off the $xy$-plane. 
This additional energy is \begin{equation} \label{Eq:II:39:46} k_2(e_{xx}^2a^2+e_{yy}^2a^2). \end{equation} The elastic constants are related to the energy density $w$ by Eq. (39.13). The energy we have calculated is the energy associated with one atom, or rather, it is twice the energy per atom, since one-half of the energy of each spring should be assigned to each of the two atoms it joins. Since there are $1/a^3$ atoms per unit volume, $w$ and $U_0$ are related by \begin{equation*} w=\frac{U_0}{2a^3}. \end{equation*} To find the elastic constants $C_{ijkl}$, we need only to expand out the squares in Eq. (39.45)—adding the terms of (39.46)—and compare the coefficients of $e_{ij}e_{kl}$ with the corresponding coefficient in Eq. (39.13). For example, collecting the terms in $e_{xx}^2$ and in $e_{yy}^2$, we get the factor \begin{equation*} (k_1+2k_2)a^2, \end{equation*} so \begin{equation*} C_{xxxx}=C_{yyyy}=\frac{k_1+2k_2}{a}. \end{equation*} For the remaining terms, there is a slight complication. Since we cannot distinguish the product of two terms like $e_{xx}e_{yy}$, from $e_{yy}e_{xx}$, the coefficient of such terms in our energy is equal to the sum of two terms in Eq. (39.13). The coefficient of $e_{xx}e_{yy}$ in Eq. (39.45) is $2k_2$, so we have that \begin{equation*} (C_{xxyy}+C_{yyxx})=\frac{2k_2}{a}. \end{equation*} But because of the symmetry in our crystal, $C_{xxyy}=C_{yyxx}$, so we have that \begin{equation*} C_{xxyy}=C_{yyxx}=\frac{k_2}{a}. \end{equation*} By a similar process, we can also get \begin{equation*} C_{xyxy}=C_{yxyx}=\frac{k_2}{a}. \end{equation*} Finally, you will notice that any term which involves either $x$ or $y$ only once is zero—as we concluded earlier from symmetry arguments. Summarizing our results: \begin{equation} \begin{aligned} C_{xxxx}&=C_{yyyy}=\frac{k_1+2k_2}{a},\\[-.5ex] C_{xyxy}&=C_{yxyx}=\frac{k_2}{a},\\[-.5ex] C_{xxyy}&=C_{yyxx}=C_{xyyx}=C_{yxxy}=\frac{k_2}{a},\\[.5ex] C_{xxxy}&=C_{xyyy}=\text{etc.}=0. 
\end{aligned} \label{Eq:II:39:47} \end{equation} We have been able to relate the bulk elastic constants to the atomic properties which appear in the constants $k_1$ and $k_2$. In our particular case, $C_{xyxy}=C_{xxyy}$. It turns out—as you can perhaps see from the way the calculations went—that these terms are always equal for a cubic crystal, no matter how many force terms are taken into account, provided only that the forces act along the line joining each pair of atoms—that is, so long as the forces between atoms are like springs and don’t have a sideways part such as you might get from a cantilevered beam (and you do get in covalent bonds). We can check this conclusion with the experimental measurements of the elastic constants. In Table 39–2 we give the observed values of the three elastic coefficients for several cubic crystals. You will notice that $C_{xxyy}$ and $C_{xyxy}$ are, in general, not equal. The reason is that in metals like sodium and potassium the interatomic forces are not along the line joining the atoms, as we assumed in our model. Diamond does not obey the law either, because the forces in diamond are covalent forces and have some directional properties—the bonds would prefer to be at the tetrahedral angle. The ionic crystals like lithium fluoride, sodium chloride, and so on, do have nearly all the physical properties assumed in our model, and the table shows that the constants $C_{xxyy}$ and $C_{xyxy}$ are almost equal. It is not clear why silver chloride should not satisfy the condition that $C_{xxyy}=C_{xyxy}$.
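The bookkeeping from Eq. (39.45) to Eq. (39.47) can be verified symbolically. The rough sketch below, using Python's sympy, builds the spring energy (the eight in-plane terms plus the out-of-plane term), divides by $2a^3$ to get the energy density $w$, and reads off the constants from $w=\tfrac{1}{2}\sum C_{ijkl}\,e_{ij}e_{kl}$:

```python
import sympy as sp

exx, eyy, exy, eyx = sp.symbols('e_xx e_yy e_xy e_yx')
k1, k2, a = sp.symbols('k_1 k_2 a', positive=True)

# Energy of the springs attached to atom 1: the eight in-plane terms of
# Eq. (39.45) plus the out-of-plane next-nearest-neighbor term, Eq. (39.46).
A = exx + eyx + exy + eyy
B = exx - eyx - exy + eyy
U0 = (a**2/2)*(2*k1*exx**2 + 2*k1*eyy**2 + k2*A**2 + k2*B**2) \
     + k2*a**2*(exx**2 + eyy**2)

w = sp.expand(U0/(2*a**3))          # energy density, w = U0/(2 a^3)

# With w = (1/2) * sum C_ijkl e_ij e_kl, read off the constants:
C_xxxx = 2*w.coeff(exx, 2)                 # coefficient of e_xx^2 is C_xxxx/2
C_xxyy = w.coeff(exx, 1).coeff(eyy, 1)     # = (C_xxyy + C_yyxx)/2, equal by symmetry
C_xyxy = 2*w.coeff(exy, 2)                 # coefficient of e_xy^2 is C_xyxy/2
print(C_xxxx, C_xxyy, C_xyxy)
```

The output reproduces Eq. (39.47): $C_{xxxx}=(k_1+2k_2)/a$ and $C_{xxyy}=C_{xyxy}=k_2/a$, the central-force equality discussed above.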
Chapter 40: The Flow of Dry Water

40–1 Hydrostatics

The subject of the flow of fluids, and particularly of water, fascinates everybody. We can all remember, as children, playing in the bathtub or in mud puddles with the strange stuff. As we get older, we watch streams, waterfalls, and whirlpools, and we are fascinated by this substance which seems almost alive relative to solids. The behavior of fluids is in many ways very unexpected and interesting—it is the subject of this chapter and the next. The efforts of a child trying to dam a small stream flowing in the street and his surprise at the strange way the water works its way out has its analog in our attempts over the years to understand the flow of fluids. We have tried to dam the water up—in our understanding—by getting the laws and the equations that describe the flow. We will describe these attempts in this chapter. In the next chapter, we will describe the unique way in which water has broken through the dam and escaped our attempts to understand it.

We suppose that the elementary properties of water are already known to you. The main property that distinguishes a fluid from a solid is that a fluid cannot maintain a shear stress for any length of time. If a shear is applied to a fluid, it will move under the shear. Thicker liquids like honey move less easily than fluids like air or water. The measure of the ease with which a fluid yields is its viscosity. In this chapter we will consider only situations in which the viscous effects can be ignored. The effects of viscosity will be taken up in the next chapter.

We begin by considering hydrostatics, the theory of liquids at rest. When liquids are at rest, there are no shear forces (even for viscous liquids). The law of hydrostatics, therefore, is that the stresses are always normal to any surface inside the fluid. The normal force per unit area is called the pressure. 
From the fact that there is no shear in a static fluid it follows that the pressure stress is the same in all directions (Fig. 40-1). We will let you entertain yourself by proving that if there is no shear on any plane in a fluid, the pressure must be the same in any direction. The pressure in a fluid may vary from place to place. For example, in a static fluid at the earth’s surface the pressure will vary with height because of the weight of the fluid. If the density $\rho$ of the fluid is considered constant, and if the pressure at some arbitrary zero level is called $p_0$ (Fig. 40-2), then the pressure at a height $h$ above this point is $p=p_0-\rho gh$, where $g$ is the gravitational force per unit mass. The combination \begin{equation*} p+\rho gh \end{equation*} is, therefore, a constant in the static fluid. This relation is familiar to you, but we will now derive a more general result of which it is a special case. If we take a small cube of water, what is the net force on it from the pressure? Since the pressure at any place is the same in all directions, there can be a net force per unit volume only because the pressure varies from one point to another. Suppose that the pressure is varying in the $x$-direction—and we take the coordinate directions parallel to the cube edges. The pressure on the face at $x$ gives the force $p\,\Delta y\,\Delta z$ (Fig. 40-3), and the pressure on the face at $x+\Delta x$ gives the force $-[p+(\ddpl{p}{x})\,\Delta x]\,\Delta y\,\Delta z$, so that the resultant force is $-(\ddpl{p}{x})\,\Delta x\,\Delta y\,\Delta z$. If we take the remaining pairs of faces of the cube, we easily see that the pressure force per unit volume is $-\FLPgrad{p}$. If there are other forces in addition—such as gravity—then the pressure must balance them to give equilibrium. 
Let’s take a circumstance in which such an additional force can be described by a potential energy, as would be true in the case of gravitation; we will let $\phi$ stand for the potential energy per unit mass. (For gravity, for instance, $\phi$ is just $gz$.) The force per unit mass is given in terms of the potential by $-\FLPgrad{\phi}$, and if $\rho$ is the density of the fluid, the force per unit volume is $-\rho\,\FLPgrad{\phi}$. For equilibrium this force per unit volume added to the pressure force per unit volume must give zero: \begin{equation} \label{Eq:II:40:1} -\FLPgrad{p}-\rho\,\FLPgrad{\phi}=\FLPzero. \end{equation} Equation (40.1) is the equation of hydrostatics. In general, it has no solution. If the density varies in space in an arbitrary way, there is no way for the forces to be in balance, and the fluid cannot be in static equilibrium. Convection currents will start up. We can see this from the equation since the pressure term is a pure gradient, whereas for variable $\rho$ the other term is not. Only when $\rho$ is a constant is the potential term a pure gradient. Then the equation has a solution \begin{equation*} p+\rho\phi=\text{const}. \end{equation*} Another possibility which allows hydrostatic equilibrium is for $\rho$ to be a function only of $p$. However, we will leave the subject of hydrostatics because it is not nearly so interesting as the situation when fluids are in motion.
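A numerical illustration of $p+\rho gh=\text{const}$ may be useful; the surface pressure and the depth below are made-up inputs, with a water-like constant density:

```python
RHO = 1000.0    # density of water, kg/m^3 (taken constant)
G = 9.81        # gravitational force per unit mass, m/s^2

def pressure(p0, h):
    """Pressure at height h above the level where the pressure is p0."""
    return p0 - RHO * G * h

p0 = 101325.0                   # illustrative surface pressure, Pa
p_deep = pressure(p0, -10.0)    # 10 m below the reference level
print(p_deep)                   # roughly one atmosphere higher than p0

# p + rho*g*h has the same value at both levels:
assert abs(p0 - (p_deep + RHO * G * (-10.0))) < 1e-6
```

Ten meters of water adds about $\rho g h \approx 98\,000$ Pa, which is why every ten meters of depth is worth roughly one more atmosphere.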
40–2 The equations of motion

First, we will discuss fluid motions in a purely abstract, theoretical way and then consider special examples. To describe the motion of a fluid, we must give its properties at every point. For example, at different places, the water (let us call the fluid “water”) is moving with different velocities. To specify the character of the flow, therefore, we must give the three components of velocity at every point and for any time. If we can find the equations that determine the velocity, then we would know how the liquid moves at all times. The velocity, however, is not the only property that the fluid has which varies from point to point. We have just discussed the variation of the pressure from point to point. And there are still other variables. There may also be a variation of density from point to point. In addition, the fluid may be a conductor and carry an electric current whose density $\FLPj$ varies from point to point in magnitude and direction. There may be a temperature which varies from point to point, or a magnetic field, and so on. So the number of fields needed to describe the complete situation will depend on how complicated the problem is.

There are interesting phenomena when currents and magnetism play a dominant part in determining the behavior of the fluid; the subject is called magnetohydrodynamics, and great attention is being paid to it at the present time. However, we are not going to consider these more complicated situations because there are already interesting phenomena at a lower level of complexity, and even the more elementary level will be complicated enough. We will take the situation where there is no magnetic field and no conductivity, and we will not worry about the temperature because we will suppose that the density and pressure determine in a unique manner the temperature at any point. 
As a matter of fact, we will reduce the complexity of our work by making the assumption that the density is a constant—we imagine that the fluid is essentially incompressible. Putting it another way, we are supposing that the variations of pressure are so small that the changes in density produced thereby are negligible. If that is not the case, we would encounter phenomena additional to the ones we will be discussing here—for example, the propagation of sound or of shock waves. We have already discussed the propagation of sound and shocks to some extent, so we will now isolate our consideration of hydrodynamics from these other phenomena by making the approximation that the density $\rho$ is a constant. It is easy to determine when the approximation of constant $\rho$ is a good one. We can say that if the velocities of flow are much less than the speed of a sound wave in the fluid, we do not have to worry about variations in density. The escape that water makes in our attempts to understand it is not related to the approximation of constant density. The complications that do permit the escape will be discussed in the next chapter. In the general theory of fluids one must begin with an equation of state for the fluid which connects the pressure to the density. In our approximation this equation of state is simply \begin{equation*} \rho=\text{const}. \end{equation*} This then is the first relation for our variables. The next relation expresses the conservation of matter—if matter flows away from a point, there must be a decrease in the amount left behind. If the fluid velocity is $\FLPv$, then the mass which flows in a unit time across a unit area of surface is the component of $\rho\FLPv$ normal to the surface. We have had a similar relation in electricity. We also know from electricity that the divergence of such a quantity gives the rate of decrease of the density per unit time. 
In the same way, the equation \begin{equation} \label{Eq:II:40:2} \FLPdiv{(\rho\FLPv)}=-\ddp{\rho}{t} \end{equation} expresses the conservation of mass for a fluid; it is the hydrodynamic equation of continuity. In our approximation, which is the incompressible fluid approximation, $\rho$ is a constant, and the equation of continuity is simply \begin{equation} \label{Eq:II:40:3} \FLPdiv{\FLPv}=0. \end{equation} The fluid velocity $\FLPv$—like the magnetic field $\FLPB$—has zero divergence. (The hydrodynamic equations are often closely analogous to the electrodynamic equations; that’s why we studied electrodynamics first. Some people argue the other way; they think that one should study hydrodynamics first so that it will be easier to understand electricity afterwards. But electrodynamics is really much easier than hydrodynamics.) We will get our next equation from Newton’s law which tells us how the velocity changes because of the forces. The mass of an element of volume of the fluid times its acceleration must be equal to the force on the element. Taking an element of unit volume, and writing the force per unit volume as $\FLPf$, we have \begin{equation*} \rho\times(\text{acceleration})=\FLPf. \end{equation*} We will write the force density as the sum of three terms. We have already considered the pressure force per unit volume, $-\FLPgrad{p}$. Then there are the “external” forces which act at a distance—like gravity or electricity. When they are conservative forces with a potential per unit mass, $\phi$, they give a force density $-\rho\,\FLPgrad{\phi}$. (If the external forces are not conservative, we would have to write $\FLPf_{\text{ext}}$ for the external force per unit volume.) Then there is another “internal” force per unit volume, which is due to the fact that in a flowing fluid there can also be a shearing stress. This is called the viscous force, which we will write $\FLPf_{\text{visc}}$. 
Our equation of motion is \begin{equation} \label{Eq:II:40:4} \rho\times(\text{acceleration})= -\FLPgrad{p}-\rho\,\FLPgrad{\phi}+\FLPf_{\text{visc}}. \end{equation} For this chapter we are going to suppose that the liquid is “thin” in the sense that the viscosity is unimportant, so we will omit $\FLPf_{\text{visc}}$. When we drop the viscosity term, we will be making an approximation which describes some ideal stuff rather than real water. John von Neumann was well aware of the tremendous difference between what happens when you don’t have the viscous terms and when you do, and he was also aware that, during most of the development of hydrodynamics until about 1900, almost the main interest was in solving beautiful mathematical problems with this approximation which had almost nothing to do with real fluids. He characterized the theorist who made such analyses as a man who studied “dry water.” Such analyses leave out an essential property of the fluid. It is because we are leaving this property out of our calculations in this chapter that we have given it the title “The Flow of Dry Water.” We are postponing a discussion of real water to the next chapter. If we leave out $\FLPf_{\text{visc}}$, we have in Eq. (40.4) everything we need except an expression for the acceleration. You might think that the formula for the acceleration of a fluid particle would be very simple, for it seems obvious that if $\FLPv$ is the velocity of a fluid particle at some place in the fluid, the acceleration would just be $\ddpl{\FLPv}{t}$. It is not—and for a rather subtle reason. The derivative $\ddpl{\FLPv}{t}$ is the rate at which the velocity $\FLPv(x,y,z,t)$ changes at a fixed point in space. What we need is how fast the velocity changes for a particular piece of fluid. Imagine that we mark one of the drops of water with a colored speck so we can watch it. In a small interval of time $\Delta t$, this drop will move to a different location. 
If the drop is moving along some path as sketched in Fig. 40-4, it might in $\Delta t$ move from $P_1$ to $P_2$. In fact, it will move in the $x$-direction by an amount $v_x\,\Delta t$, in the $y$-direction by the amount $v_y\,\Delta t$, and in the $z$-direction by the amount $v_z\,\Delta t$. We see that, if $\FLPv(x,y,z,t)$ is the velocity of the fluid particle which is at $(x,y,z)$ at the time $t$, then the velocity of the same particle, at the time $t+\Delta t$ is given by $\FLPv(x+\Delta x,y+\Delta y,z+\Delta z,t+\Delta t)$—with \begin{equation*} \Delta x=v_x\,\Delta t,\quad \Delta y=v_y\,\Delta t,\quad \text{and}\quad \Delta z=v_z\,\Delta t. \end{equation*} From the definition of the partial derivatives—recall Eq. (2.7)—we have, to first order, that \begin{align*} \FLPv(x+\;&v_x\,\Delta t,y+v_y\,\Delta t,z+v_z\,\Delta t,t+\Delta t)\\[2pt] &=\FLPv(x,y,z,t)+ \ddp{\FLPv}{x}\,v_x\,\Delta t+ \ddp{\FLPv}{y}\,v_y\,\Delta t+ \ddp{\FLPv}{z}\,v_z\,\Delta t+ \ddp{\FLPv}{t}\,\Delta t. \end{align*}
The acceleration $\Delta\FLPv/\Delta t$ is \begin{equation*} v_x\,\ddp{\FLPv}{x}+v_y\,\ddp{\FLPv}{y}+v_z\,\ddp{\FLPv}{z}+\ddp{\FLPv}{t}. \end{equation*} We can write this symbolically—treating $\FLPnabla$ as a vector—as \begin{equation} \label{Eq:II:40:5} (\FLPv\cdot\FLPnabla)\FLPv+\ddp{\FLPv}{t}. \end{equation} Note that there can be an acceleration even though $\ddpl{\FLPv}{t}=\FLPzero$ so that velocity at a given point is not changing. As an example, water flowing in a circle at a constant speed is accelerating even though the velocity at a given point is not changing. The reason is, of course, that the velocity of a particular piece of water which is initially at one point on the circle has a different direction a moment later; there is a centripetal acceleration. The rest of our theory is just mathematical—finding solutions of the equation of motion we get by putting the acceleration (40.5) into Eq. (40.4). We get \begin{equation} \label{Eq:II:40:6} \ddp{\FLPv}{t}+(\FLPv\cdot\FLPnabla)\FLPv= -\frac{\FLPgrad{p}}{\rho}-\FLPgrad{\phi}, \end{equation} where viscosity has been omitted. We can rearrange this equation by using the following identity from vector analysis: \begin{equation*} (\FLPv\cdot\FLPnabla)\FLPv=(\FLPcurl{\FLPv})\times\FLPv+ \tfrac{1}{2}\FLPgrad{(\FLPv\cdot\FLPv)}. 
\end{equation*} If we now define a new vector field $\FLPOmega$ as the curl of $\FLPv$, \begin{equation} \label{Eq:II:40:7} \FLPOmega=\FLPcurl{\FLPv}, \end{equation} the vector identity can be written as \begin{equation*} (\FLPv\cdot\FLPnabla)\FLPv=\FLPOmega\times\FLPv+ \tfrac{1}{2}\FLPgrad{v^2}, \end{equation*} and our equation of motion (40.6) becomes \begin{equation} \label{Eq:II:40:8} \ddp{\FLPv}{t}+\FLPOmega\times\FLPv+ \frac{1}{2}\,\FLPgrad{v^2}= -\frac{\FLPgrad{p}}{\rho}-\FLPgrad{\phi}. \end{equation} You can verify that Eqs. (40.6) and (40.8) are equivalent by checking that the components of the two sides of the equation are equal—and making use of (40.7). The vector field $\FLPOmega$ is called the vorticity. If the vorticity is zero everywhere, we say that the flow is irrotational. We have already defined in Section 3-5 a thing called the circulation of a vector field. The circulation around any closed loop in a fluid is the line integral of the fluid velocity, at a given instant of time, around that loop: \begin{equation*} (\text{Circulation})=\oint\FLPv\cdot d\FLPs. \end{equation*} The circulation per unit area for an infinitesimal loop is then—using Stokes’ theorem—equal to $\FLPcurl{\FLPv}$. So the vorticity $\FLPOmega$ is the circulation around a unit area (perpendicular to the direction of $\FLPOmega$). It also follows that if you put a little piece of dirt—not an infinitesimal point—at any place in the liquid it will rotate with the angular velocity $\FLPOmega/2$. Try to see if you can prove that. You can also check that, for a bucket of water on a turntable, $\FLPOmega$ is equal to twice the local angular velocity of the water. If we are interested only in the velocity field, we can eliminate the pressure from our equations. Taking the curl of both sides of Eq. (40.8), remembering that $\rho$ is a constant and that the curl of any gradient is zero, and using Eq. 
(40.3), we get \begin{equation} \label{Eq:II:40:9} \ddp{\FLPOmega}{t}+\FLPcurl{(\FLPOmega\times\FLPv)}=\FLPzero. \end{equation} This equation, together with the equations \begin{equation} \label{Eq:II:40:10} \FLPOmega=\FLPcurl{\FLPv} \end{equation} and \begin{equation} \label{Eq:II:40:11} \FLPdiv{\FLPv}=0, \end{equation} describes completely the velocity field $\FLPv$. Mathematically speaking, if we know $\FLPOmega$ at some time, then we know the curl of the velocity vector, and we also know that its divergence is zero, so given the physical situation we have all we need to determine $\FLPv$ everywhere. (It is just like the situation in magnetism where we had $\FLPdiv{\FLPB}=0$ and $\FLPcurl{\FLPB}=\FLPj/\epsO c^2$.) Thus, a given $\FLPOmega$ determines $\FLPv$ just as a given $\FLPj$ determines $\FLPB$. Then, knowing $\FLPv$, Eq. (40.9) tells us the rate of change of $\FLPOmega$ from which we can get the new $\FLPOmega$ for the next instant. Using Eq. (40.10), again we find the new $\FLPv$, and so on. You see how these equations contain all the machinery for calculating the flow. Note, however, that this procedure gives the velocity field only; we have lost all information about the pressure. We point out one special consequence of our equation. If $\FLPOmega=\FLPzero$ everywhere at any time $t$, $\ddpl{\FLPOmega}{t}$ also vanishes, so that $\FLPOmega$ is still zero everywhere at $t+\Delta t$. We have a solution to the equation; the flow is permanently irrotational. If a flow was started with zero rotation, it would always have zero rotation. The equations to be solved then are \begin{equation*} \FLPdiv{\FLPv}=0,\quad \FLPcurl{\FLPv}=\FLPzero. \end{equation*} They are just like the equations for the electrostatic or magnetostatic fields in free space. We will come back to them and look at some special problems later. |
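The statement that a little piece of dirt rotates with the angular velocity $\Omega/2$, and that for the bucket on a turntable $\Omega$ is twice the local angular velocity, is easy to check numerically. Here is a minimal Python sketch (the field strength, the sample points, and the step size are arbitrary choices, not from the text): for rigid rotation $\FLPv=(-\omega_0 y,\,\omega_0 x)$, a central-difference curl gives $\Omega=2\omega_0$ everywhere.

```python
omega0 = 1.5   # angular velocity of the "turntable" (rad/s); illustrative
h = 1e-5       # finite-difference step

def v(x, y):
    # rigid rotation about the z-axis with angular velocity omega0
    return (-omega0 * y, omega0 * x)

def vorticity(x, y):
    # Omega_z = d(v_y)/dx - d(v_x)/dy, by central differences
    dvy_dx = (v(x + h, y)[1] - v(x - h, y)[1]) / (2 * h)
    dvx_dy = (v(x, y + h)[0] - v(x, y - h)[0]) / (2 * h)
    return dvy_dx - dvx_dy

print(vorticity(0.3, -0.7))   # ~ 3.0 = 2*omega0, independent of the point
```

The same value comes out at every point, as it must for uniform rotation.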
|
2 | 40 | The Flow of Dry Water | 3 | Steady flow—Bernoulli’s theorem | Now we want to return to the equation of motion, Eq. (40.8), but limit ourselves to situations in which the flow is “steady.” By steady flow we mean that at any one place in the fluid the velocity never changes. The fluid at any point is always replaced by new fluid moving in exactly the same way. The velocity picture always looks the same—$\FLPv$ is a static vector field. In the same way that we drew “field lines” in magnetostatics, we can now draw lines which are always tangent to the fluid velocity as shown in Fig. 40-5. These lines are called streamlines. For steady flow, they are evidently the actual paths of fluid particles. (In unsteady flow the streamline pattern changes in time, and the streamline pattern at any instant does not represent the path of a fluid particle.) A steady flow does not mean that nothing is happening—atoms in the fluid are moving and changing their velocities. It only means that $\ddpl{\FLPv}{t}=\FLPzero$. Then if we take the dot product of $\FLPv$ into the equation of motion, the term $\FLPv\cdot(\FLPOmega\times\FLPv)$ drops out, and we are left with \begin{equation} \label{Eq:II:40:12} \FLPv\cdot\FLPgrad{\biggl\{ \frac{p}{\rho}+\phi+\frac{1}{2}\,v^2 \biggr\}}=0. \end{equation} This equation says that for a small displacement in the direction of the fluid velocity the quantity inside the brackets doesn’t change. Now in steady flow all displacements are along streamlines, so Eq. (40.12) tells us that for all the points along a streamline, we can write \begin{equation} \label{Eq:II:40:13} \frac{p}{\rho}+\frac{1}{2}\,v^2+\phi=\text{const}\:(\text{streamline}). \end{equation} This is Bernoulli’s theorem. The constant may in general be different for different streamlines; all we know is that the left-hand side of Eq. (40.13) is the same all along a given streamline. 
Incidentally, we may notice that for steady irrotational motion for which $\FLPOmega=\FLPzero$, the equation of motion (40.8) gives us the relation \begin{equation*} \FLPgrad{\biggl\{ \frac{p}{\rho}+\frac{1}{2}\,v^2+\phi \biggr\}}=0, \end{equation*} so that \begin{equation} \label{Eq:II:40:14} \frac{p}{\rho}+\frac{1}{2}\,v^2+\phi=\text{const}\:(\text{everywhere}). \end{equation} It’s just like Eq. (40.13) except that now the constant has the same value throughout the fluid. The theorem of Bernoulli is in fact nothing more than a statement of the conservation of energy. A conservation theorem such as this gives us a lot of information about a flow without our actually having to solve the detailed equations. Bernoulli’s theorem is so important and so simple that we would like to show you how it can be derived in a way that is different from the formal calculations we have just used. Imagine a bundle of adjacent streamlines which form a stream tube as sketched in Fig. 40-6. Since the walls of the tube consist of streamlines, no fluid flows out through the wall. Let’s call the area at one end of the stream tube $A_1$, the fluid velocity there $v_1$, the density of the fluid $\rho_1$, and the potential energy $\phi_1$. At the other end of the tube, we have the corresponding quantities $A_2$, $v_2$, $\rho_2$, and $\phi_2$. Now after a short interval of time $\Delta t$, the fluid at $A_1$ has moved a distance $v_1\,\Delta t$, and the fluid at $A_2$ has moved a distance $v_2\,\Delta t$ [Fig. 40-6(b)]. The conservation of mass requires that the mass which enters through $A_1$ must be equal to the mass which leaves through $A_2$. These masses at these two ends must be the same: \begin{equation*} \Delta M=\rho_1A_1v_1\,\Delta t=\rho_2A_2v_2\,\Delta t. \end{equation*} So we have the equality \begin{equation} \label{Eq:II:40:15} \rho_1A_1v_1=\rho_2A_2v_2. 
\end{equation} This equation tells us that the velocity varies inversely with the area of the stream tube if $\rho$ is constant. Now we calculate the work done by the fluid pressure. The work done on the fluid entering at $A_1$ is $p_1A_1v_1\,\Delta t$, and the work given up at $A_2$ is $p_2A_2v_2\,\Delta t$. The net work on the fluid between $A_1$ and $A_2$ is, therefore, \begin{equation*} p_1A_1v_1\,\Delta t-p_2A_2v_2\,\Delta t, \end{equation*} which must equal the increase in the energy of a mass $\Delta M$ of fluid in going from $A_1$ to $A_2$. In other words, \begin{equation} \label{Eq:II:40:16} p_1A_1v_1\,\Delta t-p_2A_2v_2\,\Delta t=\Delta M(E_2-E_1), \end{equation} where $E_1$ is the energy per unit mass of fluid at $A_1$, and $E_2$ is the energy per unit mass at $A_2$. The energy per unit mass of the fluid can be written as \begin{equation*} E=\tfrac{1}{2}v^2+\phi+U, \end{equation*} where $\tfrac{1}{2}v^2$ is the kinetic energy per unit mass, $\phi$ is the potential energy per unit mass, and $U$ is an additional term which represents the internal energy per unit mass of fluid. The internal energy might correspond, for example, to the thermal energy in a compressible fluid, or to chemical energy. All these quantities can vary from point to point. Using this form for the energies in (40.16), we have \begin{equation*} \frac{p_1A_1v_1\,\Delta t}{\Delta M}- \frac{p_2A_2v_2\,\Delta t}{\Delta M}= \frac{1}{2}\,v_2^2+\phi_2+U_2- \frac{1}{2}\,v_1^2-\phi_1-U_1. \end{equation*}
But we have seen that $\Delta M=\rho Av\,\Delta t$, so we get \begin{equation} \label{Eq:II:40:17} \frac{p_1}{\rho_1}+\frac{1}{2}\,v_1^2+\phi_1+U_1= \frac{p_2}{\rho_2}+\frac{1}{2}\,v_2^2+\phi_2+U_2, \end{equation}
which is the Bernoulli result with an additional term for the internal energy. If the fluid is incompressible, the internal energy term is the same on both sides, and we get again that Eq. (40.14) holds along any streamline. We consider now some simple examples in which the Bernoulli integral gives us a description of the flow. Suppose we have water flowing out of a hole near the bottom of a tank, as drawn in Fig. 40-7. We take a situation in which the flow speed $v_{\text{out}}$ at the hole is much larger than the flow speed near the top of the tank; in other words, we imagine that the diameter of the tank is so large that we can neglect the drop in the liquid level. (We could make a more accurate calculation if we wished.) At the top of the tank the pressure is $p_0$, the atmospheric pressure, and the pressure at the sides of the jet is also $p_0$. Now we write our Bernoulli equation for a streamline, such as the one shown in the figure. At the top of the tank, we take $v$ equal to zero and we also take the gravity potential $\phi$ to be zero. At the hole $v=v_{\text{out}}$ and $\phi=-gh$, so that \begin{equation*} p_0=p_0+\tfrac{1}{2}\rho v_{\text{out}}^2-\rho gh, \end{equation*} or \begin{equation} \label{Eq:II:40:18} v_{\text{out}}=\sqrt{2gh}. \end{equation} This velocity is just what we would get for something which falls the distance $h$. It is not too surprising, since the water at the exit gains kinetic energy at the expense of the potential energy of the water at the top. Do not get the idea, however, that you can figure out the rate that the fluid flows out of the tank by multiplying this velocity by the area of the hole. 
The fluid velocities as the jet leaves the hole are not all parallel to each other but have components inward toward the center of the stream—the jet is converging. After the jet has gone a little way, the contraction stops and the velocities do become parallel. So the total flow is the velocity times the area at that point. In fact, if we have a discharge opening which is just a round hole with a sharp edge, the jet contracts to $62$ percent of the area of the hole. The reduced effective area of the discharge varies for different shapes of discharge tubes, and the experimental contractions are available as tables of efflux coefficients. If the discharge tube is re-entrant, as shown in Fig. 40-8, it is possible to prove in a most beautiful way that the efflux coefficient is exactly $50$ percent. We will give just a hint of how the proof goes. We have used the conservation of energy to get the velocity, Eq. (40.18), but there is also momentum conservation to consider. Since there is an outflow of momentum in the discharge jet, there must be a force applied over the cross section of the discharge tube. Where does the force come from? The force must come from the pressure on the walls. As long as the efflux hole is small and away from the walls, the fluid velocity near the walls of the tank will be very small. Therefore, the pressure on every face is almost exactly the same as the static pressure in a fluid at rest—from Eq. (40.14). Then the static pressure at any point on the side of the tank must be matched by an equal pressure at the point on the opposite wall, except at the points on the wall opposite the discharge tube. If we calculate the momentum poured out through the jet by this pressure, we can show that the efflux coefficient is $1/2$. We cannot use this method for a discharge hole like that shown in Fig. 40-7, however, because the velocity increase along the wall right near the discharge area gives a pressure fall which we are not able to calculate. 
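Putting numbers to the two results above, Torricelli's speed of Eq. (40.18) and the contracted jet: a minimal sketch in Python, where the depth, hole area, and fluid are illustrative assumptions (only the $62$-percent figure comes from the text).

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
h = 2.0           # assumed depth of the hole below the surface, m
A_hole = 1.0e-4   # assumed area of the hole, m^2
Cc = 0.62         # contraction for a sharp-edged round hole (from the text)

v_out = math.sqrt(2 * g * h)   # efflux speed, Eq. (40.18)
Q = Cc * A_hole * v_out        # discharge rate uses the *contracted* area

print(f"v_out = {v_out:.2f} m/s")      # about 6.26 m/s
print(f"Q = {Q * 1e3:.3f} liters/s")   # less than A_hole * v_out
```

Multiplying $v_{\text{out}}$ by the full hole area would overestimate the discharge by the factor $1/C_c$.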
Let’s look at another example—a horizontal pipe with changing cross section, as shown in Fig. 40-9, with water flowing in one end and out the other. The conservation of energy, namely Bernoulli’s formula, says that the pressure is lower in the constricted area where the velocity is higher. We can easily demonstrate this effect by measuring the pressure at different cross sections with small vertical columns of water attached to the flow tube through holes small enough so that they do not disturb the flow. The pressure is then measured by the height of water in these vertical columns. The pressure is found to be less at the constriction than it is on either side. If the area beyond the constriction comes back to the same value it had before the constriction, the pressure rises again. Bernoulli’s formula would predict that the pressure downstream of the constriction should be the same as it was upstream, but actually it is noticeably less. The reason that our prediction is wrong is that we have neglected the frictional, viscous forces which cause a pressure drop along the tube. Despite this pressure drop the pressure is definitely lower at the constriction (because of the increased speed) than it is on either side of it—as predicted by Bernoulli. The speed $v_2$ must certainly exceed $v_1$ to get the same amount of water through the narrower tube. So the water accelerates in going from the wide to the narrow part. The force that gives this acceleration comes from the drop in pressure. We can check our results with another simple demonstration. Suppose we have on a tank a discharge tube which throws a jet of water upward as shown in Fig. 40-10. If the efflux velocity were exactly $\sqrt{2gh}$, the discharge water should rise to a level even with the surface of the water in the tank. Experimentally, it falls somewhat short. 
Our prediction is roughly right, but again viscous friction which has not been included in our energy conservation formula has resulted in a loss of energy. Have you ever held two pieces of paper close together and tried to blow them apart? Try it! They come together. The reason, of course, is that the air has a higher speed going through the constricted space between the sheets than it does when it gets outside. The pressure between the sheets is lower than atmospheric pressure, so they come together rather than separating. |
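The constricted pipe of Fig. 40-9 can be put in numbers. A sketch assuming an incompressible fluid and illustrative dimensions, combining continuity ($\rho A v$ the same at every cross section) with Bernoulli:

```python
rho = 1000.0              # water, kg/m^3
A1, A2 = 4.0e-4, 1.0e-4   # assumed pipe and constriction areas, m^2
v1 = 0.5                  # assumed upstream speed, m/s

v2 = v1 * A1 / A2                  # continuity: A1*v1 = A2*v2
dp = 0.5 * rho * (v2**2 - v1**2)   # p1 - p2 from Bernoulli, Eq. (40.14)

print(v2)   # 2.0 m/s in the constriction
print(dp)   # 1875.0 Pa: the pressure is lower where the speed is higher
```

With viscosity the pressure past the constriction does not fully recover, as described above, but the drop at the constriction itself remains.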
|
2 | 40 | The Flow of Dry Water | 4 | Circulation | We saw at the beginning of the last section that if we have an incompressible fluid with no circulation, the flow satisfies the following two equations: \begin{equation} \label{Eq:II:40:19} \FLPdiv{\FLPv}=0,\quad \FLPcurl{\FLPv}=\FLPzero. \end{equation} They are the same as the equations of electrostatics or magnetostatics in empty space. The divergence of the electric field is zero when there are no charges, and the curl of the electrostatic field is always zero. The curl of the magnetostatic field is zero if there are no currents, and the divergence of the magnetic field is always zero. Therefore, Eqs. (40.19) have the same solutions as the equations for $\FLPE$ in electrostatics or for $\FLPB$ in magnetostatics. As a matter of fact, we have already solved the problem of the flow of a fluid past a sphere, as an electrostatic analogy, in Section 12-5. The electrostatic analog is a uniform electric field plus a dipole field. The dipole field is so adjusted that the flow velocity normal to the surface of the sphere is zero. The same problem for the flow past a cylinder can be worked out in a similar way by using a suitable line dipole with a uniform flow field. This solution holds for a situation in which the fluid velocity at large distances is constant—both in magnitude and direction. The solution is sketched in Fig. 40-11(a). There is another solution for the flow around a cylinder when the conditions are such that the fluid at large distances moves in circles around the cylinder. The flow is, then, circular everywhere, as in Fig. 40-11(b). Such a flow has a circulation around the cylinder, although $\FLPcurl{\FLPv}$ is still zero in the fluid. How can there be circulation without a curl? We have a circulation around the cylinder because the line integral of $\FLPv$ around any loop enclosing the cylinder is not zero. 
At the same time, the line integral of $\FLPv$ around any closed path which does not include the cylinder is zero. We saw the same thing when we found the magnetic field around a wire. The curl of $\FLPB$ was zero outside of the wire, although a line integral of $\FLPB$ around a path which encloses the wire did not vanish. The velocity field in an irrotational circulation around a cylinder is precisely the same as the magnetic field around a wire. For a circular path with its center at the center of the cylinder, the line integral of the velocity is \begin{equation*} \oint\FLPv\cdot d\FLPs=2\pi rv. \end{equation*} For irrotational flow the integral must be independent of $r$. Let's call the constant value $C$; then we have that \begin{equation} \label{Eq:II:40:20} v=\frac{C}{2\pi r}, \end{equation} where $v$ is the tangential velocity, and $r$ is the distance from the axis. There is a nice demonstration of a fluid circulating around a hole. You take a transparent cylindrical tank with a drain hole in the center of the bottom. You fill it with water, stir up some circulation with a stick, and pull the drain plug. You get the pretty effect shown in Fig. 40-12. (You’ve seen a similar thing many times in the bathtub!) Although you put in some $\omega$ at the beginning, it soon dies down because of viscosity and the flow becomes irrotational—although still with some circulation around the hole. From the theory, we can calculate the shape of the inner surface of the water. As a particle of the water moves inward it picks up speed. From Eq. (40.20) the tangential velocity goes as $1/r$—it’s just from the conservation of angular momentum, like the skater pulling in her arms. Also the radial velocity goes as $1/r$. Ignoring the tangential motion, we have water going radially inward toward a hole; from $\FLPdiv{\FLPv}=0$, it follows that the radial velocity is proportional to $1/r$. 
So the total velocity also increases as $1/r$, and the water goes in along equiangular (or “logarithmic”) spirals. The air-water surface is all at atmospheric pressure, so it must have—from Eq. (40.14)—the property that \begin{equation*} gz+\tfrac{1}{2}v^2=\text{const}. \end{equation*} But $v$ is proportional to $1/r$, so the shape of the surface is \begin{equation*} (z-z_0)=-\frac{k}{r^2}. \end{equation*} An interesting point—which is not true in general but is true for incompressible, irrotational flow—is that if we have one solution and a second solution, then the sum is also a solution. This is true because the equations in (40.19) are linear. The complete equations of hydrodynamics, Eqs. (40.9), (40.10), and (40.11), are not linear, which makes a vast difference. For the irrotational flow about the cylinder, however, we can superpose the flow of Fig. 40-11(a) on the flow of Fig. 40-11(b) and get the new flow pattern shown in Fig. 40-11(c). This flow is of special interest. The flow velocity is higher on the upper side of the cylinder than on the lower side. The pressures are therefore lower on the upper side than on the lower side. So when we have a combination of a circulation around a cylinder and a net horizontal flow, there is a net vertical force on the cylinder—it is called a lift force. Of course, if there is no circulation, there is no net force on any body according to our theory of “dry” water. |
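The central point of this section, circulation without curl, can be verified numerically for the flow of Eq. (40.20). A sketch in which the value of $C$ and the sample points are arbitrary: the curl comes out zero everywhere off the axis, while the line integral around a circle enclosing the axis comes out equal to $C$ for any radius.

```python
import math

C = 2.0     # circulation; illustrative
h = 1e-6    # finite-difference step

def v(x, y):
    # tangential flow with speed C/(2*pi*r), Eq. (40.20)
    r2 = x * x + y * y
    k = C / (2 * math.pi * r2)
    return (-k * y, k * x)

def curl_z(x, y):
    dvy_dx = (v(x + h, y)[1] - v(x - h, y)[1]) / (2 * h)
    dvx_dy = (v(x, y + h)[0] - v(x, y - h)[0]) / (2 * h)
    return dvy_dx - dvx_dy

def circulation(R, n=100000):
    # line integral of v around a circle of radius R about the axis
    total = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        x, y = R * math.cos(t), R * math.sin(t)
        vx, vy = v(x, y)
        # ds points along the tangent (-sin t, cos t), arc element R*dtheta
        total += (-vx * math.sin(t) + vy * math.cos(t)) * R * 2 * math.pi / n
    return total

print(curl_z(1.0, 0.5))    # ~ 0: irrotational away from the axis
print(circulation(0.7))    # ~ C, independent of the loop radius
```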
|
2 | 40 | The Flow of Dry Water | 5 | Vortex lines | We have already written down the general equations for the flow of an incompressible fluid when there may be vorticity. They are \begin{align*} \text{I.}&\quad\FLPdiv{\FLPv}=0,\\[1.5ex] \text{II.}&\quad\FLPOmega=\FLPcurl{\FLPv},\\[1ex] \text{III.}&\quad\ddp{\FLPOmega}{t}+ \FLPcurl{(\FLPOmega\times\FLPv)}=\FLPzero. \end{align*} The physical content of these equations has been described in words by Helmholtz in terms of three theorems. First, imagine that in the fluid we were to draw vortex lines rather than streamlines. By vortex lines we mean field lines that have the direction of $\FLPOmega$ and have a density in any region proportional to the magnitude of $\FLPOmega$. From II the divergence of $\FLPOmega$ is always zero (remember—Section 3-7—that the divergence of a curl is always zero). So vortex lines are like lines of $\FLPB$—they never start or stop, and will tend to go in closed loops. Now Helmholtz described III in words by the following statement: the vortex lines move with the fluid. This means that if you were to mark the fluid particles along some vortex lines—by coloring them with ink, for example—then as the fluid moves and carries those particles along, they will always mark the new positions of the vortex lines. In whatever way the atoms of the liquid move, the vortex lines move with them. That is one way to describe the laws. It also suggests a method for solving any problems. Given the initial flow pattern—say $\FLPv$ everywhere—then you can calculate $\FLPOmega$. From the $\FLPv$ you can also tell where the vortex lines are going to be a little later—they move with the speed $\FLPv$. With the new $\FLPOmega$ you can use I and II to find the new $\FLPv$. (That’s just like the problem of finding $\FLPB$, given the currents.) If we are given the flow pattern at one instant we can in principle calculate it for all subsequent times. We have the general solution for nonviscous flow. 
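The rule that vortex lines move with the fluid already lets us compute simple motions. As a sketch (not from the text; the strength, separation, and step size are made up), take two straight parallel vortex lines, idealized as two-dimensional point vortices. Each is carried by the velocity field of the other, so an equal pair should orbit its midpoint at a fixed separation:

```python
import math

Gamma = 1.0          # circulation of each vortex; illustrative
d = 1.0              # initial separation
dt, steps = 1e-3, 5000

# positions of the two vortex lines (their cross sections in the plane)
p = [[-d / 2, 0.0], [d / 2, 0.0]]

def induced(src, at):
    # velocity at 'at' from a vortex of circulation Gamma at 'src'
    dx, dy = at[0] - src[0], at[1] - src[1]
    r2 = dx * dx + dy * dy
    k = Gamma / (2 * math.pi * r2)
    return (-k * dy, k * dx)

# crude forward-Euler stepping: each vortex moves with the local fluid
for _ in range(steps):
    v0 = induced(p[1], p[0])
    v1 = induced(p[0], p[1])
    p[0] = [p[0][0] + v0[0] * dt, p[0][1] + v0[1] * dt]
    p[1] = [p[1][0] + v1[0] * dt, p[1][1] + v1[1] * dt]

sep = math.hypot(p[0][0] - p[1][0], p[0][1] - p[1][1])
print(sep)   # stays close to d: the pair has simply rotated
```

The same idea, applied to a ring of vortex lines instead of a pair, is the forward motion of the smoke ring described below.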
We would like to show how Helmholtz’s statement—and, therefore, III—can be at least partly understood. It is really just the law of conservation of angular momentum applied to the fluid. Suppose we imagine a small cylinder of the liquid whose axis is parallel to the vortex lines, as in Fig. 40-13(a). At some time later, this same piece of fluid will be somewhere else. Generally it will occupy a cylinder with a different diameter and be in a different place. It may also have a different orientation, say as in Fig. 40-13(b). If, however, the diameter has decreased as shown in Fig. 40-13, the length will have increased to keep the volume constant (since we are assuming an incompressible fluid). Also, since the vortex lines are stuck with the material, their density will go up as the cross-sectional area goes down. The product of the vorticity $\FLPOmega$ and area $A$ of the cylinder will remain constant, so according to Helmholtz, we should have \begin{equation} \label{Eq:II:40:21} \Omega_2A_2=\Omega_1A_1. \end{equation} Now notice that with zero viscosity all the forces on the surface of the cylindrical volume (or any volume, for that matter) are perpendicular to the surface. The pressure forces can cause the volume to be moved from place to place, or can cause it to change shape; but with no tangential forces the magnitude of the angular momentum of the material inside cannot change. The angular momentum of the liquid in the little cylinder is its moment of inertia $I$ times the angular velocity of the liquid, which is proportional to the vorticity $\Omega$. For a cylinder, the moment of inertia is proportional to $mr^2$. So from the conservation of angular momentum, we would conclude that \begin{equation*} (M_1R_1^2)\,\Omega_1=(M_2R_2^2)\,\Omega_2. \end{equation*} But the mass is the same, $M_1=M_2$, and the areas are proportional to $R^2$, so we get again just Eq. (40.21). 
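A quick arithmetic check, with made-up numbers, that the angular-momentum argument reproduces Eq. (40.21): since $M$ is fixed and the area of the tube goes as $R^2$, the two expressions are numerically identical.

```python
R1, Omega1 = 2.0, 3.0    # initial radius and vorticity; illustrative
R2 = 0.5                 # the tube has been stretched thinner
A1, A2 = R1**2, R2**2    # areas, up to the common factor pi

Omega2_helmholtz = Omega1 * A1 / A2        # Omega*A conserved, Eq. (40.21)
Omega2_ang_mom = Omega1 * R1**2 / R2**2    # (M R^2)*Omega conserved, M fixed

print(Omega2_helmholtz, Omega2_ang_mom)    # both 48.0
```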
Helmholtz’s statement—which is equivalent to III—is just a consequence of the fact that in the absence of viscosity the angular momentum of an element of the fluid cannot change. There is a nice demonstration of a moving vortex which is made with the simple apparatus of Fig. 40-14. It is a “drum” two feet in diameter and two feet long made by stretching a thick rubber sheet over the open end of a cylindrical “box.” The “bottom”—the drum is tipped on its side—is solid except for a $3$-inch diameter hole. If you give a sharp blow on the rubber diaphragm with your hand, a vortex ring is projected out of the hole. Although the vortex is invisible, you can tell it’s there because it will blow out a candle $10$ to $20$ feet away. By the delay in the effect, you can tell that “something” is travelling at a finite speed. You can see better what is going on if you first blow some smoke into the box. Then you see the vortex as a beautiful round “smoke ring.” The smoke ring is a torus-shaped bundle of vortex lines, as shown in Fig. 40-15(a). Since $\FLPOmega=\FLPcurl{\FLPv}$, these vortex lines represent also a circulation of $\FLPv$ as shown in part (b) of the figure. We can understand the forward motion of the ring in the following way: The circulating velocity around the bottom of the ring extends up to the top of the ring, having there a forward motion. Since the lines of $\FLPOmega$ move with the fluid, they also move ahead with the velocity $\FLPv$. (Of course, the circulation of $\FLPv$ around the top part of the ring is responsible for the forward motion of the vortex lines at the bottom.) We must now mention a serious difficulty. We have already noted that Eq. (40.9) says that, if $\FLPOmega$ is initially zero, it will always be zero. This result is a great failure of the theory of “dry” water, because it means that once $\FLPOmega$ is zero it is always zero—it is impossible to produce any vorticity under any circumstance. 
Yet, in our simple demonstration with the drum, we can generate a vortex ring starting with air which was initially at rest. (Certainly, $\FLPv=\FLPzero$, $\FLPOmega=\FLPzero$ everywhere in the box before we hit it.) Also, we all know that we can start some vorticity in a lake with a paddle. Clearly, we must go to a theory of “wet” water to get a complete understanding of the behavior of a fluid. Another feature of the dry water theory which is incorrect is the supposition we make regarding the flow at the boundary between it and the surface of a solid. When we discussed the flow past a cylinder—as in Fig. 40-11, for example—we permitted the fluid to slide along the surface of the solid. In our theory, the velocity at a solid surface could have any value depending on how it got started, and we did not consider any “friction” between the fluid and the solid. It is an experimental fact, however, that the velocity of a real fluid always goes to zero at the surface of a solid object. Therefore, our solution for the cylinder, with or without circulation, is wrong—as is our result regarding the generation of vorticity. We will tell you about the more correct theories in the next chapter. |
|
2 | 41 | The Flow of Wet Water | 1 | Viscosity | In the last chapter we discussed the behavior of water, disregarding the phenomenon of viscosity. Now we would like to discuss the phenomena of the flow of fluids, including the effects of viscosity. We want to look at the real behavior of fluids. We will describe qualitatively the actual behavior of the fluids under various different circumstances so that you will get some feel for the subject. Although you will see some complicated equations and hear about some complicated things, it is not our purpose that you should learn all these things. This is, in a sense, a “cultural” chapter which will give you some idea of the way the world is. There is only one item which is worth learning, and that is the simple definition of viscosity which we will come to in a moment. The rest is only for your entertainment. In the last chapter we found that the laws of motion of a fluid are contained in the equation \begin{equation} \label{Eq:II:41:1} \ddp{\FLPv}{t}+(\FLPv\cdot\FLPnabla)\FLPv= -\frac{\FLPgrad{p}}{\rho}-\FLPgrad{\phi}+ \frac{\FLPf_{\text{visc}}}{\rho}. \end{equation} In our “dry” water approximation we left out the last term, so we were neglecting all viscous effects. Also, we sometimes made an additional approximation by considering the fluid as incompressible; then we had the additional equation \begin{equation*} \FLPdiv{\FLPv}=0. \end{equation*} This last approximation is often quite good—particularly when flow speeds are much slower than the speed of sound. But in real fluids it is almost never true that we can neglect the internal friction that we call viscosity; most of the interesting things that happen come from it in one way or another. For example, we saw that in “dry” water the circulation never changes—if there is none to start out with, there will never be any. Yet, circulation in fluids is an everyday occurrence. We must fix up our theory. We begin with an important experimental fact. 
When we worked out the flow of “dry” water around or past a cylinder—the so-called “potential flow”—we had no reason not to permit the water to have a velocity tangent to the surface; only the normal component had to be zero. We took no account of the possibility that there might be a shear force between the liquid and the solid. It turns out—although it is not at all self-evident—that in all circumstances where it has been experimentally checked, the velocity of a fluid is exactly zero at the surface of a solid. You have noticed, no doubt, that the blade of a fan will collect a thin layer of dust—and that it is still there after the fan has been churning up the air. You can see the same effect even on the great fan of a wind tunnel. Why isn’t the dust blown off by the air? In spite of the fact that the fan blade is moving at high speed through the air, the speed of the air relative to the fan blade goes to zero right at the surface. So the very smallest dust particles are not disturbed. We must modify the theory to agree with the experimental fact that in all ordinary fluids, the molecules next to a solid surface have zero velocity (relative to the surface). We originally characterized a liquid by the fact that if you put a shearing stress on it—no matter how small—it would give way. It flows. In static situations, there are no shear stresses. But before equilibrium is reached—as long as you still push on it—there can be shear forces. Viscosity describes these shear forces which exist in a moving fluid. To get a measure of the shear forces during the motion of a fluid, we consider the following kind of experiment. Suppose that we have two solid plane surfaces with water between them, as in Fig. 41–1, and we keep one stationary while moving the other parallel to it at the slow speed $v_0$. 
If you measure the force required to keep the upper plate moving, you find that it is proportional to the area of the plates and to $v_0/d$, where $d$ is the distance between the plates. So the shear stress $F/A$ is proportional to $v_0/d$: \begin{equation*} \frac{F}{A}=\eta\,\frac{v_0}{d}. \end{equation*} The constant of proportionality $\eta$ is called the coefficient of viscosity. If we have a more complicated situation, we can always consider a little, flat, rectangular cell in the water with its faces parallel to the flow, as in Fig. 41–2. The shear force across this cell is given by \begin{equation} \label{Eq:II:41:2} \frac{\Delta F}{\Delta A}=\eta\,\frac{\Delta v_x}{\Delta y} =\eta\ddp{v_x}{y}. \end{equation} Now, $\ddpl{v_x}{y}$ is the rate of change of the shear strain we defined in Chapter 39, so for a liquid, the shear stress is proportional to the rate of change of the shear strain. In the general case we write \begin{equation} \label{Eq:II:41:3} S_{xy}=\eta\biggl(\ddp{v_y}{x}+\ddp{v_x}{y}\biggr). \end{equation} If there is a uniform rotation of the fluid, $\ddpl{v_x}{y}$ is the negative of $\ddpl{v_y}{x}$ and $S_{xy}$ is zero—as it should be since there are no stresses in a uniformly rotating fluid. (We did a similar thing in defining $e_{xy}$ in Chapter 39.) There are, of course, the corresponding expressions for $S_{yz}$ and $S_{zx}$. As an example of the application of these ideas, we consider the motion of a fluid between two coaxial cylinders. Let the inner one have the radius $a$ and the peripheral velocity $v_a$, and let the outer one have radius $b$ and velocity $v_b$. See Fig. 41–3. We might ask, what is the velocity distribution between the cylinders? To answer this question, we begin by finding a formula for the viscous shear in the fluid at a distance $r$ from the axis. From the symmetry of the problem, we can assume that the flow is always tangential and that its magnitude depends only on $r$; $v=v(r)$. 
If we watch a speck in the water at the radius $r$, its coordinates as a function of time are \begin{equation*} x=r\cos\omega t,\quad y=r\sin\omega t, \end{equation*} where $\omega=v/r$. Then the $x$- and $y$-components of velocity are \begin{equation} \label{Eq:II:41:4} v_x=-r\omega\sin\omega t=-\omega y\quad \text{and}\quad v_y=r\omega\cos\omega t=\omega x. \end{equation}
From Eq. (41.3), we have \begin{equation} \label{Eq:II:41:5} S_{xy}=\eta\biggl[\ddp{}{x}\,(x\omega)-\ddp{}{y}\,(y\omega)\biggr]= \eta\biggl[x\,\ddp{\omega}{x}-y\,\ddp{\omega}{y}\biggr]. \end{equation} For a point at $y=0$, $\ddpl{\omega}{y}=0$, and $x\,\ddpl{\omega}{x}$ is the same as $r\,d\omega/dr$. So at that point \begin{equation} \label{Eq:II:41:6} (S_{xy})_{y=0}=\eta r\,\ddt{\omega}{r}. \end{equation} (It is reasonable that $S$ should depend on $\ddpl{\omega}{r}$; when there is no change in $\omega$ with $r$, the liquid is in uniform rotation and there are no stresses.) The stress we have calculated is the tangential shear which is the same all around the cylinder. We can get the torque acting across a cylindrical surface at the radius $r$ by multiplying the shear stress by the moment arm $r$ and the area $2\pi rl$ (where $l$ is the length of the cylinder). We get \begin{equation} \label{Eq:II:41:7} \tau=2\pi r^2l(S_{xy})_{y=0}=2\pi\eta lr^3\,\ddt{\omega}{r}. \end{equation} Since the motion of the water is steady—there is no angular acceleration—the net torque on the cylindrical shell of water between $r$ and $r+dr$ must be zero; that is, the torque at $r$ must be balanced by an equal and opposite torque at $r+dr$, so that $\tau$ must be independent of $r$. In other words, $r^3\,d\omega/dr$ is equal to some constant, say $A$, and \begin{equation} \label{Eq:II:41:8} \ddt{\omega}{r}=\frac{A}{r^3}. \end{equation} Integrating, we find that $\omega$ varies with $r$ as \begin{equation} \label{Eq:II:41:9} \omega=-\frac{A}{2r^2}+B. \end{equation} The constants $A$ and $B$ are to be determined to fit the conditions that $\omega=\omega_a$ at $r=a$, and $\omega=\omega_b$ at $r=b$. We get that \begin{equation} \begin{aligned} A&=\frac{2a^2b^2}{b^2-a^2}\,(\omega_b-\omega_a),\\[1.5ex] B&=\frac{b^2\omega_b-a^2\omega_a}{b^2-a^2}. \end{aligned} \label{Eq:II:41:10} \end{equation} So we know $\omega$ as a function of $r$, and from it $v=\omega r$. 
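Equations (41.9) and (41.10) can be checked numerically. The sketch below (with made-up radii and angular velocities) builds $A$ and $B$ from the boundary conditions and verifies that $\omega(a)=\omega_a$, $\omega(b)=\omega_b$, and that $r^3\,d\omega/dr$ is the same constant $A$ at every radius, as the torque balance requires:

```python
# Velocity distribution between coaxial cylinders:
#   omega(r) = -A/(2 r^2) + B,   d(omega)/dr = A/r^3.
# Radii and angular velocities below are illustrative, not from the text.
a, b = 0.05, 0.10             # inner and outer radii, m
omega_a, omega_b = 0.0, 10.0  # angular velocities, rad/sec

A = 2*a**2*b**2/(b**2 - a**2)*(omega_b - omega_a)
B = (b**2*omega_b - a**2*omega_a)/(b**2 - a**2)

def omega(r):
    return -A/(2*r**2) + B

def domega_dr(r):
    return A/r**3

# Boundary conditions come out right, and r^3 d(omega)/dr is constant:
print(omega(a), omega(b))
print(a**3*domega_dr(a), b**3*domega_dr(b))
```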
If we want the torque, we can get it from Eqs. (41.7) and (41.8): \begin{equation} \tau=2\pi\eta lA\notag \end{equation} or \begin{equation} \label{Eq:II:41:11} \tau=\frac{4\pi\eta la^2b^2}{b^2-a^2}\,(\omega_b-\omega_a). \end{equation} It is proportional to the relative angular velocities of the two cylinders. One standard apparatus for measuring the coefficients of viscosity is built this way. One cylinder—say the outer one—is on pivots but is held stationary by a spring balance which measures the torque on it, while the inner one is rotated at a constant angular velocity. The coefficient of viscosity is then determined from Eq. (41.11). From its definition, you see that the units of $\eta$ are newton$\cdot$sec/m$^2$. For water at $20^\circ$C, \begin{equation*} \eta=10^{-3}\:\text{newton$\cdot$sec/m$^2$}. \end{equation*} It is usually more convenient to use the kinematic viscosity, which is $\eta$ divided by the density $\rho$. The values for water and air are then comparable: \begin{equation} \begin{aligned} \text{water at $20^\circ$C},&\quad \eta/\rho=10^{-6}\text{ m$^2$/sec},\\[.5ex] \text{air at $20^\circ$C},&\quad \eta/\rho=15\times10^{-6}\text{ m$^2$/sec}. \end{aligned} \label{Eq:II:41:12} \end{equation} Viscosities usually depend strongly on temperature. For instance, for water just above the freezing point, $\eta/\rho$ is $1.8$ times larger than it is at $20^\circ$C. |
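A sketch of the rotating-cylinder viscometer computation: Eq. (41.11) gives the torque for assumed cylinder dimensions (invented here) and the water viscosity quoted above. A real instrument runs this relation backwards, reading the torque on the spring balance and solving for $\eta$:

```python
import math

# Torque in a rotating-cylinder viscometer, Eq. (41.11):
#   tau = 4*pi*eta*l*a^2*b^2 / (b^2 - a^2) * (omega_b - omega_a).
# Dimensions are hypothetical; eta is water at 20 C.
eta = 1.0e-3                  # newton*sec/m^2
l = 0.20                      # cylinder length, m
a, b = 0.050, 0.055           # inner and outer radii, m
omega_a, omega_b = 10.0, 0.0  # inner cylinder spun, outer held by the spring

tau = 4*math.pi*eta*l*a**2*b**2/(b**2 - a**2)*(omega_b - omega_a)
print(tau)  # about -3.6e-4 newton*m for these numbers

# Inverting: given a measured torque, recover the viscosity.
eta_measured = tau*(b**2 - a**2)/(4*math.pi*l*a**2*b**2*(omega_b - omega_a))
print(eta_measured)
```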
|
2 | 41 | The Flow of Wet Water | 2 | Viscous flow | We now go to a general theory of viscous flow—at least in the most general form known to man. We already understand that the shear stress components are proportional to the spatial derivatives of the various velocity components such as $\ddpl{v_x}{y}$ or $\ddpl{v_y}{x}$. However, in the general case of a compressible fluid there is another term in the stress which depends on other derivatives of the velocity. The general expression is \begin{equation} \label{Eq:II:41:13} S_{ij}=\eta\biggl(\ddp{v_i}{x_j}+\ddp{v_j}{x_i}\biggr)+ \eta'\,\delta_{ij}(\FLPdiv{\FLPv}), \end{equation} where $x_i$ is any one of the rectangular coordinates $x$, $y$, or $z$, and $v_i$ is any one of the rectangular components of the velocity. (The symbol $\delta_{ij}$ is the Kronecker delta which is $1$ when $i=j$ and $0$ for $i\neq j$.) The additional term adds $\eta'\,\FLPdiv{\FLPv}$ to all the diagonal elements $S_{ii}$ of the stress tensor. If the liquid is incompressible, $\FLPdiv{\FLPv}=0$, and this extra term doesn’t appear. So it has to do with internal forces during compression. So two constants are required to describe the liquid, just as we had two constants to describe a homogeneous elastic solid. The coefficient $\eta$ is the “ordinary” coefficient of viscosity which we have already encountered. It is also called the first coefficient of viscosity or the “shear viscosity coefficient,” and the new coefficient $\eta'$ is called the second coefficient of viscosity. Now we want to determine the viscous force per unit volume, $\FLPf_{\text{visc}}$, so we can put it into Eq. (41.1) to get the equation of motion for a real fluid. The force on a small cubical volume element of a fluid is the resultant of the forces on all the six faces. Taking them two at a time, we will get differences that depend on the derivatives of the stresses, and, therefore, on the second derivatives of the velocity. 
This is nice because it will get us back to a vector equation. The component of the viscous force per unit volume in the direction of the rectangular coordinate $x_i$ is \begin{equation} \begin{aligned} (f_{\text{visc}})_i &=\sum_{j=1}^3\ddp{S_{ij}}{x_j}\\ &=\sum_{j=1}^3\ddp{}{x_j}\biggl\{ \eta\biggl(\ddp{v_i}{x_j}+\ddp{v_j}{x_i}\biggr) \biggr\}+ \ddp{}{x_i}\,(\eta'\,\FLPdiv{\FLPv}). \end{aligned} \label{Eq:II:41:14} \end{equation}
Usually, the variation of the viscosity coefficients with position is not significant and can be neglected. Then, the viscous force per unit volume contains only second derivatives of the velocity. We saw in Chapter 39 that the most general form of second derivatives that can occur in a vector equation is the sum of a term in the Laplacian ($\FLPdiv{\FLPnabla}\FLPv=\nabla^2\FLPv$), and a term in the gradient of the divergence $\bigl(\FLPgrad{(\FLPdiv{\FLPv})}\bigr)$. Equation (41.14) is just such a sum with the coefficients $\eta$ and $(\eta+\eta')$. We get \begin{equation} \label{Eq:II:41:15} \FLPf_{\text{visc}}=\eta\,\nabla^2\FLPv+(\eta+\eta')\, \FLPgrad{(\FLPdiv{\FLPv})}. \end{equation} In the incompressible case, $\FLPdiv{\FLPv}=0$, and the viscous force per unit volume is just $\eta\,\nabla^2\FLPv$. That is all that many people use; however, if you should want to calculate the absorption of sound in a fluid, you would need the second term. We can now complete our general equation of motion for a real fluid. Substituting Eq. (41.15) into Eq. (41.1), we get \begin{equation*} \rho\biggl\{\ddp{\FLPv}{t}+(\FLPv\cdot\FLPnabla)\FLPv\biggr\}= -\FLPgrad{p}-\rho\,\FLPgrad{\phi}+\eta\,\nabla^2\FLPv+ (\eta+\eta')\,\FLPgrad{(\FLPdiv{\FLPv})}. \end{equation*}
It’s complicated. But that’s the way nature is. If we introduce the vorticity $\FLPOmega=\FLPcurl{\FLPv}$, as we did before, we can write our equation as \begin{align} \rho\biggl\{\ddp{\FLPv}{t}+\FLPOmega\times\FLPv+ \frac{1}{2}\,\FLPgrad{v^2}\biggr\}= -\FLPgrad{p}&-\;\rho\,\FLPgrad{\phi}+\eta\,\nabla^2\FLPv\notag\\ &\kern{-0.5em}+\;(\eta+\eta')\,\FLPgrad{(\FLPdiv{\FLPv})}. \label{Eq:II:41:16} \end{align}
We are supposing again that the only body forces acting are conservative forces like gravity. To see what the new term means, let’s look at the incompressible fluid case. Then, if we take the curl of Eq. (41.16), we get \begin{equation} \label{Eq:II:41:17} \ddp{\FLPOmega}{t}+\FLPcurl{(\FLPOmega\times\FLPv)}= \frac{\eta}{\rho}\,\nabla^2\FLPOmega. \end{equation} This is like Eq. (40.9) except for the new term on the right-hand side. When the right-hand side was zero, we had the Helmholtz theorem that the vorticity stays with the fluid. Now, we have the rather complicated nonzero term on the right-hand side which, however, has straightforward physical consequences. If we disregard for the moment the term $\FLPcurl{(\FLPOmega\times\FLPv)}$, we have a diffusion equation. The new term means that the vorticity $\FLPOmega$ diffuses through the fluid. If there is a large gradient in the vorticity, it will spread out into the neighboring fluid. This is the term that causes the smoke ring to get thicker as it goes along. Also, it shows up nicely if you send a “clean” vortex (a “smokeless” ring made by the apparatus described in the last chapter) through a cloud of smoke. When it comes out of the cloud, it will have picked up some smoke, and you will see a hollow shell of a smoke ring. Some of the $\FLPOmega$ diffuses outward into the smoke, while still maintaining its forward motion with the vortex. |
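To see the diffusion of vorticity concretely, here is a minimal one-dimensional sketch of Eq. (41.17) with the $\FLPcurl{(\FLPOmega\times\FLPv)}$ term dropped, so that only $\partial\Omega/\partial t=(\eta/\rho)\,\partial^2\Omega/\partial x^2$ remains. A concentrated spike of vorticity spreads out, the way the smoke ring thickens (the grid and time step are arbitrary choices that keep the explicit scheme stable):

```python
# 1-D vorticity diffusion: d(Omega)/dt = nu * d^2(Omega)/dx^2,
# solved with a simple explicit finite-difference scheme.
nu = 1.0e-6           # kinematic viscosity eta/rho of water, m^2/sec
dx, dt = 1.0e-3, 0.1  # chosen so nu*dt/dx**2 = 0.1 < 1/2 (stability)
steps = 500

omega = [0.0]*101
omega[50] = 1.0       # all the vorticity concentrated at one point

for _ in range(steps):
    new = omega[:]
    for i in range(1, 100):
        new[i] = omega[i] + nu*dt/dx**2*(omega[i-1] - 2*omega[i] + omega[i+1])
    omega = new

# The total vorticity is (nearly) conserved while the peak spreads and drops:
print(sum(omega), max(omega))
```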
|
2 | 41 | The Flow of Wet Water | 3 | The Reynolds number | We will now describe the changes which are made in the character of fluid flow as a consequence of the new viscosity term. We will look at two problems in some detail. The first of these is the flow of a fluid past a cylinder—a flow which we tried to calculate in the previous chapter using the theory for nonviscous flow. It turns out that the viscous equations can be solved by man today only for a few special cases. So some of what we will tell you is based on experimental measurements—assuming that the experimental model satisfies Eq. (41.17). The mathematical problem is this: We would like the solution for the flow of an incompressible, viscous fluid past a long cylinder of diameter $D$. The flow should be given by Eq. (41.17) and by \begin{equation} \label{Eq:II:41:18} \FLPOmega=\FLPcurl{\FLPv} \end{equation} with the conditions that the velocity at large distances is some constant velocity, say $V$ (parallel to the $x$-axis), and at the surface of the cylinder is zero. That is, \begin{equation} \label{Eq:II:41:19} v_x=v_y=v_z=0 \end{equation} for \begin{equation} x^2+y^2=\frac{D^2}{4}.\notag \end{equation} That specifies completely the mathematical problem. If you look at the equations, you see that there are four different parameters to the problem: $\eta$, $\rho$, $D$, and $V$. You might think that we would have to give a whole series of cases for different $V$’s, different $D$’s, and so on. However, that is not the case. All the different possible solutions correspond to different values of one parameter. This is the most important general thing we can say about viscous flow. To see why this is so, notice first that the viscosity and density appear only in the ratio $\eta/\rho$—the kinematic viscosity. That reduces the number of independent parameters to three. 
Now suppose we measure all distances in the only length that appears in the problem, the diameter $D$ of the cylinder; that is, we substitute for $x$, $y$, $z$, the new variables $x'$, $y'$, $z'$ with \begin{equation*} x=x'D,\quad y=y'D,\quad z=z'D. \end{equation*} Then $D$ disappears from (41.19). In the same way, if we measure all velocities in terms of $V$—that is, we set $v=v'V$—we get rid of the $V$, and $v'$ is just equal to $1$ at large distances. Since we have fixed our units of length and velocity, our unit of time is now $D/V$; so we should set \begin{equation} \label{Eq:II:41:20} t=t'\,\frac{D}{V}. \end{equation} With our new variables, the derivatives in Eq. (41.18) get changed from $\ddpl{}{x}$ to $(1/D)\,\ddpl{}{x'}$, and so on; so Eq. (41.18) becomes \begin{equation} \label{Eq:II:41:21} \FLPOmega=\FLPcurl{\FLPv}=\frac{V}{D}\,\FLPnabla'\times\FLPv'= \frac{V}{D}\,\FLPOmega'. \end{equation} Our main equation (41.17) then reads \begin{equation*} \ddp{\FLPOmega'}{t'}+\FLPnabla'\times(\FLPOmega'\times\FLPv')= \frac{\eta}{\rho VD}\,\nabla'^2\FLPOmega'. \end{equation*} All the constants condense into one factor which we write, following tradition, as $1/\ReynoldsR$: \begin{equation} \label{Eq:II:41:22} \ReynoldsR=\frac{\rho}{\eta}\,VD. \end{equation} If we just remember that all of our equations are to be written with all quantities in the new units, we can omit all the primes. Our equations for the flow are then \begin{equation} \label{Eq:II:41:23} \ddp{\FLPOmega}{t}+\FLPcurl{(\FLPOmega\times\FLPv)}= \frac{1}{\ReynoldsR}\,\nabla^2\FLPOmega \end{equation} and \begin{equation} \FLPOmega=\FLPcurl{\FLPv}\notag \end{equation} with the conditions \begin{equation} \FLPv=\FLPzero\notag \end{equation} for \begin{equation} \label{Eq:II:41:24} x^2+y^2=1/4 \end{equation} and \begin{equation} v_x=1,\quad v_y=v_z=0\notag \end{equation} for \begin{equation} x^2+y^2+z^2\gg1.\notag \end{equation} What this all means physically is very interesting. 
It means, for example, that if we solve the problem of the flow for one velocity $V_1$ and a certain cylinder diameter $D_1$, and then ask about the flow for a different diameter $D_2$ and a different fluid, the flow will be the same for the velocity $V_2$ which gives the same Reynolds number—that is, when \begin{equation} \label{Eq:II:41:25} \ReynoldsR_1=\frac{\rho_1}{\eta_1}\,V_1D_1= \ReynoldsR_2=\frac{\rho_2}{\eta_2}\,V_2D_2. \end{equation} For any two situations which have the same Reynolds number, the flows will “look” the same—in terms of the appropriate scaled $x'$, $y'$, $z'$, and $t'$. This is an important proposition because it means that we can determine what the behavior of the flow of air past an airplane wing will be without having to build an airplane and try it. We can, instead, make a model and make measurements using a velocity that gives the same Reynolds number. This is the principle which allows us to apply the results of “wind-tunnel” measurements on small-scale airplanes, or “model-basin” results on scale model boats, to the full-scale objects. Remember, however, that we can only do this provided the compressibility of the fluid can be neglected. Otherwise, a new quantity enters—the speed of sound. And different situations will really correspond to each other only if the ratio of $V$ to the sound speed is also the same. This latter ratio is called the Mach number. So, for velocities near the speed of sound or above, the flows are the same in two situations if both the Mach number and the Reynolds number are the same for both situations. |
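A sketch of how Eq. (41.25) is used in practice. The full-scale speed and size below are invented; the kinematic viscosity of air is the value from Eq. (41.12). To test a one-tenth-scale model in the same fluid, the speed must be ten times larger:

```python
# Dynamic similarity: match R = V*D/(eta/rho) between full scale and model.
nu_air = 15.0e-6     # kinematic viscosity of air at 20 C, m^2/sec

V1, D1 = 30.0, 1.0   # hypothetical full-scale flow: 30 m/sec past a 1 m body
D2 = 0.1             # one-tenth-scale model, also in air

R1 = V1*D1/nu_air    # Reynolds number of the full-scale flow
V2 = R1*nu_air/D2    # model speed giving the same Reynolds number
print(R1, V2)
```

The required model speed, 300 m/sec, is close to the speed of sound in air, which is just the circumstance in which the Mach-number caveat above comes into play.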
|
2 | 41 | The Flow of Wet Water | 4 | Flow past a circular cylinder | Let’s go back to the problem of low-speed (nearly incompressible) flow over the cylinder. We will give a qualitative description of the flow of a real fluid. There are many things we might want to know about such a flow—for instance, what is the drag force on the cylinder? The drag force on a cylinder is plotted in Fig. 41–4 as a function of $\ReynoldsR$—which is proportional to the air speed $V$ if everything else is held fixed. What is actually plotted is the so-called drag coefficient $C_D$, which is a dimensionless number equal to the force divided by $\tfrac{1}{2}\rho V^2Dl$, where $D$ is the diameter, $l$ is the length of the cylinder, and $\rho$ is the density of the liquid: \begin{equation*} C_D=\frac{F}{\tfrac{1}{2}\rho V^2Dl}. \end{equation*} The coefficient of drag varies in a rather complicated way, giving us a pre-hint that something rather interesting and complicated is happening in the flow. We will now describe the nature of flow for the different ranges of the Reynolds number. First, when the Reynolds number is very small, the flow is quite steady; that is, the velocity is constant at any place, and the flow goes around the cylinder. The actual distribution of the flow lines is, however, not like it is in potential flow. They are solutions of a somewhat different equation. When the velocity is very low or, what is equivalent, when the viscosity is very high so the stuff is like honey, then the inertial terms are negligible and the flow is described by the equation \begin{equation*} \nabla^2\FLPOmega=\FLPzero. \end{equation*} This equation was first solved by Stokes. He also solved the same problem for a sphere. If you have a small sphere moving under such conditions of low Reynolds number, the force needed to drag it is equal to $6\pi\eta aV$, where $a$ is the radius of the sphere and $V$ is its velocity. 
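The Stokes formula leads directly to the settling speed of a small particle: the drag $6\pi\eta aV$ grows with $V$ until it balances the net weight. The grain size and densities below are made-up illustrative numbers:

```python
import math

# Terminal (settling) speed of a small sphere in the Stokes regime:
# net weight (4/3)*pi*a^3*(rho_p - rho_f)*g balances drag 6*pi*eta*a*V.
eta = 1.0e-3     # viscosity of water, newton*sec/m^2
rho_p = 2000.0   # density of the grain, kg/m^3 (hypothetical)
rho_f = 1000.0   # density of water, kg/m^3
g = 9.8          # m/sec^2
a = 10.0e-6      # grain radius, m

weight = (4/3)*math.pi*a**3*(rho_p - rho_f)*g
V = weight/(6*math.pi*eta*a)   # V = (2/9)*a^2*(rho_p - rho_f)*g/eta
print(V)  # about 2.2e-4 m/sec: such a grain takes over an hour to fall a meter
```

The Reynolds number of this motion, $\rho_f V(2a)/\eta\approx4\times10^{-3}$, is far below $1$, so the Stokes-regime assumption is self-consistent.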
This is a very useful formula because it tells the speed at which tiny grains of dirt (or other particles which can be approximated as spheres) move through a fluid under a given force—as, for instance, in a centrifuge, or in sedimentation, or diffusion. In the low Reynolds number region—for $\ReynoldsR$ less than $1$—the lines of $\FLPv$ around a cylinder are as drawn in Fig. 41–5. If we now increase the fluid speed to get a Reynolds number somewhat greater than $1$, we find that the flow is different. There is a circulation behind the cylinder, as shown in Fig. 41–6(b). It is still an open question as to whether there is always a circulation there even at the smallest Reynolds number or whether things suddenly change at a certain Reynolds number. It used to be thought that the circulation grew continuously. But it is now thought that it appears suddenly, and it is certain that the circulation increases with $\ReynoldsR$. In any case, there is a different character to the flow for $\ReynoldsR$ in the region from about $10$ to $30$. There is a pair of vortices behind the cylinder. The flow changes again by the time we get to a number of $40$ or so. There is suddenly a complete change in the character of the motion. What happens is that one of the vortices behind the cylinder gets so long that it breaks off and travels downstream with the fluid. Then the fluid curls around behind the cylinder and makes a new vortex. The vortices peel off alternately on each side, so an instantaneous view of the flow looks roughly as sketched in Fig. 41–6(c). The stream of vortices is called a “Kármán vortex street.” They always appear for $\ReynoldsR>40$. We show a photograph of such a flow in Fig. 41–7. The difference between the two flows in Fig. 41–6(c) and 41–6(b) or 41–6(a) is almost a complete difference in regime. In Fig. 41–6(a) or (b), the velocity is constant, whereas in Fig. 41–6(c), the velocity at any point varies with time. 
There is no steady solution above $\ReynoldsR=40$—which we have marked on Fig. 41–4 by a dashed line. For these higher Reynolds numbers, the flow varies with time but in a regular, cyclic fashion. We can get a physical idea of how these vortices are produced. We know that the fluid velocity must be zero at the surface of the cylinder and that it also increases rapidly away from that surface. Vorticity is created by this large local variation in fluid velocity. Now when the main stream velocity is low enough, there is sufficient time for this vorticity to diffuse out of the thin region near the solid surface where it is produced and to grow into a large region of vorticity. This physical picture should help to prepare us for the next change in the nature of the flow as the main stream velocity, or $\ReynoldsR$, is increased still more. As the velocity gets higher and higher, there is less and less time for the vorticity to diffuse into a larger region of fluid. By the time we reach a Reynolds number of several hundred, the vorticity begins to fill in a thin band, as shown in Fig. 41–6(d). In this layer the flow is chaotic and irregular. The region is called the boundary layer and this irregular flow region works its way farther and farther upstream as $\ReynoldsR$ is increased. In the turbulent region, the velocities are very irregular and “noisy”; also the flow is no longer two-dimensional but twists and turns in all three dimensions. There is still a regular alternating motion superimposed on the turbulent one. As the Reynolds number is increased further, the turbulent region works its way forward until it reaches the point where the flow lines leave the cylinder—for flows somewhat above $\ReynoldsR=10^5$. The flow is as shown in Fig. 41–6(e), and we have what is called a “turbulent boundary layer.” Also, there is a drastic change in the drag force; it drops by a large factor, as shown in Fig. 41–4. 
In this speed region, the drag force actually decreases with increasing speed. There seems to be little evidence of periodicity. What happens for still larger Reynolds numbers? As we increase the speed further, the wake increases in size again and the drag increases. The latest experiments—which go up to $\ReynoldsR=10^7$ or so—indicate that a new periodicity appears in the wake, either because the whole wake is oscillating back and forth in a gross motion or because some new kind of vortex is occurring together with an irregular noisy motion. The details are as yet not entirely clear, and are still being studied experimentally. |
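The sequence of regimes just described can be collected into a small summary function. The thresholds are the rough ones quoted in the text, not sharp physical constants, and the finer behavior above $\ReynoldsR=10^7$ is ignored:

```python
# Character of the flow past a cylinder versus Reynolds number,
# following the approximate boundaries given in the text.
def regime(R):
    if R < 1:
        return "steady viscous flow (Stokes-like)"
    if R < 40:
        return "steady flow with a vortex pair behind the cylinder"
    if R < 1e5:
        return "Karman vortex street: time-varying but regular"
    return "turbulent boundary layer; drag coefficient drops"

for R in (0.5, 20.0, 1.0e3, 1.0e6):
    print(R, "->", regime(R))
```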
|
2 | 41 | The Flow of Wet Water | 5 | The limit of zero viscosity | We would like to point out that none of the flows we have described are anything like the potential flow solution we found in the preceding chapter. This is, at first sight, quite surprising. After all, $\ReynoldsR$ is proportional to $1/\eta$. So $\eta$ going to zero is equivalent to $\ReynoldsR$ going to infinity. And if we take the limit of large $\ReynoldsR$ in Eq. (41.23), we get rid of the right-hand side and get just the equations of the last chapter. Yet, you would find it hard to believe that the highly turbulent flow at $\ReynoldsR=10^7$ was approaching the smooth flow computed from the equations of “dry” water. How can it be that as we approach $\ReynoldsR=\infty$, the flow described by Eq. (41.23) gives a completely different solution from the one we obtained taking $\eta=0$ to start out with? The answer is very interesting. Note that the right-hand term of Eq. (41.23) has $1/\ReynoldsR$ times a second derivative. It is a higher derivative than any other derivative in the equation. What happens is that although the coefficient $1/\ReynoldsR$ is small, there are very rapid variations of $\FLPOmega$ in the space near the surface. These rapid variations compensate for the small coefficient, and the product does not go to zero with increasing $\ReynoldsR$. The solutions do not approach the limiting case as the coefficient of $\nabla^2\FLPOmega$ goes to zero. You may be wondering, “What is the fine-grain turbulence and how does it maintain itself? How can the vorticity which is made somewhere at the edge of the cylinder generate so much noise in the background?” The answer is again interesting. Vorticity has a tendency to amplify itself. If we forget for a moment about the diffusion of vorticity which causes a loss, the laws of flow say (as we have seen) that the vortex lines are carried along with the fluid, at the velocity $\FLPv$. 
We can imagine a certain number of lines of $\FLPOmega$ which are being distorted and twisted by the complicated flow pattern of $\FLPv$. This pulls the lines closer together and mixes them all up. Lines that were simple before will get knotted and pulled close together. They will be longer and tighter together. The strength of the vorticity will increase and its irregularities—the pluses and minuses—will, in general, increase. So the magnitude of vorticity in three dimensions increases as we twist the fluid about. You might well ask, “When is the potential flow a satisfactory theory at all?” In the first place, it is satisfactory outside the turbulent region where the vorticity has not entered appreciably by diffusion. By making special streamlined bodies, we can keep the turbulent region as small as possible; the flow around airplane wings—which are carefully designed—is almost entirely true potential flow. |
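The singular character of the $\eta\rightarrow0$ limit discussed above can be seen in a toy boundary-value problem: in $\epsilon y''+y'=0$ with $y(0)=0$, $y(1)=1$, the small parameter multiplies the highest derivative, just as $1/\ReynoldsR$ multiplies $\nabla^2\FLPOmega$ in Eq. (41.23). Setting $\epsilon=0$ first gives $y'=0$, which cannot meet both boundary conditions; the exact solution instead develops a thin boundary layer near $x=0$:

```python
import math

# Exact solution of eps*y'' + y' = 0, y(0)=0, y(1)=1:
#   y(x) = (1 - exp(-x/eps)) / (1 - exp(-1/eps)).
# As eps -> 0 the solution tends to 1 everywhere except in a layer of
# thickness ~eps near x = 0, where it drops rapidly to 0.
def y(x, eps):
    return (1 - math.exp(-x/eps))/(1 - math.exp(-1/eps))

for eps in (0.1, 0.01, 0.001):
    # outer value at x = 0.5, and the value at the edge of the layer x = eps
    print(eps, y(0.5, eps), y(eps, eps))
```

No matter how small $\epsilon$ gets, $y$ still swings through about $63\%$ of its range inside the layer $x\lesssim\epsilon$; the limit of the solutions is not the solution of the limiting equation.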
|
2 | 41 | The Flow of Wet Water | 6 | Couette flow | It is possible to demonstrate that the complex and shifting character of the flow past a cylinder is not special but that the great variety of flow possibilities occurs generally. We have worked out in Section 41–1 a solution for the viscous flow between two cylinders, and we can compare the results with what actually happens. If we take two concentric cylinders with an oil in the space between them and put a fine aluminum powder as a suspension in the oil, the flow is easy to see. Now if we turn the outer cylinder slowly, nothing unexpected happens; see Fig. 41–8(a). Alternatively, if we turn the inner cylinder slowly, nothing very striking occurs. However, if we turn the inner cylinder at a higher rate, we get a surprise. The fluid breaks into horizontal bands, as indicated in Fig. 41–8(b). When the outer cylinder rotates at a similar rate with the inner one at rest, no such effect occurs. How can it be that there is a difference between rotating the inner or the outer cylinder? After all, the flow pattern we derived in Section 41–1 depended only on $\omega_b-\omega_a$. We can get the answer by looking at the cross sections shown in Fig. 41–9. When the inner layers of the fluid are moving more rapidly than the outer ones, they tend to move outward—the centrifugal force is larger than the pressure holding them in place. A whole layer cannot move out uniformly because the outer layers are in the way. It must break into cells and circulate, as shown in Fig. 41–9(b). It is like the convection currents in a room which has hot air at the bottom. When the inner cylinder is at rest and the outer cylinder has a high velocity, the centrifugal forces build up a pressure gradient which keeps everything in equilibrium—see Fig. 41–9(c) (as in a room with hot air at the top). Now let’s speed up the inner cylinder. At first, the number of bands increases. Then suddenly you see the bands become wavy, as in Fig. 
41–8(c), and the waves travel around the cylinder. The speed of these waves is easily measured. For high rotation speeds they approach $1/3$ the speed of the inner cylinder. And no one knows why! There’s a challenge. A simple number like $1/3$, and no explanation. In fact, the whole mechanism of the wave formation is not very well understood; yet it is steady laminar flow. If we now start rotating the outer cylinder also—but in the opposite direction—the flow pattern starts to break up. We get wavy regions alternating with apparently quiet regions, as sketched in Fig. 41–8(d), making a spiral pattern. In these “quiet” regions, however, we can see that the flow is really quite irregular; it is, in fact, completely turbulent. The wavy regions also begin to show irregular turbulent flow. If the cylinders are rotated still more rapidly, the whole flow becomes chaotically turbulent. In this simple experiment we see many interesting regimes of flow which are quite different, and yet which are all contained in our simple equation for various values of the one parameter $\ReynoldsR$. With our rotating cylinders, we can see many of the effects which occur in the flow past a cylinder: first, there is a steady flow; second, a flow sets in which varies in time but in a regular, smooth way; finally, the flow becomes completely irregular. You have all seen the same effects in the column of smoke rising from a cigarette in quiet air. There is a smooth steady column followed by a series of twistings as the stream of smoke begins to break up, ending finally in an irregular churning cloud of smoke. The main lesson to be learned from all of this is that a tremendous variety of behavior is hidden in the simple set of equations in (41.23). All the solutions are for the same equations, only with different values of $\ReynoldsR$. We have no reason to think that there are any terms missing from these equations. 
The only difficulty is that we do not have the mathematical power today to analyze them except for very small Reynolds numbers—that is, in the completely viscous case. That we have written an equation does not remove from the flow of fluids its charm or mystery or its surprise. If such variety is possible in a simple equation with only one parameter, how much more is possible with more complex equations! Perhaps the fundamental equation that describes the swirling nebulae and the condensing, revolving, and exploding stars and galaxies is just a simple equation for the hydrodynamic behavior of nearly pure hydrogen gas. Often, people in some unjustified fear of physics say you can’t write an equation for life. Well, perhaps we can. As a matter of fact, we very possibly already have the equation to a sufficient approximation when we write the equation of quantum mechanics: \begin{equation*} H\psi=-\frac{\hbar}{i}\,\ddp{\psi}{t}. \end{equation*} We have just seen that the complexities of things can so easily and dramatically escape the simplicity of the equations which describe them. Unaware of the scope of simple equations, man has often concluded that nothing short of God, not mere equations, is required to explain the complexities of the world. We have written the equations of water flow. From experiment, we find a set of concepts and approximations to use to discuss the solution—vortex streets, turbulent wakes, boundary layers. When we have similar equations in a less familiar situation, and one for which we cannot yet experiment, we try to solve the equations in a primitive, halting, and confused way to try to determine what new qualitative features may come out, or what new qualitative forms are a consequence of the equations. Our equations for the sun, for example, as a ball of hydrogen gas, describe a sun without sunspots, without the rice-grain structure of the surface, without prominences, without coronas. 
Yet, all of these are really in the equations; we just haven’t found the way to get them out. There are those who are going to be disappointed when no life is found on other planets. Not I—I want to be reminded and delighted and surprised once again, through interplanetary exploration, with the infinite variety and novelty of phenomena that can be generated from such simple principles. The test of science is its ability to predict. Had you never visited the earth, could you predict the thunderstorms, the volcanos, the ocean waves, the auroras, and the colorful sunset? A salutary lesson it will be when we learn of all that goes on on each of those dead planets—those eight or ten balls, each agglomerated from the same dust cloud and each obeying exactly the same laws of physics. The next great era of awakening of human intellect may well produce a method of understanding the qualitative content of equations. Today we cannot. Today we cannot see that the water flow equations contain such things as the barber pole structure of turbulence that one sees between rotating cylinders. Today we cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way. |
|
2 | 42 | Curved Space | 1 | Curved spaces with two dimensions | According to Newton, everything attracts everything else with a force inversely proportional to the square of the distance from it, and objects respond to forces with accelerations proportional to the forces. They are Newton’s laws of universal gravitation and of motion. As you know, they account for the motions of balls, planets, satellites, galaxies, and so forth. Einstein had a different interpretation of the law of gravitation. According to him, space and time—which must be put together as space-time—are curved near heavy masses. And it is the attempt of things to go along “straight lines” in this curved space-time which makes them move the way they do. Now that is a complex idea—very complex. It is the idea we want to explain in this chapter. Our subject has three parts. One involves the effects of gravitation. Another involves the ideas of space-time which we already studied. The third involves the idea of curved space-time. We will simplify our subject in the beginning by not worrying about gravity and by leaving out the time—discussing just curved space. We will talk later about the other parts, but we will concentrate now on the idea of curved space—what is meant by curved space, and, more specifically, what is meant by curved space in this application of Einstein. Now even that much turns out to be somewhat difficult in three dimensions. So we will first reduce the problem still further and talk about what is meant by the words “curved space” in two dimensions. In order to understand this idea of curved space in two dimensions you really have to appreciate the limited point of view of the character who lives in such a space. Suppose we imagine a bug with no eyes who lives on a plane, as shown in Fig. 42–1. He can move only on the plane, and he has no way of knowing that there is any way to discover any “outside world.” (He hasn’t got your imagination.) 
We are, of course, going to argue by analogy. We live in a three-dimensional world, and we don’t have any imagination about going off our three-dimensional world in a new direction; so we have to think the thing out by analogy. It is as though we were bugs living on a plane, and there was a space in another direction. That’s why we will first work with the bug, remembering that he must live on his surface and can’t get out. As another example of a bug living in two dimensions, let’s imagine one who lives on a sphere. We imagine that he can walk around on the surface of the sphere, as in Fig. 42–2 but that he can’t look “up,” or “down,” or “out.” Now we want to consider still a third kind of creature. He is also a bug like the others, and also lives on a plane, as our first bug did, but this time the plane is peculiar. The temperature is different at different places. Also, the bug and any rulers he uses are all made of the same material which expands when it is heated. Whenever he puts a ruler somewhere to measure something the ruler expands immediately to the proper length for the temperature at that place. Wherever he puts any object—himself, a ruler, a triangle, or anything—the thing stretches itself because of the thermal expansion. Everything is longer in the hot places than it is in the cold places, and everything has the same coefficient of expansion. We will call the home of our third bug a “hot plate,” although we will particularly want to think of a special kind of hot plate that is cold in the center and gets hotter as we go out toward the edges (Fig. 42–3). Now we are going to imagine that our bugs begin to study geometry. Although we imagine that they are blind so that they can’t see any “outside” world, they can do a lot with their legs and feelers. They can draw lines, and they can make rulers, and measure off lengths. First, let’s suppose that they start with the simplest idea in geometry. 
They learn how to make a straight line—defined as the shortest line between two points. Our first bug—see Fig. 42–4—learns to make very good lines. But what happens to the bug on the sphere? He draws his straight line as the shortest distance—for him—between two points, as in Fig. 42–5. It may look like a curve to us, but he has no way of getting off the sphere and finding out that there is “really” a shorter line. He just knows that if he tries any other path in his world it is always longer than his straight line. So we will let him have his straight line as the shortest arc between two points. (It is, of course, an arc of a great circle.) Finally, our third bug—the one in Fig. 42–3—will also draw “straight lines” that look like curves to us. For instance, the shortest distance between $A$ and $B$ in Fig. 42–6 would be on a curve like the one shown. Why? Because when his line curves out toward the warmer parts of his hot plate, the rulers get longer (from our omniscient point of view) and it takes fewer “yardsticks” laid end-to-end to get from $A$ to $B$. So for him the line is straight—he has no way of knowing that there could be someone out in a strange three-dimensional world who would call a different line “straight.” We think you get the idea now that all the rest of the analysis will always be from the point of view of the creatures on the particular surfaces and not from our point of view. With that in mind let’s see what the rest of their geometries look like. Let’s assume that the bugs have all learned how to make two lines intersect at right angles. (You can figure out how they could do it.) Then our first bug (the one on the normal plane) finds an interesting fact.
If he starts at the point $A$ and makes a line $100$ inches long, then makes a right angle and marks off another $100$ inches, then makes another right angle and goes another $100$ inches, then makes a third right angle and a fourth line $100$ inches long, he ends up right at the starting point as shown in Fig. 42–7(a). It is a property of his world—one of the facts of his “geometry.” Then he discovers another interesting thing. If he makes a triangle—a figure with three straight lines—the sum of the angles is equal to $180^\circ$, that is, to the sum of two right angles. See Fig. 42–7(b). Then he invents the circle. What’s a circle? A circle is made this way: You rush off on straight lines in many many directions from a single point, and lay out a lot of dots that are all the same distance from that point. See Fig. 42–7(c). (We have to be careful how we define these things because we’ve got to be able to make the analogs for the other fellows.) Of course, it’s equivalent to the curve you can make by swinging a ruler around a point. Anyway, our bug learns how to make circles. Then one day he thinks of measuring the distance around a circle. He measures several circles and finds a neat relationship: The distance around is always the same number times the radius $r$ (which is, of course, the distance from the center out to the curve). The circumference and the radius always have the same ratio—approximately $6.283$—independent of the size of the circle. Now let’s see what our other bugs have been finding out about their geometries. First, what happens to the bug on the sphere when he tries to make a “square”? If he follows the prescription we gave above, he would probably think that the result was hardly worth the trouble. He gets a figure like the one shown in Fig. 42–8. His endpoint $B$ isn’t on top of the starting point $A$. It doesn’t work out to a closed figure at all. Get a sphere and try it. A similar thing would happen to our friend on the hot plate.
If he lays out four straight lines of equal length—as measured with his expanding rulers—joined by right angles he gets a picture like the one in Fig. 42–9. Now suppose that our bugs had each had their own Euclid who had told them what geometry “should” be like, and that they had checked him out roughly by making crude measurements on a small scale. Then as they tried to make accurate squares on a larger scale they would discover that something was wrong. The point is, that just by geometrical measurements they would discover that something was the matter with their space. We define a curved space to be a space in which the geometry is not what we expect for a plane. The geometry of the bugs on the sphere or on the hot plate is the geometry of a curved space. The rules of Euclidean geometry fail. And it isn’t necessary to be able to lift yourself out of the plane in order to find out that the world that you live in is curved. It isn’t necessary to circumnavigate the globe in order to find out that it is a ball. You can find out that you live on a ball by laying out a square. If the square is very small you will need a lot of accuracy, but if the square is large the measurement can be done more crudely. Let’s take the case of a triangle on a plane. The sum of the angles is $180$ degrees. Our friend on the sphere can find triangles that are very peculiar. He can, for example, find triangles which have three right angles. Yes indeed! One is shown in Fig. 42–10. Suppose our bug starts at the north pole and makes a straight line all the way down to the equator. Then he makes a right angle and another perfect straight line the same length. Then he does it again. For the very special length he has chosen he gets right back to his starting point, and also meets the first line with a right angle. So there is no doubt that for him this triangle has three right angles, or $270$ degrees in the sum. 
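For those who would like to check this triangle with a little computation, here is a sketch in Python (our own illustration, not part of the lecture). It takes the triangle of Fig. 42–10 on a unit sphere, measures the angle at each vertex from the directions in which the two arcs leave it, and confirms the $270$-degree sum; it also computes the excess over $180$ degrees and the triangle's area, which come out equal.

```python
import math

def vertex_angle(V, P, Q):
    """Angle at vertex V of a spherical triangle, between the great-circle
    arcs V->P and V->Q (all three points given as unit vectors)."""
    def leaving_direction(X):
        # Component of X perpendicular to V, normalized: the direction in
        # which the arc V->X leaves the vertex V.
        dot = sum(a * b for a, b in zip(X, V))
        t = [a - dot * b for a, b in zip(X, V)]
        n = math.sqrt(sum(a * a for a in t))
        return [a / n for a in t]
    tp, tq = leaving_direction(P), leaving_direction(Q)
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(tp, tq))))
    return math.acos(cos_angle)

# The triangle of Fig. 42-10 on a unit sphere: the north pole and two points
# on the equator a quarter of the way around from each other.
N, A, B = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
angle_sum = vertex_angle(N, A, B) + vertex_angle(A, B, N) + vertex_angle(B, N, A)
degrees = math.degrees(angle_sum)   # comes out 270, not 180

# The excess over 180 degrees equals the triangle's area divided by R^2
# (here one octant of the sphere, with R = 1).
excess = angle_sum - math.pi
area = 4 * math.pi / 8
```

The equality of `excess` and `area` is the proportionality between angle excess and area that the text describes next.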
It turns out that for him the sum of the angles of the triangle is always greater than $180$ degrees. In fact, the excess (for the special case shown, the extra $90$ degrees) is proportional to how much area the triangle has. If a triangle on a sphere is very small, its angles add up to very nearly $180$ degrees, only a little bit over. As the triangle gets bigger the discrepancy goes up. The bugs on the hot plate would discover similar difficulties with their triangles. Let’s look next at what our other bugs find out about circles. They make circles and measure their circumferences. For example, the bug on the sphere might make a circle like the one shown in Fig. 42–11. And he would discover that the circumference is less than $2\pi$ times the radius. (You can see that because from the wisdom of our three-dimensional view it is obvious that what he calls the “radius” is a curve which is longer than the true radius of the circle.) Suppose that the bug on the sphere had read Euclid, and decided to predict a radius by dividing the circumference $C$ by $2\pi$, taking \begin{equation} \label{Eq:II:42:1} r_{\text{pred}}=\frac{C}{2\pi}. \end{equation} Then he would find that the measured radius was larger than the predicted radius. Pursuing the subject, he might define the difference to be the “excess radius,” and write \begin{equation} \label{Eq:II:42:2} r_{\text{meas}}-r_{\text{pred}}=r_{\text{excess}}, \end{equation} and study how the excess radius effect depended on the size of the circle. Our bug on the hot plate would discover a similar phenomenon. Suppose he was to draw a circle centered at the cold spot on the plate as in Fig. 42–12. If we were to watch him as he makes the circle we would notice that his rulers are short near the center and get longer as they are moved outward—although the bug doesn’t know it, of course. 
When he measures the circumference the ruler is long all the time, so he, too, finds out that the measured radius is longer than the predicted radius, $C/2\pi$. The hot-plate bug also finds an “excess radius effect.” And again the size of the effect depends on the radius of the circle. We will define a “curved space” as one in which these types of geometrical errors occur: The sum of the angles of a triangle is different from $180$ degrees; the circumference of a circle divided by $2\pi$ is not equal to the radius; the rule for making a square doesn’t give a closed figure. You can think of others. We have given two different examples of curved space: the sphere and the hot plate. But it is interesting that if we choose the right temperature variation as a function of distance on the hot plate, the two geometries will be exactly the same. It is rather amusing. We can make the bug on the hot plate get exactly the same answers as the bug on the ball. For those who like geometry and geometrical problems we’ll tell you how it can be done. If you assume that the length of the rulers (as determined by the temperature) goes in proportion to one plus some constant times the square of the distance away from the origin, then you will find that the geometry of that hot plate is exactly the same in all details1 as the geometry of the sphere. There are, of course, other kinds of geometry. We could ask about the geometry of a bug who lived on a pear, namely something which has a sharper curvature in one place and a weaker curvature in the other place, so that the excess in angles in triangles is more severe when he makes little triangles in one part of his world than when he makes them in another part. In other words, the curvature of a space can vary from place to place. That’s just a generalization of the idea. It can also be imitated by a suitable distribution of temperature on a hot plate. 
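The footnote's claim is easy to check numerically. In the sketch below (ours, not the lecture's) we assume the constant is $1/4R^2$, so a ruler at map distance $d$ from the cold center has length $1 + d^2/4R^2$; counting rulers then reproduces the sphere's straight-line distances and circumferences exactly, and the excess radius comes out positive just as it does on the ball.

```python
import math

R = 1.0                      # radius of the sphere being imitated
alpha = 1.0 / (4 * R**2)     # assumed constant: ruler length is 1 + alpha*d^2

def ruler_counted_radius(a, steps=200000):
    """Walk from the cold center out to map distance a, counting rulers;
    each bit of true distance dr takes dr/(1 + alpha*d^2) rulers."""
    total, dr = 0.0, a / steps
    for i in range(steps):
        d = (i + 0.5) * dr
        total += dr / (1 + alpha * d * d)
    return total

a = 0.7                                          # the circle's radius on our map
r_meas = ruler_counted_radius(a)
C_meas = 2 * math.pi * a / (1 + alpha * a * a)   # the whole rim sits at distance a

# The same circle on the sphere: geodesic radius R*theta and circumference
# 2*pi*R*sin(theta), where theta is the polar angle of the circle.
theta = 2 * math.atan(a / (2 * R))
r_excess = r_meas - C_meas / (2 * math.pi)       # Eq. (42.2): positive here
```

The agreement is exact (up to the numerical integration), which is the sense in which the hot plate imitates the sphere "in all details."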
We may also point out that the results could come out with the opposite kind of discrepancies. You could find out, for example, that all triangles when they are made too large have the sum of their angles less than $180$ degrees. That may sound impossible, but it isn’t at all. First of all, we could have a hot plate with the temperature decreasing with the distance from the center. Then all the effects would be reversed. But we can also do it purely geometrically by looking at the two-dimensional geometry of the surface of a saddle. Imagine a saddle-shaped surface like the one sketched in Fig. 42–13. Now draw a “circle” on the surface, defined as the locus of all points the same distance from a center. This circle is a curve that oscillates up and down with a scallop effect. So its circumference is larger than you would expect from calculating $2\pi r_{\text{meas}}$. So $C/2\pi$ is now greater than $r_{\text{meas}}$. The “excess radius” would be negative. Spheres and pears and such are all surfaces of positive curvature; and the others are called surfaces of negative curvature. In general, a two-dimensional world will have a curvature which varies from place to place and may be positive in some places and negative in other places. In general, we mean by a curved space simply one in which the rules of Euclidean geometry break down with one sign of discrepancy or the other. The amount of curvature—defined, say, by the excess radius—may vary from place to place. We might point out that, from our definition of curvature, a cylinder is, surprisingly enough, not curved. If a bug lived on a cylinder, as shown in Fig. 42–14, he would find out that triangles, squares, and circles would all have the same behavior they have on a plane. This is easy to see, by just thinking about how all the figures will look if the cylinder is unrolled onto a plane. Then all the geometrical figures can be made to correspond exactly to the way they are in a plane.
So there is no way for a bug living on a cylinder (assuming that he doesn’t go all the way around, but just makes local measurements) to discover that his space is curved. In our technical sense, then, we consider that his space is not curved. What we want to talk about is more precisely called intrinsic curvature; that is, a curvature which can be found by measurements only in a local region. (A cylinder has no intrinsic curvature.) This was the sense intended by Einstein when he said that our space is curved. But we as yet only have defined a curved space in two dimensions; we must go onward to see what the idea might mean in three dimensions. |
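Before leaving two dimensions, the saddle case above can be made quantitative. On a surface of constant negative curvature $-1/R^2$ (a standard model which a saddle imitates near its center; the particular numbers below are just for illustration), a circle of measured radius $r$ has circumference $2\pi R\sinh(r/R)$, and the excess radius comes out negative:

```python
import math

# Constant negative curvature -1/R^2: a circle of measured (geodesic)
# radius r_meas has circumference C = 2*pi*R*sinh(r_meas/R), which is
# *more* than 2*pi*r_meas -- the scalloped rim of the saddle.
R = 1.0
r_meas = 0.5
C = 2 * math.pi * R * math.sinh(r_meas / R)
r_pred = C / (2 * math.pi)      # what Euclid would predict, Eq. (42.1)
r_excess = r_meas - r_pred      # negative, the opposite sign from the sphere
```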
|
2 | 42 | Curved Space | 2 | Curvature in three-dimensional space | We live in three-dimensional space and we are going to consider the idea that three-dimensional space is curved. You say, “But how can you imagine it being bent in any direction?” Well, we can’t imagine space being bent in any direction because our imagination isn’t good enough. (Perhaps it’s just as well that we can’t imagine too much, so that we don’t get too free of the real world.) But we can still define a curvature without getting out of our three-dimensional world. All we have been talking about in two dimensions was simply an exercise to show how we could get a definition of curvature which didn’t require that we be able to “look in” from the outside. We can determine whether our world is curved or not in a way quite analogous to the one used by the gentlemen who live on the sphere and on the hot plate. We may not be able to distinguish between two such cases but we certainly can distinguish those cases from the flat space, the ordinary plane. How? Easy enough: We lay out a triangle and measure the angles. Or we make a great big circle and measure the circumference and the radius. Or we try to lay out some accurate squares, or try to make a cube. In each case we test whether the laws of geometry work. If they don’t work, we say that our space is curved. If we lay out a big triangle and the sum of its angles exceeds $180$ degrees, we can say our space is curved. Or if the measured radius of a circle is not equal to its circumference over $2\pi$, we can say our space is curved. You will notice that in three dimensions the situation can be much more complicated than in two. At any one place in two dimensions there is a certain amount of curvature. But in three dimensions there can be several components to the curvature. If we lay out a triangle in some plane, we may get a different answer than if we orient the plane of the triangle in a different way. Or take the example of a circle. 
Suppose we draw a circle and measure the radius and it doesn’t check with $C/2\pi$ so that there is some excess radius. Now we draw another circle at right angles—as in Fig. 42–15. There’s no need for the excess to be exactly the same for both circles. In fact, there might be a positive excess for a circle in one plane, and a defect (negative excess) for a circle in the other plane. Perhaps you are thinking of a better idea: Can’t we get around all of these components by using a sphere in three dimensions? We can specify a sphere by taking all the points that are the same distance from a given point in space. Then we can measure the surface area by laying out a fine scale rectangular grid on the surface of the sphere and adding up all the bits of area. According to Euclid the total area $A$ is supposed to be $4\pi$ times the square of the radius; so we can define a “predicted radius” as $\sqrt{A/4\pi}$. But we can also measure the radius directly by digging a hole to the center and measuring the distance. Again, we can take the measured radius minus the predicted radius and call the difference the radius excess, \begin{equation*} r_{\text{excess}}=r_{\text{meas}}-\biggl( \frac{\text{measured area}}{4\pi}\biggr)^{1/2}, \end{equation*} which would be a perfectly satisfactory measure of the curvature. It has the great advantage that it doesn’t depend upon how we orient a triangle or a circle. But the excess radius of a sphere also has a disadvantage; it doesn’t completely characterize the space. It gives what is called the mean curvature of the three-dimensional world, since there is an averaging effect over the various curvatures. Since it is an average, however, it does not solve completely the problem of defining the geometry. If you know only this number you can’t predict all properties of the geometry of the space, because you can’t tell what would happen with circles of different orientation. 
The complete definition requires the specification of six “curvature numbers” at each point. Of course the mathematicians know how to write all those numbers. You can read someday in a mathematics book how to write them all in a high-class and elegant form, but it is first a good idea to know in a rough way what it is that you are trying to write about. For most of our purposes the average curvature will be enough.2 |
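As a concrete illustration of the three-dimensional definition (our own example, not one worked in the lecture), we can borrow the three-dimensional analog of a sphere's surface. In such a space, a sphere drawn at measured radius $r$ about a point has area $4\pi R^2\sin^2(r/R)$, where $R$ sets the curvature; the excess radius then comes out positive, and for small spheres it grows as the cube of the radius:

```python
import math

# Assumed model: a uniformly curved three-dimensional space (the 3-sphere
# of radius R). A sphere at measured radius r_meas from a point has area
# A = 4*pi*R^2*sin(r_meas/R)^2.
R = 1.0
r_meas = 0.3
A = 4 * math.pi * (R * math.sin(r_meas / R))**2
r_pred = math.sqrt(A / (4 * math.pi))   # Euclid's prediction from the area
r_excess = r_meas - r_pred              # positive: the space is curved

# For small spheres the excess is approximately r^3/(6 R^2).
approx = r_meas**3 / (6 * R**2)
```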
|
2 | 42 | Curved Space | 3 | Our space is curved | Now comes the main question. Is it true? That is, is the actual physical three-dimensional space we live in curved? Once we have enough imagination to realize the possibility that space might be curved, the human mind naturally gets curious about whether the real world is curved or not. People have made direct geometrical measurements to try to find out, and haven’t found any deviations. On the other hand, by arguments about gravitation, Einstein discovered that space is curved, and we’d like to tell you what Einstein’s law is for the amount of curvature, and also tell you a little bit about how he found out about it. Einstein said that space is curved and that matter is the source of the curvature. (Matter is also the source of gravitation, so gravity is related to the curvature—but that will come later in the chapter.) Let us suppose, to make things a little easier, that the matter is distributed continuously with some density, which may vary, however, as much as you want from place to place.3 The rule that Einstein gave for the curvature is the following: If there is a region of space with matter in it and we take a sphere small enough that the density $\rho$ of matter inside it is effectively constant, then the radius excess for the sphere is proportional to the mass inside the sphere. Using the definition of excess radius, we have \begin{equation} \label{Eq:II:42:3} \text{Radius excess}= r_{\text{meas}}-\sqrt{\frac{A}{4\pi}} =\frac{G}{3c^2}\cdot M. \end{equation} Here, $G$ is the gravitational constant (of Newton’s theory), $c$ is the velocity of light, and $M=4\pi\rho r^3/3$ is the mass of the matter inside the sphere. This is Einstein’s law for the mean curvature of space. Suppose we take the earth as an example and forget that the density varies from point to point—so we won’t have to do any integrals. 
Suppose we were to measure the surface of the earth very carefully, and then dig a hole to the center and measure the radius. From the surface area we could calculate the predicted radius we would get from setting the area equal to $4\pi r^2$. When we compared the predicted radius with the actual radius, we would find that the actual radius exceeded the predicted radius by the amount given in Eq. (42.3). The constant $G/3c^2$ is about $2.5\times10^{-29}$ cm per gram, so for each gram of material the measured radius is off by $2.5\times10^{-29}$ cm. Putting in the mass of the earth, which is about $6\times10^{27}$ grams, it turns out that the earth has $1.5$ millimeters more radius than it should have for its surface area.4 Doing the same calculation for the sun, you find that the sun’s radius is one-half a kilometer too long. You should note that the law says that the average curvature above the surface of the earth is zero. But that does not mean that all the components of the curvature are zero. There may still be—and, in fact, there is—some curvature above the earth. For a circle in a plane there will be an excess radius of one sign for some orientations and of the opposite sign for other orientations. It just turns out that the average over a sphere is zero when there is no mass inside it. Incidentally, it turns out that there is a relation between the various components of the curvature and the variation of the average curvature from place to place. So if you know the average curvature everywhere, you can figure out the details of the curvature components at each place. The average curvature inside the earth varies with altitude, and this means that some curvature components are nonzero both inside the earth and outside. It is that curvature that we see as a gravitational force. Suppose we have a bug on a plane, and suppose that the “plane” has little pimples in the surface.
Wherever there is a pimple the bug would conclude that his space had little local regions of curvature. We have the same thing in three dimensions. Wherever there is a lump of matter, our three-dimensional space has a local curvature—a kind of three-dimensional pimple. If we make a lot of bumps on a plane there might be an overall curvature besides all the pimples—the surface might become like a ball. It would be interesting to know whether our space has a net average curvature as well as the local pimples due to the lumps of matter like the earth and the sun. The astrophysicists have been trying to answer that question by making measurements of galaxies at very large distances. For example, if the number of galaxies we see in a spherical shell at a large distance is different from what we would expect from our knowledge of the radius of the shell, we would have a measure of the excess radius of a tremendously large sphere. From such measurements it is hoped to find out whether our whole universe is flat on the average, or round—whether it is “closed,” like a sphere, or “open” like a plane. You may have heard about the debates that are going on about this subject. There are debates because the astronomical measurements are still completely inconclusive; the experimental data are not precise enough to give a definite answer. Unfortunately, we don’t have the slightest idea about the overall curvature of our universe on a large scale. |
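Incidentally, the figures quoted above for the earth and the sun are easy to check from Eq. (42.3). In the sketch below the solar mass is a standard round value, not one given in the text:

```python
# Checking the arithmetic of Eq. (42.3) in CGS units throughout.
G = 6.67e-8        # Newton's gravitational constant, cm^3 g^-1 s^-2
c = 3.0e10         # speed of light, cm/s
coeff = G / (3 * c**2)          # the G/3c^2 of Eq. (42.3), cm per gram

M_earth = 6.0e27   # grams, the value quoted in the text
M_sun = 2.0e33     # grams (standard round value, an assumption here)

excess_earth_cm = coeff * M_earth    # about 0.15 cm, i.e. 1.5 millimeters
excess_sun_km = coeff * M_sun / 1e5  # about half a kilometer
```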
|
2 | 42 | Curved Space | 4 | Geometry in space-time | Now we have to talk about time. As you know from the special theory of relativity, measurements of space and measurements of time are interrelated. And it would be kind of crazy to have something happening to the space, without the time being involved in the same thing. You will remember that the measurement of time depends on the speed at which you move. For instance, if we watch a guy going by in a spaceship we see that things happen more slowly for him than for us. Let’s say he takes off on a trip and returns in $100$ seconds flat by our watches; his watch might say that he had been gone for only $95$ seconds. In comparison with ours, his watch—and all other processes, like his heart beat—have been running slow. Now let’s consider an interesting problem. Suppose you are the one in the spaceship. We ask you to start off at a given signal and return to your starting place just in time to catch a later signal—at, say, exactly $100$ seconds later according to our clock. And you are also asked to make the trip in such a way that your watch will show the longest possible elapsed time. How should you move? You should stand still. If you move at all your watch will read less than $100$ sec when you get back. Suppose, however, we change the problem a little. Suppose we ask you to start at point $A$ on a given signal and go to point $B$ (both fixed relative to us), and to do it in such a way that you arrive just at the time of a second signal (say $100$ seconds later according to our fixed clock). Again you are asked to make the trip in the way that lets you arrive with the latest possible reading on your watch. How would you do it? For which path and schedule will your watch show the greatest elapsed time when you arrive? The answer is that you will spend the longest time from your point of view if you make the trip by going at a uniform speed along a straight line.
Reason: Any extra motions and any extra-high speeds will make your clock go slower. (Since the time deviations depend on the square of the velocity, what you lose by going extra fast at one place you can never make up by going extra slowly in another place.) The point of all this is that we can use the idea to define “a straight line” in space-time. The analog of a straight line in space is for space-time a motion at uniform velocity in a constant direction. The curve of shortest distance in space corresponds in space-time not to the path of shortest time, but to the one of longest time, because of the funny things that happen to signs of the $t$-terms in relativity. “Straight-line” motion—the analog of “uniform velocity along a straight line”—is then that motion which takes a watch from one place at one time to another place at another time in the way that gives the longest time reading for the watch. This will be our definition for the analog of a straight line in space-time. |
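This maximizing property is easy to check numerically. The sketch below (the particular numbers are arbitrary) compares the watch reading for the uniform-speed trip with one alternative schedule between the same two events: rushing the whole distance in the first half of the allotted time and then standing still.

```python
import math

c = 1.0           # units in which the speed of light is 1
D, T = 0.3, 1.0   # go a distance D in exactly the coordinate time T (our clock)

def watch_reading(legs):
    """Elapsed proper time for a trip given as (speed, duration) legs;
    each leg contributes dt * sqrt(1 - v^2/c^2)."""
    return sum(dt * math.sqrt(1 - (v / c)**2) for v, dt in legs)

uniform = watch_reading([(D / T, T)])   # steady speed along the straight line
rush_then_wait = watch_reading([(2 * D / T, T / 2), (0.0, T / 2)])  # same endpoints
```

Any other schedule covering the distance $D$ in the time $T$ comes out below `uniform`, since the time lost at high speed goes as the square of the velocity and can never be made up by loafing elsewhere.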
|
2 | 42 | Curved Space | 5 | Gravity and the principle of equivalence | Now we are ready to discuss the laws of gravitation. Einstein was trying to generate a theory of gravitation that would fit with the relativity theory that he had developed earlier. He was struggling along until he latched onto one important principle which guided him into getting the correct laws. That principle is based on the idea that when a thing is falling freely everything inside it seems weightless. For example, a satellite in orbit is falling freely in the earth’s gravity, and an astronaut in it feels weightless. This idea, when stated with greater precision, is called Einstein’s principle of equivalence. It depends on the fact that all objects fall with exactly the same acceleration no matter what their mass, or what they are made of. If we have a spaceship that is “coasting”—so it’s in a free fall—and there is a man inside, then the laws governing the fall of the man and the ship are the same. So if he puts himself in the middle of the ship he will stay there. He doesn’t fall with respect to the ship. That’s what we mean when we say he is “weightless.” Now suppose you are in a rocket ship which is accelerating. Accelerating with respect to what? Let’s just say that its engines are on and generating a thrust so that it is not coasting in a free fall. Also imagine that you are way out in empty space so that there are practically no gravitational forces on the ship. If the ship is accelerating with “$1$ g” you will be able to stand on the “floor” and will feel your normal weight. Also if you let go of a ball, it will “fall” toward the floor. Why? Because the ship is accelerating “upward,” but the ball has no forces on it, so it will not accelerate; it will get left behind. Inside the ship the ball will appear to have a downward acceleration of “$1$ g.” Now let’s compare that with the situation in a spaceship sitting at rest on the surface of the earth. Everything is the same! 
You would be pressed toward the floor, a ball would fall with an acceleration of $1$ g, and so on. In fact, how could you tell inside a space ship whether you are sitting on the earth or are accelerating in free space? According to Einstein’s equivalence principle there is no way to tell if you only make measurements of what happens to things inside! To be strictly correct, that is true only for one point inside the ship. The gravitational field of the earth is not precisely uniform, so a freely falling ball has a slightly different acceleration at different places—the direction changes and the magnitude changes. But if we imagine a strictly uniform gravitational field, it is completely imitated in every respect by a system with a constant acceleration. That is the basis of the principle of equivalence. |
|
2 | 42 | Curved Space | 6 | The speed of clocks in a gravitational field | Now we want to use the principle of equivalence for figuring out a strange thing that happens in a gravitational field. We’ll show you something that happens in a rocket ship which you probably wouldn’t have expected to happen in a gravitational field. Suppose we put a clock at the “head” of the rocket ship—that is, at the “front” end—and we put another identical clock at the “tail,” as in Fig. 42–16. Let’s call the two clocks $A$ and $B$. If we compare these two clocks when the ship is accelerating, the clock at the head seems to run fast relative to the one at the tail. To see that, imagine that the front clock emits a flash of light each second, and that you are sitting at the tail comparing the arrival of the light flashes with the ticks of clock $B$. Let’s say that the rocket is in the position $a$ of Fig. 42–17 when clock $A$ emits a flash, and at the position $b$ when the flash arrives at clock $B$. Later on the ship will be at position $c$ when the clock $A$ emits its next flash, and at position $d$ when you see it arrive at clock $B$. The first flash travels the distance $L_1$ and the second flash travels the shorter distance $L_2$. It is a shorter distance because the ship is accelerating and has a higher speed at the time of the second flash. You can see, then, that if the two flashes were emitted from clock $A$ one second apart, they would arrive at clock $B$ with a separation somewhat less than one second, since the second flash doesn’t spend as much time on the way. The same thing will also happen for all the later flashes. So if you were sitting in the tail you would conclude that clock $A$ was running faster than clock $B$. If you were to do the same thing in reverse—letting clock $B$ emit light and observing it at clock $A$—you would conclude that $B$ was running slower than $A$. Everything fits together and there is nothing mysterious about it all. 
But now let’s think of the rocket ship at rest in the earth’s gravity. The same thing happens. If you sit on the floor with one clock and watch another one which is sitting on a high shelf, it will appear to run faster than the one on the floor! You say, “But that is wrong. The times should be the same. With no acceleration there’s no reason for the clocks to appear to be out of step.” But they must if the principle of equivalence is right. And Einstein insisted that the principle was right, and went courageously and correctly ahead. He proposed that clocks at different places in a gravitational field must appear to run at different speeds. But if one always appears to be running at a different speed with respect to the other, then so far as the first is concerned the other is running at a different rate. But now you see we have the analog for clocks of the hot ruler we were talking about earlier, when we had the bug on a hot plate. We imagined that rulers and bugs and everything changed lengths in the same way at various temperatures so they could never tell that their measuring sticks were changing as they moved around on the hot plate. It’s the same with clocks in a gravitational field. Every clock we put at a higher level is seen to go faster. Heartbeats go faster, all processes run faster. If they didn’t you would be able to tell the difference between a gravitational field and an accelerating reference system. The idea that time can vary from place to place is a difficult one, but it is the idea Einstein used, and it is correct—believe it or not. Using the principle of equivalence we can figure out how much the speed of a clock changes with height in a gravitational field. We just work out the apparent discrepancy between the two clocks in the accelerating rocket ship. The easiest way to do this is to use the result we found in Chapter 34 of Vol. I for the Doppler effect. There, we found—see Eq. 
(34.14)—that if $v$ is the relative velocity of a source and a receiver, the received frequency $\omega$ is related to the emitted frequency $\omega_0$ by \begin{equation} \label{Eq:II:42:4} \omega=\omega_0\,\frac{1+v/c}{\sqrt{1-v^2/c^2}}. \end{equation} Now if we think of the accelerating rocket ship in Fig. 42–17 the emitter and receiver are moving with equal velocities at any one instant. But in the time that it takes the light signals to go from clock $A$ to clock $B$ the ship has accelerated. It has, in fact, picked up the additional velocity $gt$, where $g$ is the acceleration and $t$ is the time it takes light to travel the distance $H$ from $A$ to $B$. This time is very nearly $H/c$. So when the signals arrive at $B$, the ship has increased its velocity by $gH/c$. The receiver always has this velocity with respect to the emitter at the instant the signal left it. So this is the velocity we should use in the Doppler shift formula, Eq. (42.4). Assuming that the acceleration and the length of the ship are small enough that this velocity is much smaller than $c$, we can neglect the term in $v^2/c^2$. We have that \begin{equation} \label{Eq:II:42:5} \omega=\omega_0\biggl(1+\frac{gH}{c^2}\biggr). \end{equation} So for the two clocks in the spaceship we have the relation \begin{equation} \label{Eq:II:42:6} (\text{Rate at the receiver})= (\text{Rate of emission}) \biggl(1+\frac{gH}{c^2}\biggr), \end{equation}
where $H$ is the height of the emitter above the receiver. From the equivalence principle the same result must hold for two clocks separated by the height $H$ in a gravitational field with the free fall acceleration $g$. This is such an important idea we would like to demonstrate that it also follows from another law of physics—from the conservation of energy. We know that the gravitational force on an object is proportional to its mass $M$, which is related to its total internal energy $E$ by $M=E/c^2$. For instance, the masses of nuclei determined from the energies of nuclear reactions which transmute one nucleus into another agree with the masses obtained from atomic weights. Now think of an atom which has a lowest energy state of total energy $E_0$ and a higher energy state $E_1$, and which can go from the state $E_1$ to the state $E_0$ by emitting light. The frequency $\omega$ of the light will be given by \begin{equation} \label{Eq:II:42:7} \hbar\omega=E_1-E_0. \end{equation} Now suppose we have such an atom in the state $E_1$ sitting on the floor, and we carry it from the floor to the height $H$. To do that we must do some work in carrying the mass $m_1=E_1/c^2$ up against the gravitational force. The amount of work done is \begin{equation} \label{Eq:II:42:8} \frac{E_1}{c^2}\,gH. \end{equation} Then we let the atom emit a photon and go into the lower energy state $E_0$. Afterward we carry the atom back to the floor. On the return trip the mass is $E_0/c^2$; we get back the energy \begin{equation} \label{Eq:II:42:9} \frac{E_0}{c^2}\,gH, \end{equation} so we have done a net amount of work equal to \begin{equation} \label{Eq:II:42:10} \Delta U=\frac{E_1-E_0}{c^2}\,gH. 
\end{equation} When the atom emitted the photon it gave up the energy $E_1-E_0$. Now suppose that the photon happened to go down to the floor and be absorbed. How much energy would it deliver there? You might at first think that it would deliver just the energy $E_1-E_0$. But that can’t be right if energy is conserved, as you can see from the following argument. We started with the energy $E_1$ at the floor. When we finish, the energy at the floor level is the energy $E_0$ of the atom in its lower state plus the energy $E_{\text{ph}}$ received from the photon. In the meantime we have had to supply the additional energy $\Delta U$ of Eq. (42.10). If energy is conserved, the energy we end up with at the floor must be greater than we started with by just the work we have done. Namely, we must have that \begin{equation*} E_{\text{ph}}+E_0=E_1+\Delta U, \end{equation*} or \begin{equation} \label{Eq:II:42:11} E_{\text{ph}}=(E_1-E_0)+\Delta U. \end{equation} It must be that the photon does not arrive at the floor with just the energy $E_1-E_0$ it started with, but with a little more energy. Otherwise some energy would have been lost. If we substitute in Eq. (42.11) the $\Delta U$ we got in Eq. (42.10) we get that the photon arrives at the floor with the energy \begin{equation} \label{Eq:II:42:12} E_{\text{ph}}=(E_1-E_0)\biggl(1+\frac{gH}{c^2}\biggr). \end{equation} But a photon of energy $E_{\text{ph}}$ has the frequency $\omega=E_{\text{ph}}/\hbar$. Calling the frequency of the emitted photon $\omega_0$—which is by Eq. (42.7) equal to $(E_1-E_0)/\hbar$—our result in Eq. (42.12) gives again the relation of (42.5) between the frequency of the photon when it is absorbed on the floor and the frequency with which it was emitted. The same result can be obtained in still another way. A photon of frequency $\omega_0$ has the energy $E_0=\hbar\omega_0$. 
Since the energy $E_0$ has the relativistic mass $E_0/c^2$ the photon has a mass (not rest mass) $\hbar\omega_0/c^2$, and is “attracted” by the earth. In falling the distance $H$ it will gain an additional energy $(\hbar\omega_0/c^2)gH$, so it arrives with the energy \begin{equation*} E=\hbar\omega_0\biggl(1+\frac{gH}{c^2}\biggr). \end{equation*} But its frequency after the fall is $E/\hbar$, giving again the result in Eq. (42.5). Our ideas about relativity, quantum physics, and energy conservation all fit together only if Einstein’s predictions about clocks in a gravitational field are right. The frequency changes we are talking about are normally very small. For instance, for an altitude difference of $20$ meters at the earth’s surface the frequency difference is only about two parts in $10^{15}$. However, just such a change has recently been found experimentally using the Mössbauer effect.5 Einstein was perfectly correct. |
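The quoted magnitude is easy to reproduce. A minimal sketch, assuming nothing beyond Eq. (42.5), that evaluates $gH/c^2$ for the 20-meter altitude difference mentioned in the text:

```python
g = 9.81       # m/s^2
H = 20.0       # meters, the altitude difference quoted in the text
c = 2.998e8    # m/s

# Fractional frequency (clock-rate) difference between the two heights
shift = g * H / c**2
print(shift)   # about 2.2e-15, i.e. roughly two parts in 10^15
```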
|
2 | 42 | Curved Space | 7 | The curvature of space-time | Now we want to relate what we have just been talking about to the idea of curved space-time. We have already pointed out that if the time goes at different rates in different places, it is analogous to the curved space of the hot plate. But it is more than an analogy; it means that space-time is curved. Let’s try to do some geometry in space-time. That may at first sound peculiar, but we have often made diagrams of space-time with distance plotted along one axis and time along the other. Suppose we try to make a rectangle in space-time. We begin by plotting a graph of height $H$ versus $t$ as in Fig. 42–18(a). To make the base of our rectangle we take an object which is at rest at the height $H_1$ and follow its world line for $100$ seconds. We get the line $BD$ in part (b) of the figure which is parallel to the $t$-axis. Now let’s take another object which is $100$ feet above the first one at $t=0$. It starts at the point $A$ in Fig. 42–18(c). Now we follow its world line for $100$ seconds as measured by a clock at $A$. The object goes from $A$ to $C$, as shown in part (d) of the figure. But notice that since time goes at a different rate at the two heights—we are assuming that there is a gravitational field—the two points $C$ and $D$ are not simultaneous. If we try to complete the square by drawing a line to the point $C'$ which is $100$ feet above $D$ at the same time, as in Fig. 42–18(e), the pieces don’t fit. And that’s what we mean when we say that space-time is curved. |
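One can put a number on how badly the rectangle fails to close. A sketch using the figures in the text (100 feet of height, 100 seconds) and the earth's field; the gap between $C$ and $C'$ is the extra time the upper clock gains, $gHT/c^2$:

```python
g = 9.81        # m/s^2
H = 30.48       # 100 feet, in meters
T = 100.0       # seconds followed along the world line
c = 2.998e8     # m/s

# The upper clock runs fast by the fraction g*H/c^2, so after 100 s of its
# own time the two world lines disagree in simultaneity by:
gap = g * H * T / c**2
print(gap)      # about 3.3e-13 seconds
```

Tiny, but nonzero: that failure to close is the curvature.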
|
2 | 42 | Curved Space | 8 | Motion in curved space-time | Let’s consider an interesting little puzzle. We have two identical clocks, $A$ and $B$, sitting together on the surface of the earth as in Fig. 42–19. Now we lift clock $A$ to some height $H$, hold it there awhile, and return it to the ground so that it arrives at just the instant when clock $B$ has advanced by $100$ seconds. Then clock $A$ will read something like $107$ seconds, because it was running faster when it was up in the air. Now here is the puzzle. How should we move clock $A$ so that it reads the latest possible time—always assuming that it returns when $B$ reads $100$ seconds? You say, “That’s easy. Just take $A$ as high as you can. Then it will run as fast as possible, and be the latest when you return.” Wrong. You forgot something—we’ve only got $100$ seconds to go up and back. If we go very high, we have to go very fast to get there and back in $100$ seconds. And you mustn’t forget the effect of special relativity which causes moving clocks to slow down by the factor $\sqrt{1-v^2/c^2}$. This relativity effect works in the direction of making clock $A$ read less time than clock $B$. You see that we have a kind of game. If we stand still with clock $A$ we get $100$ seconds. If we go up slowly to a small height and come down slowly we can get a little more than $100$ seconds. If we go a little higher, maybe we can gain a little more. But if we go too high we have to move fast to get there, and we may slow down the clock enough that we end up with less than $100$ seconds. What program of height versus time—how high to go and with what speed to get there, carefully adjusted to bring us back to clock $B$ when it has increased by $100$ seconds—will give us the largest possible time reading on clock $A$? Answer: Find out how fast you have to throw a ball up into the air so that it will fall back to earth in exactly $100$ seconds. 
The ball’s motion—rising fast, slowing down, stopping, and coming back down—is exactly the right motion to make the time the maximum on a wrist watch strapped to the ball. Now consider a slightly different game. We have two points $A$ and $B$ both on the earth’s surface at some distance from one another. We play the same game that we did earlier to find what we call the straight line. We ask how we should go from $A$ to $B$ so that the time on our moving watch will be the longest—assuming we start at $A$ on a given signal and arrive at $B$ on another signal at $B$ which we will say is $100$ seconds later by a fixed clock. Now you say, “Well we found out before that the thing to do is to coast along a straight line at a uniform speed chosen so that we arrive at $B$ exactly $100$ seconds later. If we don’t go along a straight line it takes more speed, and our watch is slowed down.” But wait! That was before we took gravity into account. Isn’t it better to curve upward a little bit and then come down? Then during part of the time we are higher up and our watch will run a little faster? It is, indeed. If you solve the mathematical problem of adjusting the curve of the motion so that the elapsed time of the moving watch is the most it can possibly be, you will find that the motion is a parabola—the same curve followed by something that moves on a free ballistic path in the gravitational field, as in Fig. 42–19. Therefore the law of motion in a gravitational field can also be stated: An object always moves from one place to another so that a clock carried on it gives a longer time than it would on any other possible trajectory—with, of course, the same starting and finishing conditions. The time measured by a moving clock is often called its “proper time.” In free fall, the trajectory makes the proper time of an object a maximum. Let’s see how this all works out. We begin with Eq. 
(42.5) which says that the excess rate of the moving watch is \begin{equation} \label{Eq:II:42:13} \frac{\omega_0gH}{c^2}. \end{equation} Besides this, we have to remember that there is a correction of the opposite sign for the speed. For this effect we know that \begin{equation*} \omega=\omega_0\sqrt{1-v^2/c^2}. \end{equation*} Although the principle is valid for any speed, we take an example in which the speeds are always much less than $c$. Then we can write this equation as \begin{equation*} \omega=\omega_0(1-v^2/2c^2), \end{equation*} and the defect in the rate of our clock is \begin{equation} \label{Eq:III:42:14} -\omega_0\,\frac{v^2}{2c^2}. \end{equation} Combining the two terms in (42.13) and (42.14) we have that \begin{equation} \label{Eq:II:42:15} \Delta\omega=\frac{\omega_0}{c^2}\biggl(gH-\frac{v^2}{2}\biggr). \end{equation} Such a frequency shift of our moving clock means that if we measure a time $dt$ on a fixed clock, the moving clock will register the time \begin{equation} \label{Eq:II:42:16} dt\biggl[ 1+\biggl(\frac{gH}{c^2}-\frac{v^2}{2c^2}\biggr) \biggr]. \end{equation} The total time excess over the trajectory is the integral of the extra term with respect to time, namely \begin{equation} \label{Eq:II:42:17} \frac{1}{c^2}\int\biggl(gH-\frac{v^2}{2}\biggr)\,dt, \end{equation} which is supposed to be a maximum. The term $gH$ is just the gravitational potential $\phi$. Suppose we multiply the whole thing by a constant factor $-mc^2$, where $m$ is the mass of the object. The constant won't change the condition for the maximum, but the minus sign will just change the maximum to a minimum. Equation (42.17) then says that the object will move so that \begin{equation} \label{Eq:II:42:18} \int\biggl(\frac{mv^2}{2}-m\phi\biggr)\,dt= \text{a minimum}. \end{equation} But now the integrand is just the difference of the kinetic and potential energies. 
And if you look in Chapter 19 of Volume II you will see that when we discussed the principle of least action we showed that Newton’s laws for an object in any potential could be written exactly in the form of Eq. (42.18). |
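The claim that the free-fall arc maximizes the integral (42.17) can be tested numerically. A sketch under assumed conditions: take the 100-second ballistic throw $H(t)=v_0t-gt^2/2$ with $v_0=gT/2$, scale it by a factor $a$ (so $a=0$ is standing still, $a=1$ is the ballistic path, $a>1$ overshoots), and evaluate $(1/c^2)\int(gH-v^2/2)\,dt$ for each:

```python
g, c, T = 9.81, 2.998e8, 100.0
v0 = g * T / 2                   # launch speed that brings the ball back in T

def excess_time(a, steps=20000):
    """Proper-time excess (1/c^2) * integral of (g*H - v**2/2) dt for the
    scaled trajectory H_a(t) = a*(v0*t - g*t**2/2), by midpoint quadrature."""
    dt = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        H = a * (v0 * t - g * t**2 / 2)
        v = a * (v0 - g * t)
        total += (g * H - v**2 / 2) * dt
    return total / c**2

results = {a: excess_time(a) for a in (0.0, 0.5, 1.0, 1.5, 2.0)}
best = max(results, key=results.get)
print(best)    # 1.0 -- the free-fall parabola gains the most proper time
```

Analytically the excess is $(a/12-a^2/24)\,g^2T^3/c^2$, which indeed peaks at $a=1$; the gain is only a few times $10^{-11}$ seconds, but it is a maximum.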
|
2 | 42 | Curved Space | 9 | Einstein’s theory of gravitation | Einstein’s form of the equations of motion—that the proper time should be a maximum in curved space-time—gives the same results as Newton’s laws for low velocities. As he was circling around the earth, Gordon Cooper’s watch was reading later than it would have in any other path you could have imagined for his satellite.6 So the law of gravitation can be stated in terms of the ideas of the geometry of space-time in this remarkable way. The particles always take the longest proper time—in space-time a quantity analogous to the “shortest distance.” That’s the law of motion in a gravitational field. The great advantage of putting it this way is that the law doesn’t depend on any coordinates, or any other way of defining the situation. Now let’s summarize what we have done. We have given you two laws for gravity: (1) How the geometry of space-time changes when matter is present: the curvature, expressed in terms of the excess radius of a sphere, is proportional to the mass inside the sphere. (2) How objects move if there are only gravitational forces: they move so that their proper time between the two end conditions is a maximum. Those two laws correspond to similar pairs of laws we have seen earlier. We originally described motion in a gravitational field in terms of Newton’s inverse square law of gravitation and his laws of motion. Now laws (1) and (2) take their places. Our new pair of laws also correspond to what we have seen in electrodynamics. There we had our law—the set of Maxwell’s equations—which determines the fields produced by charges. It tells how the character of “space” is changed by the presence of charged matter, which is what law (1) does for gravity. In addition, we had a law about how particles move in the given fields—$d(m\FLPv)/dt=q(\FLPE+\FLPv\times\FLPB)$. This, for gravity, is done by law (2). In the laws (1) and (2) you have a precise statement of Einstein’s theory of gravitation—although you will usually find it stated in a more complicated mathematical form. We should, however, make one further addition. Just as time scales change from place to place in a gravitational field, so do also the length scales. Rulers change lengths as you move around. 
It is impossible with space and time so intimately mixed to have something happen with time that isn’t in some way reflected in space. Take even the simplest example: You are riding past the earth. What is “time” from your point of view is partly space from our point of view. So there must also be changes in space. It is the entire space-time which is distorted by the presence of matter, and this is more complicated than a change only in time scale. However, the rule that we gave in Eq. (42.3) is enough to determine completely all the laws of gravitation, provided that it is understood that this rule about the curvature of space applies not only from one man’s point of view but is true for everybody. Somebody riding by a mass of material sees a different mass content because of the kinetic energy he calculates for its motion past him, and he must include the mass corresponding to that energy. The theory must be arranged so that everybody—no matter how he moves—will, when he draws a sphere, find that the excess radius is $G/3c^2$ times the total mass (or, better, $G/3c^4$ times the total energy content) inside the sphere. That this law—law (1)—should be true in any moving system is one of the great laws of gravitation, called Einstein’s field equation. The other great law is (2)—that things must move so that the proper time is a maximum—and is called Einstein’s equation of motion. To write these laws in a complete algebraic form, to compare them with Newton’s laws, or to relate them to electrodynamics is difficult mathematically. But it is the way our most complete laws of the physics of gravity look today. Although they gave a result in agreement with Newton’s mechanics for the simple example we considered, they do not always do so. 
The three discrepancies first derived by Einstein have been experimentally confirmed: The orbit of Mercury is not a fixed ellipse; starlight passing near the sun is deflected twice as much as you would think; and the rates of clocks depend on their location in a gravitational field. Whenever the predictions of Einstein have been found to differ from the ideas of Newtonian mechanics Nature has chosen Einstein’s. Let’s summarize everything that we have said in the following way. First, time and distance rates depend on the place in space you measure them and on the time. This is equivalent to the statement that space-time is curved. From the measured area of a sphere we can define a predicted radius, $\sqrt{A/4\pi}$, but the actual measured radius will have an excess over this which is proportional (the constant is $G/3c^2$) to the total mass contained inside the sphere. This fixes the exact degree of the curvature of space-time. And the curvature must be the same no matter who is looking at the matter or how it is moving. Second, particles move on “straight lines” (trajectories of maximum proper time) in this curved space-time. This is the content of Einstein’s formulation of the laws of gravitation. |
|
3 | 1 | Quantum Behavior | 1 | Atomic mechanics | “Quantum mechanics” is the description of the behavior of matter and light in all its details and, in particular, of the happenings on an atomic scale. Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen. Newton thought that light was made up of particles, but then it was discovered that it behaves like a wave. Later, however (in the beginning of the twentieth century), it was found that light did indeed sometimes behave like a particle. Historically, the electron, for example, was thought to behave like a particle, and then it was found that in many respects it behaved like a wave. So it really behaves like neither. Now we have given up. We say: “It is like neither.” There is one lucky break, however—electrons behave just like light. The quantum behavior of atomic objects (electrons, protons, neutrons, photons, and so on) is the same for all, they are all “particle waves,” or whatever you want to call them. So what we learn about the properties of electrons (which we shall use for our examples) will apply also to all “particles,” including photons of light. The gradual accumulation of information about atomic and small-scale behavior during the first quarter of the 20th century, which gave some indications about how small things do behave, produced an increasing confusion which was finally resolved in 1926 and 1927 by Schrödinger, Heisenberg, and Born. They finally obtained a consistent description of the behavior of matter on a small scale. We take up the main features of that description in this chapter. 
Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct, human experience and of human intuition applies to large objects. We know how large objects will act, but things on a small scale just do not act that way. So we have to learn about them in a sort of abstract or imaginative fashion and not by connection with our direct experience. In this chapter we shall tackle immediately the basic element of the mysterious behavior in its most strange form. We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by “explaining” how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics. |
|
3 | 2 | The Relation of Wave and Particle Viewpoints | 1 | Probability wave amplitudes | In this chapter we shall discuss the relationship of the wave and particle viewpoints. We already know, from the last chapter, that neither the wave viewpoint nor the particle viewpoint is correct. We would always like to present things accurately, or at least precisely enough that they will not have to be changed when we learn more—it may be extended, but it will not be changed! But when we try to talk about the wave picture or the particle picture, both are approximate, and both will change. Therefore what we learn in this chapter will not be accurate in a certain sense; we will deal with some half-intuitive arguments which will be made more precise later. But certain things will be changed a little bit when we interpret them correctly in quantum mechanics. We are doing this so that you can have some qualitative feeling for some quantum phenomena before we get into the mathematical details of quantum mechanics. Furthermore, all our experiences are with waves and with particles, and so it is rather handy to use the wave and particle ideas to get some understanding of what happens in given circumstances before we know the complete mathematics of the quantum-mechanical amplitudes. We shall try to indicate the weakest places as we go along, but most of it is very nearly correct—it is just a matter of interpretation. First of all, we know that the new way of representing the world in quantum mechanics—the new framework—is to give an amplitude for every event that can occur, and if the event involves the reception of one particle, then we can give the amplitude to find that one particle at different places and at different times. The probability of finding the particle is then proportional to the absolute square of the amplitude. In general, the amplitude to find a particle in different places at different times varies with position and time. 
In some special case it can be that the amplitude varies sinusoidally in space and time like $e^{i(\omega t-\FLPk\cdot\FLPr)}$, where $\FLPr$ is the vector position from some origin. (Do not forget that these amplitudes are complex numbers, not real numbers.) Such an amplitude varies according to a definite frequency $\omega$ and wave number $\FLPk$. Then it turns out that this corresponds to a classical limiting situation where we would have believed that we have a particle whose energy $E$ was known and is related to the frequency by \begin{equation} \label{Eq:III:2:1} E=\hbar\omega, \end{equation} and whose momentum $\FLPp$ is also known and is related to the wave number by \begin{equation} \label{Eq:III:2:2} \FLPp=\hbar\FLPk. \end{equation} (The symbol $\hbar$ represents the number $h$ divided by $2\pi$; $\hbar=h/2\pi$.) This means that the idea of a particle is limited. The idea of a particle—its location, its momentum, etc.—which we use so much, is in certain ways unsatisfactory. For instance, if an amplitude to find a particle at different places is given by $e^{i(\omega t-\FLPk\cdot\FLPr)}$, whose absolute square is a constant, that would mean that the probability of finding a particle is the same at all points. That means we do not know where it is—it can be anywhere—there is a great uncertainty in its location. On the other hand, if the position of a particle is more or less well known and we can predict it fairly accurately, then the probability of finding it in different places must be confined to a certain region, whose length we call $\Delta x$. Outside this region, the probability is zero. Now this probability is the absolute square of an amplitude, and if the absolute square is zero, the amplitude is also zero, so that we have a wave train whose length is $\Delta x$ (Fig. 2–1), and the wavelength (the distance between nodes of the waves in the train) of that wave train is what corresponds to the particle momentum. 
Here we encounter a strange thing about waves; a very simple thing which has nothing to do with quantum mechanics strictly. It is something that anybody who works with waves, even if he knows no quantum mechanics, knows: namely, we cannot define a unique wavelength for a short wave train. Such a wave train does not have a definite wavelength; there is an indefiniteness in the wave number that is related to the finite length of the train, and thus there is an indefiniteness in the momentum. |
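Equations (2.1) and (2.2) are easy to put numbers into. A minimal sketch (the 500-nm wavelength is just an assumed example) evaluating the momentum and energy that quantum mechanics assigns to a sinusoidal amplitude of that wavelength, here for light, where $\omega=ck$:

```python
import math

h = 6.626e-34           # Planck's constant, J*s
hbar = h / (2 * math.pi)
c = 2.998e8             # m/s

lam = 500e-9            # assumed example: green light, 500 nm
k = 2 * math.pi / lam   # wave number
p = hbar * k            # Eq. (2.2): p = hbar*k, the same as h/lambda
omega = c * k           # dispersion relation for light
E = hbar * omega        # Eq. (2.1): E = hbar*omega

print(p)                # about 1.3e-27 kg m/s
print(E / 1.602e-19)    # about 2.5 eV
```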
|
3 | 2 | The Relation of Wave and Particle Viewpoints | 2 | Measurement of position and momentum | Let us consider two examples of this idea—to see the reason that there is an uncertainty in the position and/or the momentum, if quantum mechanics is right. We have also seen before that if there were not such a thing—if it were possible to measure the position and the momentum of anything simultaneously—we would have a paradox; it is fortunate that we do not have such a paradox, and the fact that such an uncertainty comes naturally from the wave picture shows that everything is mutually consistent. Here is one example which shows the relationship between the position and the momentum in a circumstance that is easy to understand. Suppose we have a single slit, and particles are coming from very far away with a certain energy—so that they are all coming essentially horizontally (Fig. 2–2). We are going to concentrate on the vertical components of momentum. All of these particles have a certain horizontal momentum $p_0$, say, in a classical sense. So, in the classical sense, the vertical momentum $p_y$, before the particle goes through the hole, is definitely known. The particle is moving neither up nor down, because it came from a source that is far away—and so the vertical momentum is of course zero. But now let us suppose that it goes through a hole whose width is $B$. Then after it has come out through the hole, we know the position vertically—the $y$-position—with considerable accuracy—namely $\pm B$.1 That is, the uncertainty in position, $\Delta y$, is of order $B$. Now we might also want to say, since we know the momentum is absolutely horizontal, that $\Delta p_y$ is zero; but that is wrong. We once knew the momentum was horizontal, but we do not know it any more. Before the particles passed through the hole, we did not know their vertical positions. 
Now that we have found the vertical position by having the particle come through the hole, we have lost our information on the vertical momentum! Why? According to the wave theory, there is a spreading out, or diffraction, of the waves after they go through the slit, just as for light. Therefore there is a certain probability that particles coming out of the slit are not coming exactly straight. The pattern is spread out by the diffraction effect, and the angle of spread, which we can define as the angle of the first minimum, is a measure of the uncertainty in the final angle. How does the pattern become spread? To say it is spread means that there is some chance for the particle to be moving up or down, that is, to have a component of momentum up or down. We say chance and particle because we can detect this diffraction pattern with a particle counter, and when the counter receives the particle, say at $C$ in Fig. 2–2, it receives the entire particle, so that, in a classical sense, the particle has a vertical momentum, in order to get from the slit up to $C$. To get a rough idea of the spread of the momentum, the vertical momentum $p_y$ has a spread which is equal to $p_0\,\Delta\theta$, where $p_0$ is the horizontal momentum. And how big is $\Delta\theta$ in the spread-out pattern? We know that the first minimum occurs at an angle $\Delta\theta$ such that the waves from one edge of the slit have to travel one wavelength farther than the waves from the other side—we worked that out before (Chapter 30 of Vol. I). Therefore $\Delta\theta$ is $\lambda/B$, and so $\Delta p_y$ in this experiment is $p_0\lambda/B$. Note that if we make $B$ smaller and make a more accurate measurement of the position of the particle, the diffraction pattern gets wider. So the narrower we make the slit, the wider the pattern gets, and the more is the likelihood that we would find that the particle has sidewise momentum. 
Thus the uncertainty in the vertical momentum is inversely proportional to the uncertainty of $y$. In fact, we see that the product of the two is equal to $p_0\lambda$. But $\lambda$ is the wavelength and $p_0$ is the momentum, and in accordance with quantum mechanics, the wavelength times the momentum is Planck’s constant $h$. So we obtain the rule that the uncertainties in the vertical momentum and in the vertical position have a product of the order $h$: \begin{equation} \label{Eq:III:2:3} \Delta y\,\Delta p_y\geq\hbar/2. \end{equation} We cannot prepare a system in which we know the vertical position of a particle and can predict how it will move vertically with greater certainty than given by (2.3). That is, the uncertainty in the vertical momentum must exceed $\hbar/2\Delta y$, where $\Delta y$ is the uncertainty in our knowledge of the position. Sometimes people say quantum mechanics is all wrong. When the particle arrived from the left, its vertical momentum was zero. And now that it has gone through the slit, its position is known. Both position and momentum seem to be known with arbitrary accuracy. It is quite true that we can receive a particle, and on reception determine what its position is and what its momentum would have had to have been to have gotten there. That is true, but that is not what the uncertainty relation (2.3) refers to. Equation (2.3) refers to the predictability of a situation, not remarks about the past. It does no good to say “I knew what the momentum was before it went through the slit, and now I know the position,” because now the momentum knowledge is lost. The fact that it went through the slit no longer permits us to predict the vertical momentum. We are talking about a predictive theory, not just measurements after the fact. So we must talk about what we can predict. Now let us take the thing the other way around. Let us take another example of the same phenomenon, a little more quantitatively. 
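Before the second example, the single-slit estimate can be put into numbers. A minimal sketch in which the 100-eV electron energy and the 10-nm slit width are assumed examples, not values from the text; whatever numbers you pick, the product $\Delta y\,\Delta p_y=B\cdot p_0\lambda/B$ collapses to $p_0\lambda=h$:

```python
import math

h = 6.626e-34          # Planck's constant, J*s
m_e = 9.109e-31        # electron mass, kg
eV = 1.602e-19         # J per electron volt

E = 100 * eV                  # assumed example: 100-eV electrons
p0 = math.sqrt(2 * m_e * E)   # horizontal momentum
lam = h / p0                  # de Broglie wavelength, about 1.2 angstroms
B = 1e-8                      # assumed slit width: 10 nm

dtheta = lam / B              # angle of the first diffraction minimum
dp_y = p0 * dtheta            # spread in vertical momentum
print(dp_y * B)               # the product Delta_y * Delta_p_y -- just h
```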
In the previous example we measured the momentum by a classical method. Namely, we considered the direction and the velocity and the angles, etc., so we got the momentum by classical analysis. But since momentum is related to wave number, there exists in nature still another way to measure the momentum of a particle—photon or otherwise—which has no classical analog, because it uses Eq. (2.2). We measure the wavelengths of the waves. Let us try to measure momentum in this way. Suppose we have a grating with a large number of lines (Fig. 2–3), and send a beam of particles at the grating. We have often discussed this problem: if the particles have a definite momentum, then we get a very sharp pattern in a certain direction, because of the interference. And we have also talked about how accurately we can determine that momentum, that is to say, what the resolving power of such a grating is. Rather than derive it again, we refer to Chapter 30 of Volume I, where we found that the relative uncertainty in the wavelength that can be measured with a given grating is $1/Nm$, where $N$ is the number of lines on the grating and $m$ is the order of the diffraction pattern. That is, \begin{equation} \label{Eq:III:2:4} \Delta\lambda/\lambda=1/Nm. \end{equation} Now formula (2.4) can be rewritten as \begin{equation} \label{Eq:III:2:5} \Delta\lambda/\lambda^2=1/Nm\lambda=1/L, \end{equation} where $L$ is the distance shown in Fig. 2–3. This distance is the difference between the total distance that the particle or wave or whatever it is has to travel if it is reflected from the bottom of the grating, and the distance that it has to travel if it is reflected from the top of the grating. That is, the waves which form the diffraction pattern are waves which come from different parts of the grating. 
The first ones that arrive come from the bottom end of the grating, from the beginning of the wave train, and the rest of them come from later parts of the wave train, coming from different parts of the grating, until the last one finally arrives, and that involves a point in the wave train a distance $L$ behind the first point. So in order that we shall have a sharp line in our spectrum corresponding to a definite momentum, with an uncertainty given by (2.4), we have to have a wave train of at least length $L$. If the wave train is too short, we are not using the entire grating. The waves which form the spectrum are being reflected from only a very short sector of the grating if the wave train is too short, and the grating will not work right—we will find a big angular spread. In order to get a narrower one, we need to use the whole grating, so that at least at some moment the whole wave train is scattering simultaneously from all parts of the grating. Thus the wave train must be of length $L$ in order to have an uncertainty in the wavelength less than that given by (2.5). Incidentally, \begin{equation} \label{Eq:III:2:6} \Delta\lambda/\lambda^2=\Delta(1/\lambda)=\Delta k/2\pi. \end{equation} Therefore \begin{equation} \label{Eq:III:2:7} \Delta k = 2\pi/L, \end{equation} where $L$ is the length of the wave train. This means that if we have a wave train whose length is less than $L$, the uncertainty in the wave number must exceed $2\pi/L$. Or the uncertainty in a wave number times the length of the wave train—we will call that for a moment $\Delta x$—exceeds $2\pi$. We call it $\Delta x$ because that is the uncertainty in the location of the particle. If the wave train exists only in a finite length, then that is where we could find the particle, within an uncertainty $\Delta x$. 
Now this property of waves, that the length of the wave train times the uncertainty of the wave number associated with it is at least $2\pi$, is a property that is known to everyone who studies them. It has nothing to do with quantum mechanics. It is simply that if we have a finite train, we cannot count the waves in it very precisely. Let us try another way to see the reason for that. Suppose that we have a finite train of length $L$; then because of the way it has to decrease at the ends, as in Fig. 2–1, the number of waves in the length $L$ is uncertain by something like $\pm1$. But the number of waves in $L$ is $kL/2\pi$. Thus $k$ is uncertain, and we again get the result (2.7), a property merely of waves. The same thing works whether the waves are in space and $k$ is the number of radians per centimeter and $L$ is the length of the train, or the waves are in time and $\omega$ is the number of radians per second and $T$ is the “length” in time that the wave train comes in. That is, if we have a wave train lasting only for a certain finite time $T$, then the uncertainty in the frequency is given by \begin{equation} \label{Eq:III:2:8} \Delta\omega=2\pi/T. \end{equation} We have tried to emphasize that these are properties of waves alone, and they are well known, for example, in the theory of sound. The point is that in quantum mechanics we interpret the wave number as being a measure of the momentum of a particle, with the rule that $p=\hbar k$, so that relation (2.7) tells us that $\Delta p\approx h/\Delta x$. This, then, is a limitation of the classical idea of momentum. (Naturally, it has to be limited in some ways if we are going to represent particles by waves!) It is nice that we have found a rule that gives us some idea of when there is a failure of classical ideas. |
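This wave property can be checked numerically. The sketch below (all parameters assumed) Fourier-analyzes a cosine train cut off at length $L$ and verifies that the resulting spectral peak has a half-width, out to its first zeros, of $2\pi/L$, as in Eq. (2.7):

```python
import numpy as np

# Assumed parameters for illustration.
k0, L = 50.0, 2.0                      # central wave number; train length
x = np.linspace(-L / 2, L / 2, 2001)
dx = x[1] - x[0]
train = np.cos(k0 * x)                 # wave train truncated to length L

# Numerical Fourier transform on a grid of wave numbers near k0
k = np.linspace(k0 - 10, k0 + 10, 1001)
amp = (train * np.exp(-1j * np.outer(k, x))).sum(axis=1) * dx
power = np.abs(amp) ** 2

peak = np.argmax(power)                          # sits at k close to k0
zero = peak + np.argmin(power[peak:peak + 200])  # first spectral zero above k0
delta_k = k[zero] - k[peak]

assert abs(delta_k - 2 * np.pi / L) < 0.1        # Delta k ~ 2*pi/L, as in Eq. (2.7)
```

Doubling `L` halves the measured `delta_k`, which is the whole content of the relation: a longer train has a better-defined wave number.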
|
3 | 3 | Probability Amplitudes | 1 | The laws for combining amplitudes | When Schrödinger first discovered the correct laws of quantum mechanics, he wrote an equation which described the amplitude to find a particle in various places. This equation was very similar to the equations that were already known to classical physicists—equations that they had used in describing the motion of air in a sound wave, the transmission of light, and so on. So most of the time at the beginning of quantum mechanics was spent in solving this equation. But at the same time an understanding was being developed, particularly by Born and Dirac, of the basically new physical ideas behind quantum mechanics. As quantum mechanics developed further, it turned out that there were a large number of things which were not directly encompassed in the Schrödinger equation—such as the spin of the electron, and various relativistic phenomena. Traditionally, all courses in quantum mechanics have begun in the same way, retracing the path followed in the historical development of the subject. One first learns a great deal about classical mechanics so that he will be able to understand how to solve the Schrödinger equation. Then he spends a long time working out various solutions. Only after a detailed study of this equation does he get to the “advanced” subject of the electron’s spin. We had also originally considered that the right way to conclude these lectures on physics was to show how to solve the equations of classical physics in complicated situations—such as the description of sound waves in enclosed regions, modes of electromagnetic radiation in cylindrical cavities, and so on. That was the original plan for this course. However, we have decided to abandon that plan and to give instead an introduction to the quantum mechanics. We have come to the conclusion that what are usually called the advanced parts of quantum mechanics are, in fact, quite simple. 
The mathematics that is involved is particularly simple, involving simple algebraic operations and no differential equations or at most only very simple ones. The only problem is that we must jump the gap of no longer being able to describe the behavior in detail of particles in space. So this is what we are going to try to do: to tell you about what conventionally would be called the “advanced” parts of quantum mechanics. But they are, we assure you, by all odds the simplest parts—in a deep sense of the word—as well as the most basic parts. This is frankly a pedagogical experiment; it has never been done before, as far as we know. In this subject we have, of course, the difficulty that the quantum mechanical behavior of things is quite strange. Nobody has an everyday experience to lean on to get a rough, intuitive idea of what will happen. So there are two ways of presenting the subject: We could either describe what can happen in a rather rough physical way, telling you more or less what happens without giving the precise laws of everything; or we could, on the other hand, give the precise laws in their abstract form. But, then because of the abstractions, you wouldn’t know what they were all about, physically. The latter method is unsatisfactory because it is completely abstract, and the first way leaves an uncomfortable feeling because one doesn’t know exactly what is true and what is false. We are not sure how to overcome this difficulty. You will notice, in fact, that Chapters 1 and 2 showed this problem. The first chapter was relatively precise; but the second chapter was a rough description of the characteristics of different phenomena. Here, we will try to find a happy medium between the two extremes. We will begin in this chapter by dealing with some general quantum mechanical ideas. Some of the statements will be quite precise, others only partially precise. 
It will be hard to tell you as we go along which is which, but by the time you have finished the rest of the book, you will understand in looking back which parts hold up and which parts were only explained roughly. The chapters which follow this one will not be so imprecise. In fact, one of the reasons we have tried carefully to be precise in the succeeding chapters is so that we can show you one of the most beautiful things about quantum mechanics—how much can be deduced from so little. We begin by discussing again the superposition of probability amplitudes. As an example we will refer to the experiment described in Chapter 1, and shown again here in Fig. 3–1. There is a source $s$ of particles, say electrons; then there is a wall with two slits in it; after the wall, there is a detector located at some position $x$. We ask for the probability that a particle will be found at $x$. Our first general principle in quantum mechanics is that the probability that a particle will arrive at $x$, when let out at the source $s$, can be represented quantitatively by the absolute square of a complex number called a probability amplitude—in this case, the “amplitude that a particle from $s$ will arrive at $x$.” We will use such amplitudes so frequently that we will use a shorthand notation—invented by Dirac and generally used in quantum mechanics—to represent this idea. We write the probability amplitude this way: \begin{equation} \label{Eq:III:3:1} \braket{\text{Particle arrives at $x$}}{\text{particle leaves $s$}}. \end{equation} In other words, the two brackets $\langle\:\rangle$ are a sign equivalent to “the amplitude that”; the expression at the right of the vertical line always gives the starting condition, and the one at the left, the final condition. Sometimes it will also be convenient to abbreviate still more and describe the initial and final conditions by single letters. 
For example, we may on occasion write the amplitude (3.1) as \begin{equation} \label{Eq:III:3:2} \braket{x}{s}. \end{equation} We want to emphasize that such an amplitude is, of course, just a single number—a complex number. We have already seen in the discussion of Chapter 1 that when there are two ways for the particle to reach the detector, the resulting probability is not the sum of the two probabilities, but must be written as the absolute square of the sum of two amplitudes. We had that the probability that an electron arrives at the detector when both paths are open is \begin{equation} \label{Eq:III:3:3} P_{12}=\abs{\phi_1+\phi_2}^2. \end{equation} We wish now to put this result in terms of our new notation. First, however, we want to state our second general principle of quantum mechanics: When a particle can reach a given state by two possible routes, the total amplitude for the process is the sum of the amplitudes for the two routes considered separately. In our new notation we write that \begin{equation} \label{Eq:III:3:4} \braket{x}{s}_{\text{both holes open}}= \braket{x}{s}_{\text{through $1$}}+ \braket{x}{s}_{\text{through $2$}}. \end{equation} Incidentally, we are going to suppose that the holes $1$ and $2$ are small enough that when we say an electron goes through the hole, we don’t have to discuss which part of the hole. We could, of course, split each hole into pieces with a certain amplitude that the electron goes to the top of the hole and the bottom of the hole and so on. We will suppose that the hole is small enough so that we don’t have to worry about this detail. That is part of the roughness involved; the matter can be made more precise, but we don’t want to do so at this stage. Now we want to write out in more detail what we can say about the amplitude for the process in which the electron reaches the detector at $x$ by way of hole $1$. 
We can do that by using our third general principle: When a particle goes by some particular route the amplitude for that route can be written as the product of the amplitude to go part way with the amplitude to go the rest of the way. For the setup of Fig. 3–1 the amplitude to go from $s$ to $x$ by way of hole $1$ is equal to the amplitude to go from $s$ to $1$, multiplied by the amplitude to go from $1$ to $x$. \begin{equation} \label{Eq:III:3:5} \braket{x}{s}_{\text{via $1$}}=\braket{x}{1}\braket{1}{s}. \end{equation} Again this result is not completely precise. We should also include a factor for the amplitude that the electron will get through the hole at $1$; but in the present case it is a simple hole, and we will take this factor to be unity. You will note that Eq. (3.5) appears to be written in reverse order. It is to be read from right to left: The electron goes from $s$ to $1$ and then from $1$ to $x$. In summary, if events occur in succession—that is, if you can analyze one of the routes of the particle by saying it does this, then it does this, then it does that—the resultant amplitude for that route is calculated by multiplying in succession the amplitude for each of the successive events. Using this law we can rewrite Eq. (3.4) as \begin{equation*} \braket{x}{s}_{\text{both}}=\braket{x}{1}\braket{1}{s}+ \braket{x}{2}\braket{2}{s}. \end{equation*} Now we wish to show that just using these principles we can calculate a much more complicated problem like the one shown in Fig. 3–2. Here we have two walls, one with two holes, $1$ and $2$, and another which has three holes, $a$, $b$, and $c$. Behind the second wall there is a detector at $x$, and we want to know the amplitude for a particle to arrive there. Well, one way you can find this is by calculating the superposition, or interference, of the waves that go through; but you can also do it by saying that there are six possible routes and superposing an amplitude for each. 
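The second and third principles can be tried out with arbitrary complex numbers. In the sketch below every amplitude is simply made up; the point is only the arithmetic: multiply along a route, add over routes, and square at the end.

```python
# All amplitudes below are made-up complex numbers, for illustration only.
amp_s_to_1 = 0.6 + 0.2j     # stands for <1|s>
amp_1_to_x = 0.5 - 0.3j     # stands for <x|1>
amp_s_to_2 = 0.4 - 0.1j     # stands for <2|s>
amp_2_to_x = 0.3 + 0.4j     # stands for <x|2>

phi1 = amp_1_to_x * amp_s_to_1   # route via hole 1: successive amplitudes multiply
phi2 = amp_2_to_x * amp_s_to_2   # route via hole 2
total = phi1 + phi2              # both holes open: route amplitudes add

p_both = abs(total) ** 2                       # Eq. (3.3)
p_separate = abs(phi1) ** 2 + abs(phi2) ** 2   # what adding probabilities would give

# The difference is the interference term, 2*Re(phi1 * conj(phi2)).
assert abs(p_both - p_separate - 2 * (phi1 * phi2.conjugate()).real) < 1e-12
assert abs(p_both - p_separate) > 1e-6
```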
The electron can go through hole $1$, then through hole $a$, and then to $x$; or it could go through hole $1$, then through hole $b$, and then to $x$; and so on. According to our second principle, the amplitudes for alternative routes add, so we should be able to write the amplitude from $s$ to $x$ as a sum of six separate amplitudes. On the other hand, using the third principle, each of these separate amplitudes can be written as a product of three amplitudes. For example, one of them is the amplitude for $s$ to $1$, times the amplitude for $1$ to $a$, times the amplitude for $a$ to $x$. Using our shorthand notation, we can write the complete amplitude to go from $s$ to $x$ as
\begin{align*} \braket{x}{s}&= \braket{x}{a}\braket{a}{1}\braket{1}{s}+ \braket{x}{b}\braket{b}{1}\braket{1}{s}+\\[.75ex] &\dotsb+\braket{x}{c}\braket{c}{2}\braket{2}{s}. \end{align*} We can save writing by using the summation notation \begin{equation} \label{Eq:III:3:6} \braket{x}{s}=\sum_{\substack{i=1,2\\\alpha=a,b,c}} \braket{x}{\alpha}\braket{\alpha}{i}\braket{i}{s}. \end{equation} In order to make any calculations using these methods, it is, naturally, necessary to know the amplitude to get from one place to another. We will give a rough idea of a typical amplitude. It leaves out certain things like the polarization of light or the spin of the electron, but aside from such features it is quite accurate. We give it so that you can solve problems involving various combinations of slits. Suppose a particle with a definite energy is going in empty space from a location $\FLPr_1$ to a location $\FLPr_2$. In other words, it is a free particle with no forces on it. Except for a numerical factor in front, the amplitude to go from $\FLPr_1$ to $\FLPr_2$ is \begin{equation} \label{Eq:III:3:7} \braket{\FLPr_2}{\FLPr_1}=\frac{e^{ipr_{12}/\hbar}}{r_{12}}, \end{equation} where $r_{12}=\abs{\FLPr_2-\FLPr_1}$, and $p$ is the momentum which is related to the energy $E$ by the relativistic equation \begin{equation*} p^2c^2=E^2-(m_0c^2)^2, \end{equation*} or the nonrelativistic equation \begin{equation*} \frac{p^2}{2m}=\text{Kinetic energy}. \end{equation*} Equation (3.7) says in effect that the particle has wavelike properties, the amplitude propagating as a wave with a wave number equal to the momentum divided by $\hbar$. In the most general case, the amplitude and the corresponding probability will also involve the time. For most of these initial discussions we will suppose that the source always emits the particles with a given energy so we will not need to worry about the time. But we could, in the general case, be interested in some other questions. 
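As a sketch of how Eqs. (3.6) and (3.7) work together, the code below assumes an arbitrary geometry for the setup of Fig. 3–2 (all coordinates, the momentum, and the units are made up) and sums the six route amplitudes, ignoring the constant factor in front of Eq. (3.7):

```python
import numpy as np

hbar = 1.0
p = 10.0   # assumed momentum in these arbitrary units

def free_amp(r1, r2):
    # Eq. (3.7), up to an overall numerical factor: exp(i*p*r12/hbar) / r12
    r12 = np.linalg.norm(np.asarray(r2, float) - np.asarray(r1, float))
    return np.exp(1j * p * r12 / hbar) / r12

s = (0.0, 0.0)                                        # source (assumed position)
first_wall = [(5.0, 1.0), (5.0, -1.0)]                # holes 1, 2 (assumed)
second_wall = [(8.0, 2.0), (8.0, 0.0), (8.0, -2.0)]   # holes a, b, c (assumed)
x = (12.0, 0.5)                                       # detector (assumed)

# Eq. (3.6): sum over the six routes, multiplying amplitudes along each route
amplitude = sum(
    free_amp(hole2, x) * free_amp(hole1, hole2) * free_amp(s, hole1)
    for hole1 in first_wall
    for hole2 in second_wall
)
probability = abs(amplitude) ** 2
```

Moving the detector `x` and recomputing traces out the interference pattern behind the second wall.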
Suppose that a particle is liberated at a certain place $P$ at a certain time, and you would like to know the amplitude for it to arrive at some location, say $\FLPr$, at some later time. This could be represented symbolically as the amplitude $\braket{\FLPr,t=t_1}{P,t=0}$. Clearly, this will depend upon both $\FLPr$ and $t$. You will get different results if you put the detector in different places and measure at different times. This function of $\FLPr$ and $t$, in general, satisfies a differential equation which is a wave equation. For example, in a nonrelativistic case it is the Schrödinger equation. One has then a wave equation analogous to the equation for electromagnetic waves or waves of sound in a gas. However, it must be emphasized that the wave function that satisfies the equation is not like a real wave in space; one cannot picture any kind of reality to this wave as one does for a sound wave. Although one may be tempted to think in terms of “particle waves” when dealing with one particle, it is not a good idea, for if there are, say, two particles, the amplitude to find one at $\FLPr_1$ and the other at $\FLPr_2$ is not a simple wave in three-dimensional space, but depends on the six space variables $\FLPr_1$ and $\FLPr_2$. If we are, for example, dealing with two (or more) particles, we will need the following additional principle: Provided that the two particles do not interact, the amplitude that one particle will do one thing and the other one something else is the product of the two amplitudes that the two particles would do the two things separately. For example, if $\braket{a}{s_1}$ is the amplitude for particle $1$ to go from $s_1$ to $a$, and $\braket{b}{s_2}$ is the amplitude for particle $2$ to go from $s_2$ to $b$, the amplitude that both things will happen together is \begin{equation*} \braket{a}{s_1}\braket{b}{s_2}. \end{equation*} There is one more point to emphasize. Suppose that we didn’t know where the particles in Fig. 
3–2 come from before arriving at holes $1$ and $2$ of the first wall. We can still make a prediction of what will happen beyond the wall (for example, the amplitude to arrive at $x$) provided that we are given two numbers: the amplitude to have arrived at $1$ and the amplitude to have arrived at $2$. In other words, because of the fact that the amplitude for successive events multiplies, as shown in Eq. (3.6), all you need to know to continue the analysis is two numbers—in this particular case $\braket{1}{s}$ and $\braket{2}{s}$. These two complex numbers are enough to predict all the future. That is what really makes quantum mechanics easy. It turns out that in later chapters we are going to do just such a thing when we specify a starting condition in terms of two (or a few) numbers. Of course, these numbers depend upon where the source is located and possibly other details about the apparatus, but given the two numbers, we do not need to know any more about such details. |
|
3 | 3 | Probability Amplitudes | 2 | The two-slit interference pattern | Now we would like to consider a matter which was discussed in some detail in Chapter 1. This time we will do it with the full glory of the amplitude idea to show you how it works out. We take the same experiment shown in Fig. 3–1, but now with the addition of a light source behind the two holes, as shown in Fig. 3–3. In Chapter 1, we discovered the following interesting result. If we looked behind slit $1$ and saw a photon scattered from there, then the distribution obtained for the electrons at $x$ in coincidence with these photons was the same as though slit $2$ were closed. The total distribution for electrons that had been “seen” at either slit $1$ or slit $2$ was the sum of the separate distributions and was completely different from the distribution with the light turned off. This was true at least if we used light of short enough wavelength. If the wavelength was made longer so we could not be sure at which hole the scattering had occurred, the distribution became more like the one with the light turned off. Let’s examine what is happening by using our new notation and the principles of combining amplitudes. To simplify the writing, we can again let $\phi_1$ stand for the amplitude that the electron will arrive at $x$ by way of hole $1$, that is, \begin{equation*} \phi_1=\braket{x}{1}\braket{1}{s}. \end{equation*} Similarly, we’ll let $\phi_2$ stand for the amplitude that the electron gets to the detector by way of hole $2$: \begin{equation*} \phi_2=\braket{x}{2}\braket{2}{s}. \end{equation*} These are the amplitudes to go through the two holes and arrive at $x$ if there is no light. Now if there is light, we ask ourselves the question: What is the amplitude for the process in which the electron starts at $s$ and a photon is liberated by the light source $L$, ending with the electron at $x$ and a photon seen behind slit $1$? 
Suppose that we observe the photon behind slit $1$ by means of a detector $D_1$, as shown in Fig. 3–3, and use a similar detector $D_2$ to count photons scattered behind hole $2$. There will be an amplitude for a photon to arrive at $D_1$ and an electron at $x$, and also an amplitude for a photon to arrive at $D_2$ and an electron at $x$. Let’s try to calculate them. Although we don’t have the correct mathematical formula for all the factors that go into this calculation, you will see the spirit of it in the following discussion. First, there is the amplitude $\braket{1}{s}$ that an electron goes from the source to hole $1$. Then we can suppose that there is a certain amplitude that while the electron is at hole $1$ it scatters a photon into the detector $D_1$. Let us represent this amplitude by $a$. Then there is the amplitude $\braket{x}{1}$ that the electron goes from slit $1$ to the electron detector at $x$. The amplitude that the electron goes from $s$ to $x$ via slit $1$ and scatters a photon into $D_1$ is then \begin{equation*} \braket{x}{1}\,a\,\braket{1}{s}. \end{equation*} Or, in our previous notation, it is just $a\phi_1$. There is also some amplitude that an electron going through slit $2$ will scatter a photon into counter $D_1$. You say, “That’s impossible; how can it scatter into counter $D_1$ if it is only looking at hole $1$?” If the wavelength is long enough, there are diffraction effects, and it is certainly possible. If the apparatus is built well and if we use photons of short wavelength, then the amplitude that a photon will be scattered into detector $1$, from an electron at $2$ is very small. But to keep the discussion general we want to take into account that there is always some such amplitude, which we will call $b$. Then the amplitude that an electron goes via slit $2$ and scatters a photon into $D_1$ is \begin{equation*} \braket{x}{2}\,b\,\braket{2}{s}=b\phi_2. 
\end{equation*} The amplitude to find the electron at $x$ and the photon in $D_1$ is the sum of two terms, one for each possible path for the electron. Each term is in turn made up of two factors: first, that the electron went through a hole, and second, that the photon is scattered by such an electron into detector $1$; we have \begin{equation} \label{Eq:III:3:8} \biggl\langle \begin{subarray}{l} \displaystyle \text{electron at $x$}\\[1ex] \displaystyle \text{photon at $D_1$} \end{subarray} \!\! \biggm| \! \begin{subarray}{l} \displaystyle \text{electron from $s$}\\[1ex] \displaystyle \text{photon from $L$} \end{subarray} \biggr\rangle= a\phi_1\!+b\phi_2. \end{equation} We can get a similar expression when the photon is found in the other detector $D_2$. If we assume for simplicity that the system is symmetrical, then $a$ is also the amplitude for a photon in $D_2$ when an electron passes through hole $2$, and $b$ is the amplitude for a photon in $D_2$ when the electron passes through hole $1$. The corresponding total amplitude for a photon at $D_2$ and an electron at $x$ is \begin{equation} \label{Eq:III:3:9} \biggl\langle \begin{subarray}{l} \displaystyle \text{electron at $x$}\\[1ex] \displaystyle \text{photon at $D_2$} \end{subarray} \!\! \biggm| \! \begin{subarray}{l} \displaystyle \text{electron from $s$}\\[1ex] \displaystyle \text{photon from $L$} \end{subarray} \biggr\rangle= a\phi_2\!+b\phi_1. \end{equation} Now we are finished. We can easily calculate the probability for various situations. Suppose that we want to know with what probability we get a count in $D_1$ and an electron at $x$. That will be the absolute square of the amplitude given in Eq. (3.8), namely, just $\abs{a\phi_1+b\phi_2}^2$. Let’s look more carefully at this expression. First of all, if $b$ is zero—which is the way we would like to design the apparatus—then the answer is simply $\abs{\phi_1}^2$ diminished in total amplitude by the factor $\abs{a}^2$. 
This is the probability distribution that you would get if there were only one hole—as shown in the graph of Fig. 3–4(a). On the other hand, if the wavelength is very long, the scattering behind hole $2$ into $D_1$ may be just about the same as for hole $1$. Although there may be some phases involved in $a$ and $b$, we can ask about a simple case in which the two phases are equal. If $a$ is practically equal to $b$, then the total probability becomes $\abs{\phi_1+\phi_2}^2$ multiplied by $\abs{a}^2$, since the common factor $a$ can be taken out. This, however, is just the probability distribution we would have gotten without the photons at all. Therefore, in the case that the wavelength is very long—and the photon detection ineffective—you return to the original distribution curve which shows interference effects, as shown in Fig. 3–4(b). In the case that the detection is partially effective, there is an interference between a lot of $\phi_1$ and a little of $\phi_2$, and you will get an intermediate distribution such as is sketched in Fig. 3–4(c). Needless to say, if we look for coincidence counts of photons at $D_2$ and electrons at $x$, we will get the same kinds of results. If you remember the discussion in Chapter 1, you will see that these results give a quantitative description of what was described there. Now we would like to emphasize an important point so that you will avoid a common error. Suppose that you only want the amplitude that the electron arrives at $x$, regardless of whether the photon was counted at $D_1$ or $D_2$. Should you add the amplitudes given in Eqs. (3.8) and (3.9)? No! You must never add amplitudes for different and distinct final states. Once the photon is accepted by one of the photon counters, we can always determine which alternative occurred if we want, without any further disturbance to the system. Each alternative has a probability completely independent of the other. 
To repeat, do not add amplitudes for different final conditions, where by “final” we mean at that moment the probability is desired—that is, when the experiment is “finished.” You do add the amplitudes for the different indistinguishable alternatives inside the experiment, before the complete process is finished. At the end of the process you may say that you “don’t want to look at the photon.” That’s your business, but you still do not add the amplitudes. Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not. So here we must not add the amplitudes. We first square the amplitudes for all possible different final events and then sum. The correct result for an electron at $x$ and a photon at either $D_1$ or $D_2$ is \begin{gather} \biggl\lvert \biggl\langle \begin{subarray}{l} \displaystyle \text{e at $x$}\\[1ex] \displaystyle \text{ph at $D_1$} \end{subarray}\!\! \biggm|\! \begin{subarray}{l} \displaystyle \text{e from $s$}\\[1ex] \displaystyle \text{ph from $L$} \end{subarray} \biggr\rangle \biggr\rvert^2\!\!+ \biggl\lvert \biggl\langle \begin{subarray}{l} \displaystyle \text{e at $x$}\\[1ex] \displaystyle \text{ph at $D_2$} \end{subarray}\!\! \biggm|\! \begin{subarray}{l} \displaystyle \text{e from $s$}\\[1ex] \displaystyle \text{ph from $L$} \end{subarray} \biggr\rangle \biggr\rvert^2\notag\\[2ex] \label{Eq:III:3:10} =\abs{a\phi_1\!+b\phi_2}^2+\;\abs{a\phi_2\!+b\phi_1}^2. \end{gather} |
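The content of Eq. (3.10) can be sketched with a toy model. Here $\phi_1$ and $\phi_2$ are assumed spherical-wave amplitudes from the two slits (the geometry is entirely made up); with $b=0$ the coincidence pattern with $D_1$ is smooth, and with $b=a$ the interference fringes of $\abs{\phi_1+\phi_2}^2$ reappear:

```python
import numpy as np

# Assumed toy geometry: slits at y = +/- d, screen a distance D away.
k, d, D = 5.0, 1.0, 10.0
x = np.linspace(-5, 5, 1001)            # detector positions on the screen
r1 = np.sqrt(D**2 + (x - d)**2)
r2 = np.sqrt(D**2 + (x + d)**2)
phi1 = np.exp(1j * k * r1) / r1         # toy amplitude via slit 1
phi2 = np.exp(1j * k * r2) / r2         # toy amplitude via slit 2

def pattern(a, b):
    # Eq. (3.10): squared amplitudes for the two distinct photon outcomes, summed
    return np.abs(a * phi1 + b * phi2)**2 + np.abs(a * phi2 + b * phi1)**2

p_resolved = pattern(a=1.0, b=0.0)      # short wavelength: slits distinguishable
p_unresolved = pattern(a=1.0, b=1.0)    # long wavelength: slits indistinguishable

# No fringes in the first case; deep fringes in the second.
assert p_resolved.min() > 0.5 * p_resolved.max()
assert p_unresolved.min() < 0.2 * p_unresolved.max()
```

Intermediate values of `b` give the partially washed-out pattern of Fig. 3–4(c).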
|
3 | 3 | Probability Amplitudes | 3 | Scattering from a crystal | Our next example is a phenomenon in which we have to analyze the interference of probability amplitudes somewhat carefully. We look at the process of the scattering of neutrons from a crystal. Suppose we have a crystal which has a lot of atoms with nuclei at their centers, arranged in a periodic array, and a neutron beam that comes from far away. We can label the various nuclei in the crystal by an index $i$, where $i$ runs over the integers $1$, $2$, $3$, …, $N$, with $N$ equal to the total number of atoms. The problem is to calculate the probability of getting a neutron into a counter with the arrangement shown in Fig. 3–5. For any particular atom $i$, the amplitude that the neutron arrives at the counter $C$ is the amplitude that the neutron gets from the source $S$ to nucleus $i$, multiplied by the amplitude $a$ that it gets scattered there, multiplied by the amplitude that it gets from $i$ to the counter $C$. Let’s write that down:
\begin{gather} \braket{\text{neutron at $C$}}{\text{neutron from $S$}}_{\text{via $i$}}\notag\\[.5ex] \label{Eq:III:3:11} =\braket{C}{i}\,a\,\braket{i}{S}. \end{gather} In writing this equation we have assumed that the scattering amplitude $a$ is the same for all atoms. We have here a large number of apparently indistinguishable routes. They are indistinguishable because a low-energy neutron is scattered from a nucleus without knocking the atom out of its place in the crystal—no “record” is left of the scattering. According to the earlier discussion, the total amplitude for a neutron at $C$ involves a sum of Eq. (3.11) over all the atoms:
\begin{gather} \braket{\text{neutron at $C$}}{\text{neutron from $S$}}\notag\\[.75ex] \label{Eq:III:3:12} =\sum_{i=1}^N\braket{C}{i}\,a\,\braket{i}{S}. \end{gather} Because we are adding amplitudes of scattering from atoms with different space positions, the amplitudes will have different phases giving the characteristic interference pattern that we have already analyzed in the case of the scattering of light from a grating. The neutron intensity as a function of angle in such an experiment is indeed often found to show tremendous variations, with very sharp interference peaks and almost nothing in between—as shown in Fig. 3–6(a). However, for certain kinds of crystals it does not work this way, and there is—along with the interference peaks discussed above—a general background of scattering in all directions. We must try to understand the apparently mysterious reasons for this. Well, we have not considered one important property of the neutron. It has a spin of one-half, and so there are two conditions in which it can be: either spin “up” (say perpendicular to the page in Fig. 3–5) or spin “down.” If the nuclei of the crystal have no spin, the neutron spin doesn’t have any effect. But when the nuclei of the crystal also have a spin, say a spin of one-half, you will observe the background of smeared-out scattering described above. The explanation is as follows. If the neutron has one direction of spin and the atomic nucleus has the same spin, then no change of spin can occur in the scattering process. If the neutron and atomic nucleus have opposite spin, then scattering can occur by two processes, one in which the spins are unchanged and another in which the spin directions are exchanged. This rule for no net change of the sum of the spins is analogous to our classical law of conservation of angular momentum. We can begin to understand the phenomenon if we assume that all the scattering nuclei are set up with spins in one direction. 
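The sharp peaks of Fig. 3–6(a) come directly out of Eq. (3.12). The sketch below assumes a one-dimensional row of equally spaced nuclei and keeps only the relative phases between routes (the common amplitude $a$ and the overall distances drop out of the shape of the pattern):

```python
import numpy as np

# Assumed numbers for illustration.
N = 50                         # number of nuclei
spacing = 1.0                  # lattice spacing
k = 2 * np.pi / 0.4            # neutron wave number (wavelength 0.4, assumed)

theta = np.linspace(-0.5, 0.5, 2001)     # scattering angle, radians
positions = spacing * np.arange(N)

# Far-field phase of route i relative to route 0 is k * x_i * sin(theta);
# Eq. (3.12) then reduces to a sum of these phase factors.
routes = np.exp(1j * k * np.outer(np.sin(theta), positions))
intensity = np.abs(routes.sum(axis=1)) ** 2

# At a peak all N routes are in phase: intensity N^2, versus ~N off-peak.
assert abs(intensity.max() - N**2) < 1e-6
assert np.median(intensity) < N**2 / 10
```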
A neutron with the same spin will scatter with the expected sharp interference distribution. What about one with opposite spin? If it scatters without spin flip, then nothing is changed from the above; but if the two spins flip over in the scattering, we could, in principle, find out which nucleus had done the scattering, since it would be the only one with spin turned over. Well, if we can tell which atom did the scattering, what have the other atoms got to do with it? Nothing, of course. The scattering is exactly the same as that from a single atom. To include this effect, the mathematical formulation of Eq. (3.12) must be modified since we haven’t described the states completely in that analysis. Let’s start with all neutrons from the source having spin up and all the nuclei of the crystal having spin down. First, we would like the amplitude that at the counter the spin of the neutron is up and all spins of the crystal are still down. This is not different from our previous discussion. We will let $a$ be the amplitude to scatter with no spin flip. The amplitude for scattering from the $i$th atom is, of course, \begin{equation*} \braket{\text{$C_{\text{up}}$, crystal all down}} {\text{$S_{\text{up}}$, crystal all down}}= \braket{C}{i}\,a\,\braket{i}{S}. \end{equation*}
Since all the atomic spins are still down, the various alternatives (different values of $i$) cannot be distinguished. There is clearly no way to tell which atom did the scattering. For this process, all the amplitudes interfere. We have another case, however, where the spin of the detected neutron is down although it started from $S$ with spin up. In the crystal, one of the spins must be changed to the up direction—let’s say that of the $k$th atom. We will assume that there is the same scattering amplitude with spin flip for every atom, namely $b$. (In a real crystal there is the disagreeable possibility that the reversed spin moves to some other atom, but let’s take the case of a crystal for which this probability is very low.) The scattering amplitude is then \begin{equation} \label{Eq:III:3:13} \braket{\text{$C_{\text{down}}$, nucleus $k$ up}} {\text{$S_{\text{up}}$, crystal all down}}= \braket{C}{k}\,b\,\braket{k}{S}. \end{equation}
If we ask for the probability of finding the neutron spin down and the $k$th nucleus spin up, it is equal to the absolute square of this amplitude, which is simply $\abs{b}^2$ times $\abs{\braket{C}{k}\braket{k}{S}}^2$. The second factor is almost independent of location in the crystal, and all phases have disappeared in taking the absolute square. The probability of scattering from any nucleus in the crystal with spin flip is now \begin{equation*} \abs{b}^2\sum_{k=1}^N\abs{\braket{C}{k}\braket{k}{S}}^2, \end{equation*} which will show a smooth distribution as in Fig. 3–6(b). You may argue, “I don’t care which atom is up.” Perhaps you don’t, but nature knows; and the probability is, in fact, what we gave above—there isn’t any interference. On the other hand, if we ask for the probability that the spin is up at the detector and all the atoms still have spin down, then we must take the absolute square of \begin{equation*} \sum_{i=1}^N\braket{C}{i}\,a\,\braket{i}{S}. \end{equation*} Since the terms in this sum have phases, they do interfere, and we get a sharp interference pattern. If we do an experiment in which we don’t observe the spin of the detected neutron, then both kinds of events can occur; and the separate probabilities add. The total probability (or counting rate) as a function of angle then looks like the graph in Fig. 3–6(c). Let’s review the physics of this experiment. If you could, in principle, distinguish the alternative final states (even though you do not bother to do so), the total, final probability is obtained by calculating the probability for each state (not the amplitude) and then adding them together. 
If you cannot distinguish the final states even in principle, then the probability amplitudes must be summed before taking the absolute square to find the actual probability. The thing you should notice particularly is that if you were to try to represent the neutron by a wave alone, you would get the same kind of distribution for the scattering of a down-spinning neutron as for an up-spinning neutron. You would have to say that the “wave” would come from all the different atoms and interfere just as for the up-spinning one with the same wavelength. But we know that is not the way it works. So as we stated earlier, we must be careful not to attribute too much reality to the waves in space. They are useful for certain problems but not for all. |
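The counting rate of Fig. 3–6(c) is just the sum of the two pieces: a coherent part, where the no-spin-flip amplitudes add before squaring, and an incoherent part, where the spin-flip probabilities add. A minimal numerical sketch — a one-dimensional row of $N$ nuclei with invented amplitudes $a$ and $b$, not values from the text:

```python
import numpy as np

# Assumed toy parameters: N equally spaced nuclei, spacing d, neutron wavenumber k.
N, d, k = 20, 1.0, 2 * np.pi
a, b = 0.7, 0.3   # no-flip and spin-flip scattering amplitudes (invented values)

def counting_rate(theta):
    """Total rate at angle theta: interference peaks plus smooth background."""
    n = np.arange(N)
    phases = np.exp(1j * k * d * n * np.sin(theta))  # relative phases of <C|i><i|S>
    coherent = abs(a * phases.sum()) ** 2            # no flip: add amplitudes, then square
    incoherent = N * abs(b) ** 2                     # flip: add probabilities, no phases
    return coherent + incoherent

# At theta = 0 all phases agree: a peak of N^2|a|^2 on top of the N|b|^2 background.
print(counting_rate(0.0))
```

Scanning `theta` over a range reproduces the qualitative picture: sharp peaks wherever the phases line up, and a flat floor of `N * |b|**2` everywhere else.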
|
3 | 4 | Identical Particles | 1 | Bose particles and Fermi particles | In the last chapter we began to consider the special rules for the interference that occurs in processes with two identical particles. By identical particles we mean things like electrons which can in no way be distinguished one from another. If a process involves two particles that are identical, reversing which one arrives at a counter is an alternative which cannot be distinguished and—like all cases of alternatives which cannot be distinguished—interferes with the original, unexchanged case. The amplitude for an event is then the sum of the two interfering amplitudes; but, interestingly enough, the interference is in some cases with the same phase and, in others, with the opposite phase. Suppose we have a collision of two particles $a$ and $b$ in which particle $a$ scatters in the direction $1$ and particle $b$ scatters in the direction $2$, as sketched in Fig. 4–1(a). Let’s call $f(\theta)$ the amplitude for this process; then the probability $P_1$ of observing such an event is proportional to $\abs{f(\theta)}^2$. Of course, it could also happen that particle $b$ scattered into counter $1$ and particle $a$ went into counter $2$, as shown in Fig. 4–1(b). Assuming that there are no special directions defined by spins or such, the probability $P_2$ for this process is just $\abs{f(\pi-\theta)}^2$, because it is just equivalent to the first process with counter $1$ moved over to the angle $\pi-\theta$. You might also think that the amplitude for the second process is just $f(\pi-\theta)$. But that is not necessarily so, because there could be an arbitrary phase factor. That is, the amplitude could be \begin{equation*} e^{i\delta}f(\pi-\theta). \end{equation*} Such an amplitude still gives a probability $P_2$ equal to $\abs{f(\pi-\theta)}^2$. Now let’s see what happens if $a$ and $b$ are identical particles. Then the two different processes shown in the two diagrams of Fig. 
4–1 cannot be distinguished. There is an amplitude that either $a$ or $b$ goes into counter $1$, while the other goes into counter $2$. This amplitude is the sum of the amplitudes for the two processes shown in Fig. 4–1. If we call the first one $f(\theta)$, then the second one is $e^{i\delta}f(\pi-\theta)$, where now the phase factor is very important because we are going to be adding two amplitudes. Suppose we have to multiply the amplitude by a certain phase factor when we exchange the roles of the two particles. If we exchange them again we should get the same factor again. But we are then back to the first process. The phase factor taken twice must bring us back where we started—its square must be equal to $1$. There are only two possibilities: $e^{i\delta}$ is equal to $+1$, or is equal to $-1$. Either the exchanged case contributes with the same sign, or it contributes with the opposite sign. Both cases exist in nature, each for a different class of particles. Particles which interfere with a positive sign are called Bose particles and those which interfere with a negative sign are called Fermi particles. The Bose particles are the photon, the mesons, and the graviton. The Fermi particles are the electron, the muon, the neutrinos, the nucleons, and the baryons. We have, then, that the amplitude for the scattering of identical particles is:
Bose particles: \begin{equation} \label{Eq:III:4:1} (\text{Amplitude direct})+(\text{Amplitude exchanged}). \end{equation} Fermi particles: \begin{equation} \label{Eq:III:4:2} (\text{Amplitude direct})-(\text{Amplitude exchanged}). \end{equation} For particles with spin—like electrons—there is an additional complication. We must specify not only the location of the particles but the direction of their spins. It is only for identical particles with identical spin states that the amplitudes interfere when the particles are exchanged. If you think of the scattering of unpolarized beams—which are a mixture of different spin states—there is some extra arithmetic. Now an interesting problem arises when there are two or more particles bound tightly together. For example, an $\alpha$-particle has four particles in it—two neutrons and two protons. When two $\alpha$-particles scatter, there are several possibilities. It may be that during the scattering there is a certain amplitude that one of the neutrons will leap across from one $\alpha$-particle to the other, while a neutron from the other $\alpha$-particle leaps the other way so that the two alphas which come out of the scattering are not the original ones—there has been an exchange of a pair of neutrons. See Fig. 4–2. The amplitude for scattering with an exchange of a pair of neutrons will interfere with the amplitude for scattering with no such exchange, and the interference must be with a minus sign because there has been an exchange of one pair of Fermi particles. On the other hand, if the relative energy of the two $\alpha$-particles is so low that they stay fairly far apart—say, due to the Coulomb repulsion—and there is never any appreciable probability of exchanging any of the internal particles, we can consider the $\alpha$-particle as a simple object, and we do not need to worry about its internal details. In such circumstances, there are only two contributions to the scattering amplitude. 
Either there is no exchange, or all four of the nucleons are exchanged in the scattering. Since the protons and the neutrons in the $\alpha$-particle are all Fermi particles, an exchange of any pair reverses the sign of the scattering amplitude. So long as there are no internal changes in the $\alpha$-particles, interchanging the two $\alpha$-particles is the same as interchanging four pairs of Fermi particles. There is a change in sign for each pair, so the net result is that the amplitudes combine with a positive sign. The $\alpha$-particle behaves like a Bose particle. So the rule is that composite objects, in circumstances in which the composite object can be considered as a single object, behave like Fermi particles or Bose particles, depending on whether they contain an odd number or an even number of Fermi particles. All the elementary Fermi particles we have mentioned—such as the electron, the proton, the neutron, and so on—have a spin $j=1/2$. If several such Fermi particles are put together to form a composite object, the resulting spin may be either integral or half-integral. For example, the common isotope of helium, He$^4$, which has two neutrons and two protons, has a spin of zero, whereas Li$^7$, which has three protons and four neutrons, has a spin of $3/2$. We will learn later the rules for compounding angular momentum, and will just mention now that every composite object which has a half-integral spin imitates a Fermi particle, whereas every composite object with an integral spin imitates a Bose particle. This brings up an interesting question: Why is it that particles with half-integral spin are Fermi particles whose amplitudes add with the minus sign, whereas particles with integral spin are Bose particles whose amplitudes add with the positive sign? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. 
He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world. |
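The sign rules of Eqs. (4.1) and (4.2) have a consequence that is easy to check numerically: at $\theta=\pi/2$ the direct and exchanged amplitudes are equal, so identical Fermi particles in the same spin state cannot scatter there at all, while Bose particles scatter twice as strongly as distinguishable particles would. A sketch with an invented amplitude $f(\theta)$:

```python
import cmath, math

def f(theta):
    # Invented direct amplitude; any smooth complex function illustrates the point.
    return cmath.exp(1j * theta) * math.cos(theta / 2)

def prob(theta, sign):
    """|f(theta) ± f(pi - theta)|^2; sign = +1 for Bose, -1 for Fermi."""
    return abs(f(theta) + sign * f(math.pi - theta)) ** 2

theta = math.pi / 2
p_dist = abs(f(theta)) ** 2 + abs(f(math.pi - theta)) ** 2  # distinguishable case
print(prob(theta, +1))   # Bose: twice p_dist
print(prob(theta, -1))   # Fermi: exactly zero
```
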
|
3 | 4 | Identical Particles | 2 | States with two Bose particles | Now we would like to discuss an interesting consequence of the addition rule for Bose particles. It has to do with their behavior when there are several particles present. We begin by considering a situation in which two Bose particles are scattered from two different scatterers. We won’t worry about the details of the scattering mechanism. We are interested only in what happens to the scattered particles. Suppose we have the situation shown in Fig. 4–3. The particle $a$ is scattered into the state $1$. By a state we mean a given direction and energy, or some other given condition. The particle $b$ is scattered into the state $2$. We want to assume that the two states $1$ and $2$ are nearly the same. (What we really want to find out eventually is the amplitude that the two particles are scattered into identical directions, or states; but it is best if we think first about what happens if the states are almost the same and then work out what happens when they become identical.) Suppose that we had only particle $a$; then it would have a certain amplitude for scattering in direction $1$, say $\braket{1}{a}$. And particle $b$ alone would have the amplitude $\braket{2}{b}$ for landing in direction $2$. If the two particles are not identical, the amplitude for the two scatterings to occur at the same time is just the product \begin{equation*} \braket{1}{a}\braket{2}{b}. \end{equation*} The probability for such an event is then \begin{equation*} \abs{\braket{1}{a}\braket{2}{b}}^2, \end{equation*} which is also equal to \begin{equation*} \abs{\braket{1}{a}}^2\abs{\braket{2}{b}}^2. \end{equation*} To save writing for the present arguments, we will sometimes set \begin{equation*} \braket{1}{a}=a_1,\quad \braket{2}{b}=b_2. \end{equation*} Then the probability of the double scattering is \begin{equation*} \abs{a_1}^2\abs{b_2}^2. 
\end{equation*} It could also happen that particle $b$ is scattered into direction $1$, while particle $a$ goes into direction $2$. The amplitude for this process is \begin{equation*} \braket{2}{a}\braket{1}{b}, \end{equation*} and the probability of such an event is \begin{equation*} \abs{\braket{2}{a}\braket{1}{b}}^2= \abs{a_2}^2\abs{b_1}^2. \end{equation*} Imagine now that we have a pair of tiny counters that pick up the two scattered particles. The probability $P_2$ that they will pick up two particles together is just the sum \begin{equation} \label{Eq:III:4:3} P_2=\abs{a_1}^2\abs{b_2}^2+\abs{a_2}^2\abs{b_1}^2. \end{equation} Now let’s suppose that the directions $1$ and $2$ are very close together. We expect that $a$ should vary smoothly with direction, so $a_1$ and $a_2$ must approach each other as $1$ and $2$ get close together. If they are close enough, the amplitudes $a_1$ and $a_2$ will be equal. We can set $a_1=a_2$ and call them both just $a$; similarly, we set $b_1=b_2=b$. Then we get that \begin{equation} \label{Eq:III:4:4} P_2=2\abs{a}^2\abs{b}^2. \end{equation} Now suppose, however, that $a$ and $b$ are identical Bose particles. Then the process of $a$ going into $1$ and $b$ going into $2$ cannot be distinguished from the exchanged process in which $a$ goes into $2$ and $b$ goes into $1$. In this case the amplitudes for the two different processes can interfere. The total amplitude to obtain a particle in each of the two counters is \begin{equation} \label{Eq:III:4:5} \braket{1}{a}\braket{2}{b}+\braket{2}{a}\braket{1}{b}. \end{equation} And the probability that we get a pair is the absolute square of this amplitude, \begin{equation} \label{Eq:III:4:6} P_2=\abs{a_1b_2+a_2b_1}^2=4\abs{a}^2\abs{b}^2. \end{equation} We have the result that it is twice as likely to find two identical Bose particles scattered into the same state as you would calculate assuming the particles were different. 
Although we have been considering that the two particles are observed in separate counters, this is not essential—as we can see in the following way. Let’s imagine that both the directions $1$ and $2$ would bring the particles into a single small counter which is some distance away. We will let the direction $1$ be defined by saying that it heads toward the element of area $dS_1$ of the counter. Direction $2$ heads toward the surface element $dS_2$ of the counter. (We imagine that the counter presents a surface at right angles to the line from the scatterings.) Now we cannot give a probability that a particle will go into a precise direction or to a particular point in space. Such a thing is impossible—the chance for any exact direction is zero. When we want to be so specific, we shall have to define our amplitudes so that they give the probability of arriving per unit area of a counter. Suppose that we had only particle $a$; it would have a certain amplitude for scattering in direction $1$. Let’s define $\braket{1}{a}=a_1$ to be the amplitude that $a$ will scatter into a unit area of the counter in the direction $1$. In other words, the scale of $a_1$ is chosen—we say it is “normalized” so that the probability that it will scatter into an element of area $dS_1$ is \begin{equation} \label{Eq:III:4:7} \abs{\braket{1}{a}}^2\,dS_1=\abs{a_1}^2\,dS_1. \end{equation} If our counter has the total area $\Delta S$, and we let $dS_1$ range over this area, the total probability that the particle $a$ will be scattered into the counter is \begin{equation} \label{Eq:III:4:8} \int_{\Delta S}\abs{a_1}^2\,dS_1. \end{equation} As before, we want to assume that the counter is sufficiently small so that the amplitude $a_1$ doesn’t vary significantly over the surface of the counter; $a_1$ is then a constant amplitude which we can call $a$. Then the probability that particle $a$ is scattered somewhere into the counter is \begin{equation} \label{Eq:III:4:9} p_a=\abs{a}^2\,\Delta S. 
\end{equation} In the same way, we will have that the probability that particle $b$—when it is alone—scatters into some element of area, say $dS_2$, is \begin{equation*} \abs{b_2}^2\,dS_2. \end{equation*} (We use $dS_2$ instead of $dS_1$ because we will later want $a$ and $b$ to go into different directions.) Again we set $b_2$ equal to the constant amplitude $b$; then the probability that particle $b$ is counted in the detector is \begin{equation} \label{Eq:III:4:10} p_b=\abs{b}^2\,\Delta S. \end{equation} Now when both particles are present, the probability that $a$ is scattered into $dS_1$ and $b$ is scattered into $dS_2$ is \begin{equation} \label{Eq:III:4:11} \abs{a_1b_2}^2\,dS_1\,dS_2=\abs{a}^2\abs{b}^2\,dS_1\,dS_2. \end{equation} If we want the probability that both $a$ and $b$ get into the counter, we integrate both $dS_1$ and $dS_2$ over $\Delta S$ and find that \begin{equation} \label{Eq:III:4:12} P_2=\abs{a}^2\abs{b}^2\,(\Delta S)^2. \end{equation} We notice, incidentally, that this is just equal to $p_a\cdot p_b$, just as you would suppose assuming that the particles $a$ and $b$ act independently of each other. When the two particles are identical, however, there are two indistinguishable possibilities for each pair of surface elements $dS_1$ and $dS_2$. Particle $a$ going into $dS_2$ and particle $b$ going into $dS_1$ is indistinguishable from $a$ into $dS_1$ and $b$ into $dS_2$, so the amplitudes for these processes will interfere. (When we had two different particles above—although we did not in fact care which particle went where in the counter—we could, in principle, have found out; so there was no interference. For identical particles we cannot tell, even in principle.) We must write, then, that the probability that the two particles arrive at $dS_1$ and $dS_2$ is \begin{equation} \label{Eq:III:4:13} \abs{a_1b_2+a_2b_1}^2\,dS_1\,dS_2. \end{equation} Now, however, when we integrate over the area of the counter, we must be careful. 
If we let $dS_1$ and $dS_2$ range over the whole area $\Delta S$, we would count each part of the area twice since (4.13) contains everything that can happen with any pair of surface elements $dS_1$ and $dS_2$. We can still do the integral that way, if we correct for the double counting by dividing the result by $2$. We get then that $P_2$ for identical Bose particles is \begin{equation} \label{Eq:III:4:14} P_2(\text{Bose})=\tfrac{1}{2}\{4\abs{a}^2\abs{b}^2\,(\Delta S)^2\}= 2\abs{a}^2\abs{b}^2\,(\Delta S)^2. \end{equation}
Again, this is just twice what we got in Eq. (4.12) for distinguishable particles. If we imagine for a moment that we knew that the $b$ channel had already sent its particle into the particular direction, we can say that the probability that a second particle will go into the same direction is twice as great as we would have expected if we had calculated it as an independent event. It is a property of Bose particles that if there is already one particle in a condition of some kind, the probability of getting a second one in the same condition is twice as great as it would be if the first one were not already there. This fact is often stated in the following way: If there is already one Bose particle in a given state, the amplitude for putting an identical one on top of it is $\sqrt{2}$ greater than if it weren’t there. (This is not a proper way of stating the result from the physical point of view we have taken, but if it is used consistently as a rule, it will, of course, give the correct result.) |
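The step from Eq. (4.4) to Eq. (4.6) is easy to verify with arbitrary complex numbers; the amplitudes below are invented:

```python
# Two Bose particles into nearly identical states, so a1 = a2 = a and b1 = b2 = b.
a = 0.6 + 0.2j   # <1|a> = <2|a>, an assumed value
b = 0.1 - 0.5j   # <1|b> = <2|b>

p_different = 2 * abs(a * b) ** 2   # Eq. (4.4): the two probabilities add
p_bose = abs(a * b + a * b) ** 2    # Eq. (4.6): amplitudes add first, then square

print(p_bose / p_different)   # → 2.0 for any nonzero a, b
```
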
|
3 | 4 | Identical Particles | 3 | States with $\boldsymbol{n}$ Bose particles | Let’s extend our result to a situation in which there are $n$ particles present. We imagine the circumstance shown in Fig. 4–4. We have $n$ particles $a$, $b$, $c$, …, which are scattered and end up in the directions $1$, $2$, $3$, …, $n$. All $n$ directions are headed toward a small counter a long distance away. As in the last section, we choose to normalize all the amplitudes so that the probability that each particle acting alone would go into an element of surface $dS$ of the counter is \begin{equation*} \abs{\langle\quad\rangle}^2\,dS. \end{equation*} First, let’s assume that the particles are all distinguishable; then the probability that $n$ particles will be counted together in $n$ different surface elements is \begin{equation} \label{Eq:III:4:15} \abs{a_1b_2c_3\dotsm}^2\,dS_1\,dS_2\,dS_3\dotsm \end{equation} Again we take that the amplitudes don’t depend on where $dS$ is located in the counter (assumed small) and call them simply $a$, $b$, $c$, … The probability (4.15) becomes \begin{equation} \label{Eq:III:4:16} \abs{a}^2\abs{b}^2\abs{c}^2\dotsm dS_1\,dS_2\,dS_3\dotsm \end{equation} Integrating each $dS$ over the surface $\Delta S$ of the counter, we have that $P_n(\text{different})$, the probability of counting $n$ different particles at once, is \begin{equation} \label{Eq:III:4:17} P_n(\text{different})=\abs{a}^2\abs{b}^2\abs{c}^2\dotsm(\Delta S)^n. \end{equation} This is just the product of the probabilities for each particle to enter the counter separately. They all act independently—the probability for one to enter does not depend on how many others are also entering. Now suppose that all the particles are identical Bose particles. For each set of directions $1$, $2$, $3$, … there are many indistinguishable possibilities. 
If there were, for instance, just three particles, we would have the following possibilities: \begin{alignat*}{3} a\!&\to\!1&\quad a\!&\to\!1&\quad a\!&\to\!2\\ b\!&\to\!2&\quad b\!&\to\!3&\quad b\!&\to\!1\\ c\!&\to\!3&\quad c\!&\to\!2&\quad c\!&\to\!3\\[1ex] a\!&\to\!2&\quad a\!&\to\!3&\quad a\!&\to\!3\\ b\!&\to\!3&\quad b\!&\to\!1&\quad b\!&\to\!2\\ c\!&\to\!1&\quad c\!&\to\!2&\quad c\!&\to\!1 \end{alignat*} There are six different combinations. With $n$ particles, there are $n!$ different, but indistinguishable, possibilities for which we must add amplitudes. The probability that $n$ particles will be counted in $n$ surface elements is then \begin{align} \lvert a_1b_2c_3\dotsm&+a_1b_3c_2\dotsm+a_2b_1c_3\dotsm\notag\\ \label{Eq:III:4:18} &+a_2b_3c_1\dotsm+\text{etc.}+\text{etc.}\rvert^2 dS_1\,dS_2\,dS_3\dotsm dS_n. \end{align}
Once more we assume that all the directions are so close that we can set $a_1=$ $a_2=$ $\dotsb=$ $a_n=$ $a$, and similarly for $b$, $c$, …; the probability of (4.18) becomes \begin{equation} \label{Eq:III:4:19} \abs{n!abc\dotsm}^2dS_1\,dS_2\dotsm dS_n. \end{equation} When we integrate each $dS$ over the area $\Delta S$ of the counter, each possible product of surface elements is counted $n!$ times; we correct for this by dividing by $n!$ and get \begin{equation} P_n(\text{Bose}) =\frac{1}{n!}\,\abs{n!abc\dotsm}^2(\Delta S)^n\notag \end{equation} or \begin{equation} \label{Eq:III:4:20} P_n(\text{Bose}) =n!\,\abs{abc\dotsm}^2(\Delta S)^n. \end{equation} Comparing this result with Eq. (4.17), we see that the probability of counting $n$ Bose particles together is $n!$ greater than we would calculate assuming that the particles were all distinguishable. We can summarize our result this way: \begin{equation} \label{Eq:III:4:21} P_n(\text{Bose})=n!\,P_n(\text{different}). \end{equation} Thus, the probability in the Bose case is larger by $n!$ than you would calculate assuming that the particles acted independently. We can see better what this means if we ask the following question: What is the probability that a Bose particle will go into a particular state when there are already $n$ others present? Let’s call the newly added particle $w$. If we have $(n+1)$ particles, including $w$, Eq. (4.20) becomes \begin{equation} \label{Eq:III:4:22} P_{n+1}(\text{Bose})=(n+1)!\,\abs{abc\dotsm w}^2(\Delta S)^{n+1}. \end{equation} We can write this as \begin{equation} P_{n+1}(\text{Bose})=\{(n+1)\abs{w}^2\,\Delta S\}n!\,\abs{abc\dotsm}^2(\Delta S)^n\notag \end{equation} or \begin{equation} \label{Eq:III:4:23} P_{n+1}(\text{Bose})=(n+1)\abs{w}^2\,\Delta S\,P_n(\text{Bose}). 
\end{equation} We can look at this result in the following way: The number $\abs{w}^2\,\Delta S$ is the probability for getting particle $w$ into the detector if no other particles were present; $P_n(\text{Bose})$ is the chance that there are already $n$ other Bose particles present. So Eq. (4.23) says that when there are $n$ other identical Bose particles present, the probability that one more particle will enter the same state is enhanced by the factor $(n+1)$. The probability of getting a boson, where there are already $n$, is $(n+1)$ times stronger than it would be if there were none before. The presence of the other particles increases the probability of getting one more. |
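The $n!$ of Eq. (4.21) can be checked by brute force: add the amplitude for every assignment of particles to directions (all $n!$ permutations), square, and divide by the $n!$ overcounting of surface elements, exactly as in the derivation of Eq. (4.20). A sketch with equal, invented amplitudes:

```python
import itertools, math
from functools import reduce
from operator import mul

def prod(xs):
    """Product of a sequence of (possibly complex) numbers."""
    return reduce(mul, xs, 1)

n = 4
amps = [0.5 - 0.3j] * n   # a = b = c = d, taken equal over a small counter (assumed)

# Distinguishable particles: the probabilities just multiply (Eq. 4.17, per (ΔS)^n).
p_different = prod(abs(x) ** 2 for x in amps)

# Bose particles: add the amplitude for each of the n! indistinguishable
# assignments, square, then divide by n! for the overcounted area elements.
total = sum(prod(amps[i] for i in perm)
            for perm in itertools.permutations(range(n)))
p_bose = abs(total) ** 2 / math.factorial(n)

print(p_bose / p_different)   # → n! = 24 (up to rounding)
```
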
|
3 | 4 | Identical Particles | 4 | Emission and absorption of photons | Throughout our discussion we have talked about a process like the scattering of $\alpha$-particles. But that is not essential; we could have been speaking of the creation of particles, as for instance the emission of light. When the light is emitted, a photon is “created.” In such a case, we don’t need the incoming lines in Fig. 4–4; we can consider merely that there are some atoms emitting $n$ photons, as in Fig. 4–5. So our result can also be stated: The probability that an atom will emit a photon into a particular final state is increased by the factor $(n+1)$ if there are already $n$ photons in that state. People like to summarize this result by saying that the amplitude to emit a photon is increased by the factor $\sqrt{n+1}$ when there are already $n$ photons present. It is, of course, another way of saying the same thing if it is understood to mean that this amplitude is just to be squared to get the probability. It is generally true in quantum mechanics that the amplitude to get from any condition $\phi$ to any other condition $\chi$ is the complex conjugate of the amplitude to get from $\chi$ to $\phi$: \begin{equation} \label{Eq:III:4:24} \braket{\chi}{\phi}=\braket{\phi}{\chi}\cconj. \end{equation} We will learn about this law a little later, but for the moment we will just assume it is true. We can use it to find out how photons are scattered or absorbed out of a given state. We have that the amplitude that a photon will be added to some state, say $i$, when there are already $n$ photons present is, say, \begin{equation} \label{Eq:III:4:25} \braket{n+1}{n}=\sqrt{n+1}\,a, \end{equation} where $a=\braket{i}{a}$ is the amplitude when there are no others present. Using Eq. (4.24), the amplitude to go the other way—from $(n+1)$ photons to $n$—is \begin{equation} \label{Eq:III:4:26} \braket{n}{n+1}=\sqrt{n+1}\,a\cconj. 
\end{equation} This isn’t the way people usually say it; they don’t like to think of going from $(n+1)$ to $n$, but prefer always to start with $n$ photons present. Then they say that the amplitude to absorb a photon when there are $n$ present—in other words, to go from $n$ to $(n-1)$—is \begin{equation} \label{Eq:III:4:27} \braket{n-1}{n}=\sqrt{n}\,a\cconj, \end{equation} which is, of course, just the same as Eq. (4.26). Then they have trouble trying to remember when to use $\sqrt{n}$ or $\sqrt{n+1}$. Here’s the way to remember: The factor is always the square root of the largest number of photons present, whether it is before or after the reaction. Equations (4.25) and (4.26) show that the law is really symmetric—it only appears unsymmetric if you write it as Eq. (4.27). There are many physical consequences of these new rules; we want to describe one of them having to do with the emission of light. Suppose we imagine a situation in which photons are contained in a box—you can imagine a box with mirrors for walls. Now say that in the box we have $n$ photons, all of the same state—the same frequency, direction, and polarization—so they can’t be distinguished, and that also there is an atom in the box that can emit another photon into the same state. Then the probability that it will emit a photon is \begin{equation} \label{Eq:III:4:28} (n+1)\abs{a}^2, \end{equation} and the probability that it will absorb a photon is \begin{equation} \label{Eq:III:4:29} n\abs{a}^2, \end{equation} where $\abs{a}^2$ is the probability it would emit if no photons were present. We have already discussed these rules in a somewhat different way in Chapter 42 of Vol. I. Equation (4.29) says that the probability that an atom will absorb a photon and make a transition to a higher energy state is proportional to the intensity of the light shining on it. But, as Einstein first pointed out, the rate at which an atom will make a transition downward has two parts. 
There is the probability that it will make a spontaneous transition $\abs{a}^2$, plus the probability of an induced transition $n\abs{a}^2$, which is proportional to the intensity of the light—that is, to the number of photons present. Furthermore, as Einstein said, the coefficients of absorption and of induced emission are equal and are related to the probability of spontaneous emission. What we learn here is that if the light intensity is measured in terms of the number of photons present (instead of as the energy per unit area, and per sec), the coefficients of absorption, of induced emission, and of spontaneous emission are all equal. This is the content of the relation between the Einstein coefficients $A$ and $B$ of Chapter 42, Vol. I, Eq. (42.18). |
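The mnemonic — the factor is the square root of the larger photon number — and the symmetry between emission and absorption can be written out in a few lines. A sketch with an assumed bare amplitude $a$:

```python
import math

a = 0.2 + 0.1j   # amplitude to emit into the state when it is empty (assumed value)

def emit_amp(n):
    """<n+1|n>, Eq. (4.25): emit into a state already holding n photons."""
    return math.sqrt(n + 1) * a

def absorb_amp(n):
    """<n-1|n>, Eq. (4.27): absorb from a state holding n photons."""
    return math.sqrt(n) * a.conjugate()

n = 5
p_emit = abs(emit_amp(n)) ** 2       # (n+1)|a|^2: spontaneous |a|^2 plus induced n|a|^2
p_absorb = abs(absorb_amp(n)) ** 2   # n|a|^2: proportional to the light intensity

# Eq. (4.24) symmetry: n -> n+1 and n+1 -> n carry the same probability,
# so the induced-emission and absorption coefficients come out equal.
print(p_emit, abs(absorb_amp(n + 1)) ** 2)
```
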
|
3 | 5 | Spin One | 1 | Filtering atoms with a Stern-Gerlach apparatus | In this chapter we really begin the quantum mechanics proper—in the sense that we are going to describe a quantum mechanical phenomenon in a completely quantum mechanical way. We will make no apologies and no attempt to find connections to classical mechanics. We want to talk about something new in a new language. The particular situation which we are going to describe is the behavior of the so-called quantization of the angular momentum, for a particle of spin one. But we won’t use words like “angular momentum” or other concepts of classical mechanics until later. We have chosen this particular example because it is relatively simple, although not the simplest possible example. It is sufficiently complicated that it can stand as a prototype which can be generalized for the description of all quantum mechanical phenomena. Thus, although we are dealing with a particular example, all the laws which we mention are immediately generalizable, and we will give the generalizations so that you will see the general characteristics of a quantum mechanical description. We begin with the phenomenon of the splitting of a beam of atoms into three separate beams in a Stern-Gerlach experiment. You remember that if we have an inhomogeneous magnetic field made by a magnet with a pointed pole tip and we send a beam through the apparatus, the beam of particles may be split into a number of beams—the number depending on the particular kind of atom and its state. We are going to take the case of an atom which gives three beams, and we are going to call that a particle of spin one. You can do for yourself the case of five beams, seven beams, two beams, etc.—you just copy everything down and where we have three terms, you will have five terms, seven terms, and so on. Imagine the apparatus drawn schematically in Fig. 5–1. 
A beam of atoms (or particles of any kind) is collimated by some slits and passes through a nonuniform field. Let’s say that the beam moves in the $y$-direction and that the magnetic field and its gradient are both in the $z$-direction. Then, looking from the side, we will see the beam split vertically into three beams, as shown in the figure. Now at the output end of the magnet we could put small counters which count the rate of arrival of particles in any one of the three beams. Or we can block off two of the beams and let the third one go on. Suppose we block off the lower two beams and let the top-most beam go on and enter a second Stern-Gerlach apparatus of the same kind, as shown in Fig. 5–2. What happens? There are not three beams in the second apparatus; there is only the top beam.1 This is what you would expect if you think of the second apparatus as simply an extension of the first. Those atoms which are being pushed upward continue to be pushed upward in the second magnet. You can see then that the first apparatus has produced a beam of “purified” objects—atoms that get bent upward in the particular inhomogeneous field. The atoms, as they enter the original Stern-Gerlach apparatus, are of three “varieties,” and the three kinds take different trajectories. By filtering out all but one of the varieties, we can make a beam whose future behavior in the same kind of apparatus is determined and predictable. We will call this a filtered beam, or a polarized beam, or a beam in which the atoms all are known to be in a definite state. For the rest of our discussion, it will be more convenient if we consider a somewhat modified apparatus of the Stern-Gerlach type. The apparatus looks more complicated at first, but it will make all the arguments simpler. Anyway, since they are only “thought experiments,” it doesn’t cost anything to complicate the equipment. 
(Incidentally, no one has ever done all of the experiments we will describe in just this way, but we know what would happen from the laws of quantum mechanics, which are, of course, based on other similar experiments. These other experiments are harder to understand at the beginning, so we want to describe some idealized—but possible—experiments.) Figure 5–3(a) shows a drawing of the “modified Stern-Gerlach apparatus” we would like to use. It consists of a sequence of three high-gradient magnets. The first one (on the left) is just the usual Stern-Gerlach magnet and splits the incoming beam of spin-one particles into three separate beams. The second magnet has the same cross section as the first, but is twice as long and the polarity of its magnetic field is opposite the field in magnet $1$. The second magnet pushes in the opposite direction on the atomic magnets and bends their paths back toward the axis, as shown in the trajectories drawn in the lower part of the figure. The third magnet is just like the first, and brings the three beams back together again, so that they leave the exit hole along the axis. Finally, we would like to imagine that in front of the hole at $A$ there is some mechanism which can get the atoms started from rest and that after the exit hole at $B$ there is a decelerating mechanism that brings the atoms back to rest at $B$. That is not essential, but it will mean that in our analysis we won’t have to worry about including any effects of the motion as the atoms come out, and can concentrate on those matters having only to do with the spin. The whole purpose of the “improved” apparatus is just to bring all the particles to the same place, and with zero speed. Now if we want to do an experiment like the one in Fig. 5–2, we can first make a filtered beam by putting a plate in the middle of the apparatus that blocks two of the beams, as shown in Fig. 5–4.
If we now put the polarized atoms through a second identical apparatus, all the atoms will take the upper path, as can be verified by putting similar plates in the way of the various beams of the second $S$ filter and seeing whether particles get through. Suppose we call the first apparatus by the name $S$. (We are going to consider all sorts of combinations, and we will need labels to keep things straight.) We will say that the atoms which take the top path in $S$ are in the “plus state with respect to $S$”; the ones which take the middle path are in the “zero state with respect to $S$”; and the ones which take the lowest path are in the “minus state with respect to $S$.” (In the more usual language we would say that the $z$-component of the angular momentum was $+1\hbar$, $0$, and $-1\hbar$, but we are not using that language now.) Now in Fig. 5–4 the second apparatus is oriented just like the first, so the filtered atoms will all go on the upper path. Or if we had blocked off the upper and lower beams in the first apparatus and let only the zero state through, all the filtered atoms would go through the middle path of the second apparatus. And if we had blocked off all but the lowest beam in the first, there would be only a low beam in the second. We can say that in each case our first apparatus has produced a filtered beam in a pure state with respect to $S$ ($+$, $0$, or $-$), and we can test which state is present by putting the atoms through a second, identical apparatus. We can make our second apparatus so that it transmits only atoms of a particular state—by putting masks inside it as we did for the first one—and then we can test the state of the incoming beam just by seeing whether anything comes out the far end. For instance, if we block off the two lower paths in the second apparatus, $100$ percent of the atoms will still come through; but if we block off the upper path, nothing will get through. 
To make this kind of discussion easier, we are going to invent a shorthand symbol to represent one of our improved Stern-Gerlach apparatuses. We will let the symbol \begin{equation} \label{Eq:III:5:1} \SG{S} \end{equation} stand for one complete apparatus. (This is not a symbol you will ever find used in quantum mechanics; we’ve just invented it for this chapter. It is simply meant to be a shorthand picture of the apparatus of Fig. 5–3.) Since we are going to want to use several apparatuses at once, and with various orientations, we will identify each with a letter underneath. So the symbol in (5.1) stands for the apparatus $S$. When we block off one or more of the beams inside, we will show that by some vertical bars indicating which beam is blocked, like this: \begin{equation} \label{Eq:III:5:2} \SGP{S}\!\!\!\cdotp \end{equation} The various possible combinations we will be using are shown in Fig. 5–5. If we have two filters in succession (as in Fig. 5–4), we will put the two symbols next to each other, like this: \begin{equation} \label{Eq:III:5:3} \SGP{S} \!\! \SG{S} \!\!\!\cdotp \end{equation} For this setup, everything that comes through the first also gets through the second. In fact, even if we block off the “zero” and “minus” channels of the second apparatus, so that we have \begin{equation} \label{Eq:III:5:4} \SGP{S} \!\! \SGP{S} \!\!\!, \end{equation} we still get $100$ percent transmission through the second apparatus. On the other hand, if we have \begin{equation} \label{Eq:III:5:5} \SGP{S} \!\! \SGM{S} \!\!\!, \end{equation} nothing at all comes out of the far end. Similarly, \begin{equation} \label{Eq:III:5:6} \SGM{S} \!\! \SGP{S} \end{equation} would give nothing out. On the other hand, \begin{equation} \label{Eq:III:5:7} \begin{array}{cc} \SGM{S} \!\! \SGM{S} \end{array} \end{equation} would be just equivalent to \begin{equation*} \SGM{S} \end{equation*} by itself. Now we want to describe these experiments quantum mechanically. 
We will say that an atom is in the $(+S)$ state if it has gone through the apparatus of Fig. 5–5(b), that it is in a $(\OS)$ state if it has gone through (c), and in a $(-S)$ state if it has gone through (d).2 Then we let $\braket{b}{a}$ be the amplitude that an atom which is in state $a$ will get through an apparatus into the $b$ state. We can say: $\braket{b}{a}$ is the amplitude for an atom in the state $a$ to get into the state $b$. The experiment (5.4) gives us that \begin{equation*} \braket{+S}{+S}=1, \end{equation*} whereas (5.5) gives us \begin{equation*} \braket{-S}{+S}=0. \end{equation*} Similarly, the result of (5.6) is \begin{equation*} \braket{+S}{-S}=0, \end{equation*} and of (5.7) is \begin{equation*} \braket{-S}{-S}=1. \end{equation*} As long as we deal only with “pure” states—that is, we have only one channel open—there are nine such amplitudes, and we can write them in a table: \begin{equation} \label{Eq:III:5:8} \text{to}\, \begin{array}{c} \begin{array}{c} \text{from} \end{array} \\[-.5ex] \begin{array}{c|ccc} &+S & 0S & -S \\ \hline +S&1 & 0 & 0\\ 0S&0 & 1 & 0\\ -S&0 & 0 & 1 \end{array} \end{array} \end{equation} This array of nine numbers—called a matrix—summarizes the phenomena we’ve been describing. |
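The table of Eq. (5.8) can be written down as a tiny sketch; the point is only that identical apparatuses give the Kronecker-delta pattern:

```python
states = ["+S", "0S", "-S"]

# Amplitude <b|a> for a pure beam prepared by one S filter and tested
# by a second, identical S filter: 1 if the states match, 0 otherwise.
def amplitude(b, a):
    return 1 if b == a else 0

matrix = [[amplitude(b, a) for a in states] for b in states]
assert matrix == [[1, 0, 0],
                  [0, 1, 0],
                  [0, 0, 1]]   # the table of Eq. (5.8)
```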
|
3 | 5 | Spin One | 2 | Experiments with filtered atoms | Now comes the big question: What happens if the second apparatus is tipped to a different angle, so that its field axis is no longer parallel to the first? It could be not only tipped, but also pointed in a different direction—for instance, it could take the beam off at $90^\circ$ with respect to the original direction. To take it easy at first, let’s think about an arrangement in which the second Stern-Gerlach experiment is tilted by some angle $\alpha$ about the $y$-axis, as shown in Fig. 5–6. We’ll call the second apparatus $T$. Suppose that we now set up the following experiment: \begin{equation*} \SGP{S} \!\! \SGP{T} \!\!\!\!, \end{equation*} or the experiment: \begin{equation*} \SGP{S} \!\! \SGZ{T} \!\!\!\!\cdotp \end{equation*} What comes out at the far end in these cases? The answer is this: If the atoms are in a definite state with respect to $S$, they are not in the same state with respect to $T$—a $(+S)$ state is not also a $(+T)$ state. There is, however, a certain amplitude to find the atom in a $(+T)$ state—or a $(\OT)$ state or a $(-T)$ state. In other words, as careful as we have been to make sure that we have the atoms in a definite condition, the fact of the matter is that if an atom goes through an apparatus which is tilted at a different angle it has, so to speak, to “reorient” itself—which it does, don’t forget, by luck. We can put only one particle through at a time, and then we can only ask the question: What is the probability that it gets through? Some of the atoms that have gone through $S$ will end in a $(+T)$ state, some of them will end in a $(\OT)$, and some in a $(-T)$ state—all with different odds. These odds can be calculated by the absolute squares of complex amplitudes; what we want is some mathematical method, or quantum mechanical description, for these amplitudes.
What we need to know are various quantities like \begin{equation*} \braket{-T}{+S}, \end{equation*} by which we mean the amplitude that an atom initially in the $(+S)$ state can get into the $(-T)$ condition (which is not zero unless $T$ and $S$ are lined up parallel to each other). There are other amplitudes like \begin{equation*} \braket{+T}{\OS},\quad \text{or}\quad \braket{\OT}{-S},\quad \text{etc}. \end{equation*} There are, in fact, nine such amplitudes—another matrix—that a theory of particles should tell us how to calculate. Just as $F=ma$ tells us how to calculate what happens to a classical particle in any circumstance, the laws of quantum mechanics permit us to determine the amplitude that a particle will get through a particular apparatus. The central problem, then, is to be able to calculate—for any given tilt angle $\alpha$, or in fact for any orientation whatever—the nine amplitudes: \begin{equation} \begin{alignedat}{3} \braket{+T}{+S}&,&\quad \braket{+T}{\OS}&,&\quad \braket{+T}{-S}&,\\ \braket{\OT}{+S}&,&\quad \braket{\OT}{\OS}&,&\quad \braket{\OT}{-S}&,\\ \braket{-T}{+S}&,&\quad \braket{-T}{\OS}&,&\quad \braket{-T}{-S}&. \end{alignedat} \label{Eq:III:5:9} \end{equation} We can already figure out some relations among these amplitudes. First, according to our definitions, the absolute square \begin{equation*} \abs{\braket{+T}{+S}}^2 \end{equation*} is the probability that an atom in a $(+S)$ state will enter a $(+T)$ state. We will often find it more convenient to write such squares in the equivalent form \begin{equation*} \braket{+T}{+S}\braket{+T}{+S}\cconj. \end{equation*} In the same notation the number \begin{equation*} \braket{\OT}{+S}\braket{\OT}{+S}\cconj \end{equation*} is the probability that a particle in the $(+S)$ state will enter the $(\OT)$ state, and \begin{equation*} \braket{-T}{+S}\braket{-T}{+S}\cconj \end{equation*} is the probability that it will enter the $(-T)$ state. 
But the way our apparatuses are made, every atom which enters the $T$ apparatus must be found in some one of the three states of the $T$ apparatus—there’s nowhere else for a given kind of atom to go. So the sum of the three probabilities we’ve just written must be equal to $100$ percent. We have the relation \begin{equation} \begin{alignedat}{3} &\braket{+T}{+S}&\braket{+T}{+S}&\cconj\;&+&\\[.2ex] &\braket{\OT}{+S}&\,\braket{\OT}{+S}&\cconj&+&\\[.2ex] &\braket{-T}{+S}&\braket{-T}{+S}&\cconj&=&\;1. \end{alignedat} \label{Eq:III:5:10} \end{equation} There are, of course, two other such equations that we get if we start with a $(\OS)$ or a $(-S)$ state. But they are all we can easily get, so we’ll go on to some other general questions. |
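The nine amplitudes of (5.9) are not computed until later; still, Eq. (5.10) can be checked against the standard spin-one rotation matrix for a tilt $\alpha$ about the $y$-axis. Treat the explicit matrix below as an assumption borrowed from later results, with an arbitrary angle:

```python
import math

def d_spin1(alpha):
    """Assumed spin-one rotation matrix for a tilt alpha about the y-axis;
    entry d[j][i] plays the role of <jT|iS>, rows/columns ordered (+, 0, -)."""
    c, s = math.cos(alpha), math.sin(alpha)
    r2 = math.sqrt(2)
    return [[(1 + c) / 2, -s / r2, (1 - c) / 2],
            [s / r2,       c,      -s / r2    ],
            [(1 - c) / 2,  s / r2, (1 + c) / 2]]

alpha = 0.7
d = d_spin1(alpha)

# Eq. (5.10): whatever S state the atom starts in, it must come out of T
# in one of the three T states, so each column of probabilities sums to 1.
for i in range(3):
    total = sum(d[j][i] ** 2 for j in range(3))
    assert math.isclose(total, 1.0)
```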
|
3 | 5 | Spin One | 3 | Stern-Gerlach filters in series | Here is an interesting question: Suppose we had atoms filtered into the $(+S)$ state, then we put them through a second filter, say into a $(\OT)$ state, and then through another $+S$ filter. (We’ll call the last filter $S'$ just so we can distinguish it from the first $S$-filter.) Do the atoms remember that they were once in a $(+S)$ state? In other words, we have the following experiment: \begin{equation} \label{Eq:III:5:11} \SGP{S} \!\! \SGZ{T} \!\! \SGP{S'}\!\!\!\!\cdotp \end{equation} We want to know whether all those that get through $T$ also get through $S'$. They do not. Once they have been filtered by $T$, they do not remember in any way that they were in a $(+S)$ state when they entered $T$. Note that the second $S$ apparatus in (5.11) is oriented exactly the same as the first, so it is still an $S$-type filter. The states filtered by $S'$ are, of course, still $(+S)$, $(\OS)$, and $(-S)$. The important point is this: If the $T$ filter passes only one beam, the fraction that gets through the second $S$ filter depends only on the setup of the $T$ filter, and is completely independent of what precedes it. The fact that the same atoms were once sorted by an $S$ filter has no influence whatever on what they will do once they have been sorted again into a pure beam by a $T$ apparatus. From then on, the probability for getting into different states is the same no matter what happened before they got into the $T$ apparatus. As an example, let’s compare the experiment of (5.11) with the following experiment: \begin{equation} \label{Eq:III:5:12} \SGZ{S} \!\! \SGZ{T} \!\! \SGP{S'} \end{equation} in which only the first $S$ is changed. Let’s say that the angle $\alpha$ (between $S$ and $T$) is such that in experiment (5.11) one-third of the atoms that get through $T$ also get through $S'$. 
In experiment (5.12), although there will, in general, be a different number of atoms coming through $T$, the same fraction of these—one-third—will also get through $S'$. We can, in fact, show from what you have learned earlier that the fraction of the atoms that come out of $T$ and get through any particular $S'$ depends only on $T$ and $S'$, not on anything that happened earlier. Let’s compare experiment (5.12) with \begin{equation} \label{Eq:III:5:13} \SGZ{S} \!\! \SGZ{T} \!\! \SGZ{S'}\!\!\!\!. \end{equation} The amplitude that an atom that comes out of $S$ will also get through both $T$ and $S'$ is, for the experiments of (5.12), \begin{equation*} \braket{+S}{\OT}\braket{\OT}{\OS}. \end{equation*} The corresponding probability is \begin{equation*} \abs{\braket{+S}{\OT}\braket{\OT}{\OS}}^2 =\abs{\braket{+S}{\OT}}^2\abs{\braket{\OT}{\OS}}^2. \end{equation*} The probability for experiment (5.13) is \begin{equation*} \abs{\braket{\OS}{\OT}\braket{\OT}{\OS}}^2 =\abs{\braket{\OS}{\OT}}^2\abs{\braket{\OT}{\OS}}^2. \end{equation*} The ratio is \begin{equation*} \frac{\abs{\braket{\OS}{\OT}}^2} {\abs{\braket{+S}{\OT}}^2} \end{equation*} and depends only on $T$ and $S'$, and not at all on which beam $(+S)$, $(\OS)$, or $(-S)$ is selected by $S$. (The absolute numbers may go up and down together depending on how much gets through $T$.) We would, of course, find the same result if we compared the probabilities that the atoms would go into the plus or the minus states with respect to $S'$, or the ratio of the probabilities to go into the zero or minus states. In fact, since these ratios depend only on which beam is allowed to pass through $T$, and not on the selection made by the first $S$ filter, it is clear that we would get the same result even if the last apparatus were not an $S$ filter.
If we use for the third apparatus—which we will now call $R$—one rotated by some arbitrary angle with respect to $T$, we would find that a ratio such as $\abs{\braket{\OR}{\OT}}^2/\abs{\braket{+R}{\OT}}^2$ was independent of which beam was passed by the first filter $S$. |
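The independence claimed here is easy to see in numbers: once the $\OT$ mask has done its sorting, every downstream count is the count leaving $T$ times a factor fixed by $T$ and $R$ alone. A sketch, taking the standard spin-one rotation matrix as an assumed form, with arbitrary angles:

```python
import math

def d_spin1(alpha):
    """Assumed spin-one rotation matrix (rows/columns ordered +, 0, -)."""
    c, s = math.cos(alpha), math.sin(alpha)
    r2 = math.sqrt(2)
    return [[(1 + c) / 2, -s / r2, (1 - c) / 2],
            [s / r2,       c,      -s / r2    ],
            [(1 - c) / 2,  s / r2, (1 + c) / 2]]

PLUS, ZERO, MINUS = 0, 1, 2
dTS = d_spin1(0.6)   # <jT|iS>, tilt of T relative to S (arbitrary)
dRT = d_spin1(1.1)   # <jR|iT>, tilt of R relative to T (arbitrary)

# Ratio of (0R) to (+R) counts for atoms leaving the 0T channel:
ratio = dRT[ZERO][ZERO] ** 2 / dRT[PLUS][ZERO] ** 2

for first_beam in (PLUS, ZERO, MINUS):       # whatever S lets through
    n_T = dTS[ZERO][first_beam] ** 2         # this does depend on S...
    n_0R = n_T * dRT[ZERO][ZERO] ** 2
    n_plusR = n_T * dRT[PLUS][ZERO] ** 2
    assert math.isclose(n_0R / n_plusR, ratio)   # ...but the ratio does not
```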
|
3 | 5 | Spin One | 4 | Base states | These results illustrate one of the basic principles of quantum mechanics: Any atomic system can be separated by a filtering process into a certain set of what we will call base states, and the future behavior of the atoms in any single given base state depends only on the nature of the base state—it is independent of any previous history.3 The base states depend, of course, on the filter used; for instance, the three states $(+T)$, $(\OT)$, and $(-T)$ are one set of base states; the three states $(+S)$, $(\OS)$, and $(-S)$ are another. There are any number of possibilities each as good as any other. We should be careful to say that we are considering good filters which do indeed produce “pure” beams. If, for instance, our Stern-Gerlach apparatus didn’t produce a good separation of the three beams so that we could not separate them cleanly by our masks, then we could not make a complete separation into base states. We can tell if we have pure base states by seeing whether or not the beams can be split again in another filter of the same kind. If we have a pure $(+T)$ state, for instance, all the atoms will go through \begin{equation*} \SGP{T}\!\!\!\!, \end{equation*} and none will go through \begin{equation*} \SGZ{T}\!\!\!\!, \end{equation*} or through \begin{equation*} \SGM{T}\!\!\!\!\cdotp \end{equation*} Our statement about base states means that it is possible to filter to some pure state, so that no further filtering by an identical apparatus is possible. We must also point out that what we are saying is exactly true only in rather idealized situations. In any real Stern-Gerlach apparatus, we would have to worry about diffraction by the slits that could cause some atoms to go into states corresponding to different angles, or about whether the beams might contain atoms with different excitations of their internal states, and so on. 
We have idealized the situation so that we are talking only about the states that are split in a magnetic field; we are ignoring things having to do with position, momentum, internal excitations, and the like. In general, one would need to consider also base states which are sorted out with respect to such things. But to keep the concepts simple, we are considering only our set of three states, which is sufficient for the exact treatment of the idealized situation in which the atoms don’t get torn up in going through the apparatus, or otherwise badly treated, and come to rest when they leave the apparatus. You will note that we always begin our thought experiments by taking a filter with only one channel open, so that we start with some definite base state. We do this because atoms come out of a furnace in various states determined at random by the accidental happenings inside the furnace. (It gives what is called an “unpolarized” beam.) This randomness involves probabilities of the “classical” kind—as in coin tossing—which are different from the quantum mechanical probabilities we are worrying about now. Dealing with an unpolarized beam would get us into additional complications that are better to avoid until after we understand the behavior of polarized beams. So don’t try to consider at this point what happens if the first apparatus lets more than one beam through. (We will tell you how you can handle such cases at the end of the chapter.) Let’s now go back and see what happens when we go from a base state for one filter to a base state for a different filter. Suppose we start again with \begin{equation*} \SGP{S} \!\! \SGZ{T}\!\!\!\!\cdotp \end{equation*} The atoms which come out of $T$ are in the base state $(\OT)$ and have no memory that they were once in the state $(+S)$.
Some people would say that in the filtering by $T$ we have “lost the information” about the previous state $(+S)$ because we have “disturbed” the atoms when we separated them into three beams in the apparatus $T$. But that is not true. The past information is not lost by the separation into three beams, but by the blocking masks that are put in—as we can see by the following set of experiments. We start with a $+S$ filter and will call $N$ the number of atoms that come through it. If we follow this by a $\OT$ filter, the number of atoms that come out is some fraction of the original number, say $\alpha N$. If we then put another $+S$ filter, only some fraction $\beta$ of these atoms will get to the far end. We can indicate this in the following way: \begin{equation} \label{Eq:III:5:14} \SGP{S} \kern{-12pt} \raise{7pt} {\overset{N}{\longrightarrow}}\! \SGZ{T} \kern{-12pt} \raise{7pt} {\overset{\alpha N}{\longrightarrow}}\! \SGP{S'} \kern{-12pt} \raise{7pt} {\overset{\beta \alpha N}{\longrightarrow .}} \end{equation} If our third apparatus $S'$ selected a different state, say the $(\OS)$ state, a different fraction, say $\gamma$, would get through.4 We would have \begin{equation} \label{Eq:III:5:15} \SGP{S} \kern{-12pt} \raise{7pt} {\overset{N}{\longrightarrow}}\! \SGZ{T} \kern{-12pt} \raise{7pt} {\overset{\alpha N}{\longrightarrow}}\! \SGZ{S'} \kern{-12pt} \raise{7pt} {\overset{\gamma \alpha N}{\longrightarrow .}} \end{equation} Now suppose we repeat these two experiments but remove all the masks from $T$. We would then find the following remarkable results: \begin{equation} \label{Eq:III:5:16} \SGP{S} \kern{-12pt} \raise{7pt} {\overset{N}{\longrightarrow}}\! \SG{T} \kern{-12pt} \raise{7pt} {\overset{N}{\longrightarrow}}\! \SGP{S'} \kern{-12pt} \raise{7pt} {\overset{N}{\longrightarrow ,}} \end{equation} \begin{equation} \label{Eq:III:5:17} \SGP{S} \kern{-12pt} \raise{7pt} {\overset{N}{\longrightarrow}}\!
\SG{T} \kern{-12pt} \raise{7pt} {\overset{N}{\longrightarrow}}\! \SGZ{S'} \kern{-12pt} \raise{7pt} {\overset{0}{\longrightarrow .}} \end{equation} All the atoms get through $S'$ in the first case, but none in the second case! This is one of the great laws of quantum mechanics. That nature works this way is not self-evident, but the results we have given correspond for our idealized situation to the quantum mechanical behavior observed in innumerable experiments. |
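The zero-or-everything outcome of (5.16) and (5.17) can be checked numerically. The sketch below borrows the standard spin-one rotation matrix for a tilt about the $y$-axis, which the text has not yet derived, so treat the explicit matrix as an assumption; the angle is arbitrary:

```python
import math

def d_spin1(alpha):
    """Assumed spin-one rotation matrix for a tilt alpha about the y-axis.
    Entry d[j][i] plays the role of <jT|iS>; rows/columns ordered (+, 0, -)."""
    c, s = math.cos(alpha), math.sin(alpha)
    r2 = math.sqrt(2)
    return [[(1 + c) / 2, -s / r2, (1 - c) / 2],
            [s / r2,       c,      -s / r2    ],
            [(1 - c) / 2,  s / r2, (1 + c) / 2]]

alpha = 0.8                      # arbitrary tilt of T relative to S
d = d_spin1(alpha)
PLUS, ZERO = 0, 1

# With all three T beams open we sum amplitudes, not probabilities.
# The amplitudes here are real, so <iS|jT> = <jT|iS> = d[j][i].
amp_plus = sum(d[j][PLUS] * d[j][PLUS] for j in range(3))  # +S -> T -> +S'
amp_zero = sum(d[j][ZERO] * d[j][PLUS] for j in range(3))  # +S -> T -> 0S'

assert math.isclose(amp_plus, 1.0)                 # all N atoms get through
assert math.isclose(amp_zero, 0.0, abs_tol=1e-12)  # none get through
```

The two sums are just the normalization and orthogonality of the columns of a rotation matrix, which is why the interference comes out exactly $1$ and exactly $0$.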
|
3 | 5 | Spin One | 5 | Interfering amplitudes | How can it be that in going from (5.15) to (5.17)—by opening more channels—we let fewer atoms through? This is the old, deep mystery of quantum mechanics—the interference of amplitudes. It’s the same kind of thing we first saw in the two-slit interference experiment with electrons. We saw that we could get fewer electrons at some places with both slits open than we got with one slit open. It works quantitatively this way. We can write the amplitude that an atom will get through $T$ and $S'$ in the apparatus of (5.17) as the sum of three amplitudes, one for each of the three beams in $T$; the sum is equal to zero: \begin{equation} \begin{alignedat}{3} &\braket{\OS}{+T}&\braket{+T}{+S}&\;&+&\\[.2ex] &\braket{\OS}{\OT}&\,\braket{\OT}{+S}&&+&\\[.2ex] &\braket{\OS}{-T}&\braket{-T}{+S}&&=&\;0. \end{alignedat} \label{Eq:III:5:18} \end{equation} None of the three individual amplitudes is zero—for example, the absolute square of the second amplitude is $\gamma\alpha$, see (5.15)—but the sum is zero. We would have also the same answer if $S'$ were set to select the $(-S)$ state. However, in the setup of (5.16), the answer is different. If we call $a$ the amplitude to get through $T$ and $S'$, in this case we have5 \begin{equation} \begin{alignedat}{3} a\;=\;&\braket{+S}{+T}&\braket{+T}{+S}&\;&+&\\[.2ex] &\braket{+S}{\OT}&\,\braket{\OT}{+S}&&+&\\[.2ex] &\braket{+S}{-T}&\braket{-T}{+S}&&=&\;1. \end{alignedat} \label{Eq:III:5:19} \end{equation} In the experiment (5.16) the beam has been split and recombined. Humpty Dumpty has been put back together again. The information about the original $(+S)$ state is retained—it is just as though the $T$ apparatus were not there at all. This is true whatever is put after the “wide-open” $T$ apparatus. We could follow it with an $R$ filter—a filter at some odd angle—or anything we want. The answer will always be the same as if the atoms were taken directly from the first $S$ filter. 
So this is the important principle: A $T$ filter—or any filter—with wide-open masks produces no change at all. We should make one additional condition. The wide-open filter must not only transmit all three beams, but it must also not produce unequal disturbances on the three beams. For instance, it should not have a strong electric field near one beam and not the others. The reason is that even if this extra disturbance would still let all the atoms through the filter, it could change the phases of some of the amplitudes. Then the interference would be changed, and the amplitudes in Eqs. (5.18) and (5.19) would be different. We will always assume that there are no such extra disturbances. Let’s rewrite Eqs. (5.18) and (5.19) in an improved notation. We will let $i$ stand for any one of the three states $(+T)$, $(\OT)$, or $(-T)$; then the equations can be written: \begin{equation} \label{Eq:III:5:20} \sum_{\text{all $i$}}\braket{\OS}{i}\braket{i}{+S}=0 \end{equation} and \begin{equation} \label{Eq:III:5:21} \sum_{\text{all $i$}}\braket{+S}{i}\braket{i}{+S}=1. \end{equation} Similarly, for an experiment where $S'$ is replaced by a completely arbitrary filter $R$, we have \begin{equation} \label{Eq:III:5:22} \SGP{S} \!\! \SG{T} \!\! \SGP{R} \!\!\!\!\cdotp \end{equation} The results will always be the same as if the $T$ apparatus were left out and we had only \begin{equation*} \SGP{S} \!\! \SGP{R} \!\!\!\!\cdotp \end{equation*} Or, expressed mathematically, \begin{equation} \label{Eq:III:5:23} \sum_{\text{all $i$}}\braket{+R}{i}\braket{i}{+S}= \braket{+R}{+S}. \end{equation} This is our fundamental law, and it is generally true so long as $i$ stands for the three base states of any filter. You will notice that in the experiment (5.22) there is no special relation of $S$ and $R$ to $T$. Furthermore, the arguments would be the same no matter what states they selected. 
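Equation (5.23) can be checked numerically for apparatuses tilted about a common $y$-axis. Again the explicit spin-one rotation matrix is an assumption at this point in the text, and the two tilt angles are arbitrary:

```python
import math

def d_spin1(alpha):
    """Assumed spin-one rotation matrix (rows/columns ordered +, 0, -)."""
    c, s = math.cos(alpha), math.sin(alpha)
    r2 = math.sqrt(2)
    return [[(1 + c) / 2, -s / r2, (1 - c) / 2],
            [s / r2,       c,      -s / r2    ],
            [(1 - c) / 2,  s / r2, (1 + c) / 2]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a_TS, a_RT = 0.5, 0.9   # tilt of T relative to S, and of R relative to T

# sum over i of <jR|iT><iT|kS> is a matrix product ...
summed = matmul(d_spin1(a_RT), d_spin1(a_TS))
# ... while <jR|kS> comes from a single rotation by the total angle.
direct = d_spin1(a_RT + a_TS)

# Eq. (5.23): the wide-open T apparatus changes nothing, for every
# choice of entrance and exit states.
for j in range(3):
    for k in range(3):
        assert math.isclose(summed[j][k], direct[j][k], abs_tol=1e-12)
```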
To write the equation in a general way, without having to refer to the specific states selected by $S$ and $R$, let’s call $\phi$ (“phi”) the state prepared by the first filter (in our special example, $+S$) and $\chi$ (“khi”) the state tested by the final filter (in our example, $+R$). Then we can state our fundamental law of Eq. (5.23) in the form \begin{equation} \label{Eq:III:5:24} \braket{\chi}{\phi}= \sum_{\text{all $i$}}\braket{\chi}{i}\braket{i}{\phi}, \end{equation} where $i$ is to range over the three base states of some particular filter. We want to emphasize again what we mean by base states. They are like the three states which can be selected by one of our Stern-Gerlach apparatuses. One condition is that if you have a base state, then the future is independent of the past. Another condition is that if you have a complete set of base states, Eq. (5.24) is true for any set of beginning and ending states $\phi$ and $\chi$. There is, however, no unique set of base states. We began by considering base states with respect to a particular apparatus $T$. We could equally well consider a different set of base states with respect to an apparatus $S$, or with respect to $R$, etc.6 We usually speak of the base states “in a certain representation.” Another condition on a set of base states in any particular representation is that they are all completely different. By that we mean that if we have a $(+T)$ state, there is no amplitude for it to go into a $(\OT)$ or a $(-T)$ state. If we let $i$ and $j$ stand for any two base states of a particular set, the general rules discussed in connection with (5.8) are that \begin{equation*} \braket{j}{i}=0 \end{equation*} for all $i$ and $j$ that are not equal. Of course, we know that \begin{equation*} \braket{i}{i}=1. 
\end{equation*} These two equations are usually written as \begin{equation} \label{Eq:III:5:25} \braket{j}{i}=\delta_{ji}, \end{equation} where $\delta_{ji}$ (the “Kronecker delta”) is a symbol that is defined to be zero for $i\neq j$, and to be one for $i=j$. Equation (5.25) is not independent of the other laws we have mentioned. It happens that we are not particularly interested in the mathematical problem of finding the minimum set of independent axioms that will give all the laws as consequences.7 We are satisfied if we have a set that is complete and not apparently inconsistent. We can, however, show that Eqs. (5.25) and (5.24) are not independent. Suppose we let $\phi$ in Eq. (5.24) represent one of the base states of the same set as $i$, say the $j$th state; then we have \begin{equation*} \braket{\chi}{j}= \sum_i\braket{\chi}{i}\braket{i}{j}. \end{equation*} But Eq. (5.25) says that $\braket{i}{j}$ is zero unless $i=j$, so the sum becomes just $\braket{\chi}{j}$ and we have an identity, which shows that the two laws are not independent. We can see that there must be another relation among the amplitudes if both Eqs. (5.10) and (5.24) are true. Equation (5.10) is \begin{alignat*}{3} &\braket{+T}{+S}&\braket{+T}{+S}&\cconj\;&+&\\[.2ex] &\braket{\OT}{+S}&\,\braket{\OT}{+S}&\cconj&+&\\[.2ex] &\braket{-T}{+S}&\braket{-T}{+S}&\cconj&=&\;1. \end{alignat*} If we write Eq. (5.24), letting both $\phi$ and $\chi$ be the state $(+S)$, the left-hand side is $\braket{+S}{+S}$, which is clearly${}=1$; so we get once more Eq. (5.19), \begin{alignat*}{3} &\braket{+S}{+T}&\braket{+T}{+S}&\;&+&\\[.2ex] &\braket{+S}{\OT}&\,\braket{\OT}{+S}&&+&\\[.2ex] &\braket{+S}{-T}&\braket{-T}{+S}&&=&\;1. \end{alignat*} These two equations are consistent (for all relative orientations of the $T$ and $S$ apparatuses) only if \begin{align*} \braket{+S}{+T}&=\braket{+T}{+S}\cconj,\\[1ex] \braket{+S}{\OT}&=\braket{\OT}{+S}\cconj,\\[1ex] \braket{+S}{-T}&=\braket{-T}{+S}\cconj. 
\end{align*} And it follows that for any states $\phi$ and $\chi$, \begin{equation} \label{Eq:III:5:26} \braket{\phi}{\chi}=\braket{\chi}{\phi}\cconj. \end{equation} If this were not true, probability wouldn’t be “conserved,” and particles would get “lost.” Before going on, we want to summarize the three important general laws about amplitudes. They are Eqs. (5.24), (5.25), and (5.26): \begin{equation} \begin{alignedat}{2} &\text{$\phantom{II}$I}&\quad \braket{j}{i}&=\delta_{ji},\\[1ex] &\text{$\phantom{I}$II}&\quad \braket{\chi}{\phi}&= \sum_{\text{all $i$}} \braket{\chi}{i}\braket{i}{\phi},\\[1ex] &\text{III}&\quad \braket{\phi}{\chi}&= \braket{\chi}{\phi}\cconj. \end{alignedat} \label{Eq:III:5:27} \end{equation} In these equations the $i$ and $j$ refer to all the base states of some one representation, while $\phi$ and $\chi$ represent any possible states of the atom. It is important to note that II is valid only if the sum is carried out over all the base states of the system (in our case, three: $+T$, $\OT$, $-T$). These laws say nothing about what we should choose for a base for our set of base states. We began by using a $T$ apparatus, which is a Stern-Gerlach experiment with some arbitrary orientation; but any other orientation, say $W$, would be just as good. We would have a different set of states to use for $i$ and $j$, but all the laws would still be good—there is no unique set. One of the great games of quantum mechanics is to make use of the fact that things can be calculated in more than one way. |
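The three laws are easy to check numerically for any orthonormal base whatever. The following sketch (in Python with numpy, which is of course not part of the text) builds a random three-state base and verifies I, II, and III; all the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A set of base states |i> in a three-state space: the columns of any
# unitary matrix form an orthonormal base (law I, <j|i> = delta_ji).
basis, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# Two arbitrary states phi and chi (normalization is not needed here).
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
chi = rng.normal(size=3) + 1j * rng.normal(size=3)

def bracket(a, b):
    # <a|b>; np.vdot conjugates its first argument.
    return np.vdot(a, b)

# Law I: <j|i> = delta_ji for the base states.
for i in range(3):
    for j in range(3):
        assert np.isclose(bracket(basis[:, j], basis[:, i]), float(i == j))

# Law II: <chi|phi> = sum over ALL base states i of <chi|i><i|phi>.
lhs = bracket(chi, phi)
rhs = sum(bracket(chi, basis[:, i]) * bracket(basis[:, i], phi) for i in range(3))
assert np.isclose(lhs, rhs)

# Law III: <phi|chi> = <chi|phi>*.
assert np.isclose(bracket(phi, chi), np.conj(bracket(chi, phi)))
```

The columns of a unitary matrix serve here as a stand-in for the three states selected by a filter; any other orthonormal set would do, which is just the statement that there is no unique set of base states.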
6 Spin One-Half

6-1 Transforming amplitudes

In the last chapter, using a system of spin one as an example, we outlined the general principles of quantum mechanics: Any state $\psi$ can be described in terms of a set of base states by giving the amplitudes to be in each of the base states. The amplitude to go from any state to another can, in general, be written as a sum of products, each product being the amplitude to go into one of the base states times the amplitude to go from that base state to the final condition, with the sum including a term for each base state: \begin{equation} \label{Eq:III:6:1} \braket{\chi}{\psi}=\sum_i\braket{\chi}{i}\braket{i}{\psi}. \end{equation} The base states are orthogonal—the amplitude to be in one if you are in the other is zero: \begin{equation} \label{Eq:III:6:2} \braket{i}{j}=\delta_{ij}. \end{equation} The amplitude to get from one state to another directly is the complex conjugate of the reverse: \begin{equation} \label{Eq:III:6:3} \braket{\chi}{\psi}\cconj=\braket{\psi}{\chi}. \end{equation} We also discussed a little bit about the fact that there can be more than one base for the states and that we can use Eq. (6.1) to convert from one base to another. Suppose, for example, that we have the amplitudes $\braket{iS}{\psi}$ to find the state $\psi$ in every one of the base states $i$ of a base system $S$, but that we then decide that we would prefer to describe the state in terms of another set of base states, say the states $j$ belonging to the base $T$. In the general formula, Eq. (6.1), we could substitute $jT$ for $\chi$ and obtain this formula: \begin{equation} \label{Eq:III:6:4} \braket{jT}{\psi}=\sum_i\braket{jT}{iS}\braket{iS}{\psi}. \end{equation} The amplitudes for the state $(\psi)$ to be in the base states $(jT)$ are related to the amplitudes to be in the base states $(iS)$ by the set of coefficients $\braket{jT}{iS}$. If there are $N$ base states, there are $N^2$ such coefficients.
Such a set of coefficients is often called the “transformation matrix to go from the $S$-representation to the $T$-representation.” This looks rather formidable mathematically, but with a little renaming we can see that it is really not so bad. If we call $C_i$ the amplitude that the state $\psi$ is in the base state $iS$—that is, $C_i=\braket{iS}{\psi}$—and call $C_j'$ the corresponding amplitudes for the base system $T$—that is, $C_j'=\braket{jT}{\psi}$, then Eq. (6.4) can be written as \begin{equation} \label{Eq:III:6:5} C_j'=\sum_iR_{ji}C_i, \end{equation} where $R_{ji}$ means the same thing as $\braket{jT}{iS}$. Each amplitude $C_j'$ is equal to a sum over all $i$ of one of the coefficients $R_{ji}$ times each amplitude $C_i$. It has the same form as the transformation of a vector from one coordinate system to another. In order to avoid being too abstract for too long, we have given you some examples of these coefficients for the spin-one case, so you can see how to use them in practice. On the other hand, there is a very beautiful thing in quantum mechanics—that from the sheer fact that there are three states and from the symmetry properties of space under rotations, these coefficients can be found purely by abstract reasoning. Showing you such arguments at this early stage has a disadvantage in that you are immersed in another set of abstractions before we get “down to earth.” However, the thing is so beautiful that we are going to do it anyway. We will show you in this chapter how the transformation coefficients can be derived for spin one-half particles. We pick this case, rather than spin one, because it is somewhat easier. Our problem is to determine the coefficients $R_{ji}$ for a particle—an atomic system—which is split into two beams in a Stern-Gerlach apparatus. We are going to derive all the coefficients for the transformation from one representation to another by pure reasoning—plus a few assumptions. 
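As a concrete illustration of Eq. (6.5), here is a small numerical sketch in Python (numpy); the amplitudes and the matrix are made up for the example, the matrix being just some real unitary matrix. The point is only that the sum over $i$ is an ordinary matrix-times-vector product, exactly like transforming a vector to a rotated coordinate system.

```python
import numpy as np

# Hypothetical amplitudes C_i = <iS|psi> in the S-representation (N = 2).
C = np.array([0.6, 0.8j])

# A transformation matrix with entries R_ji = <jT|iS>: N^2 = 4 coefficients.
# This particular real unitary matrix is chosen only for illustration.
theta = np.pi / 3
R = np.array([[np.cos(theta / 2),  np.sin(theta / 2)],
              [-np.sin(theta / 2), np.cos(theta / 2)]])

# Eq. (6.5) written out as the explicit sum: C'_j = sum_i R_ji C_i ...
C_prime = np.array([sum(R[j, i] * C[i] for i in range(2)) for j in range(2)])

# ... which is exactly a matrix-vector product.
assert np.allclose(C_prime, R @ C)
```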
Some assumptions are always necessary in order to use “pure” reasoning! Although the arguments will be abstract and somewhat involved, the result we get will be relatively simple to state and easy to understand—and the result is the most important thing. You may, if you wish, consider this as a sort of cultural excursion. We have, in fact, arranged that all the essential results derived here are also derived in some other way when they are needed in later chapters. So you need have no fear of losing the thread of our study of quantum mechanics if you omit this chapter entirely, or study it at some later time. The excursion is “cultural” in the sense that it is intended to show that the principles of quantum mechanics are not only interesting, but are so deep that by adding only a few extra hypotheses about the structure of space, we can deduce a great many properties of physical systems. Also, it is important that we know where the different consequences of quantum mechanics come from, because so long as our laws of physics are incomplete—as we know they are—it is interesting to find out whether the places where our theories fail to agree with experiment are where our logic is the best or where our logic is the worst. Until now, it appears that where our logic is the most abstract it always gives correct results—it agrees with experiment. Only when we try to make specific models of the internal machinery of the fundamental particles and their interactions are we unable to find a theory that agrees with experiment. The theory then that we are about to describe agrees with experiment wherever it has been tested—for the strange particles as well as for electrons, protons, and so on. One remark on an annoying, but interesting, point before we proceed: It is not possible to determine the coefficients $R_{ji}$ uniquely, because there is always some arbitrariness in the probability amplitudes.
If you have a set of amplitudes of any kind, say the amplitudes to arrive at some place by a whole lot of different routes, and if you multiply every single amplitude by the same phase factor—say by $e^{i\delta}$—you have another set that is just as good. So, it is always possible to make an arbitrary change in phase of all the amplitudes in any given problem if you want to. Suppose you calculate some probability by writing a sum of several amplitudes, say $(A+B+C+\dotsb)$ and taking the absolute square. Then somebody else calculates the same thing by using the sum of the amplitudes $(A'+B'+C'+\dotsb)$ and taking the absolute square. If all the $A'$, $B'$, $C'$, etc., are equal to the $A$, $B$, $C$, etc., except for a factor $e^{i\delta}$, all probabilities obtained by taking the absolute squares will be exactly the same, since $(A'+B'+C'+\dotsb)$ is then equal to $e^{i\delta}(A+B+C+\dotsb)$. Or suppose, for instance, that we were computing something with Eq. (6.1), but then we suddenly change all of the phases of a certain base system. Every one of the amplitudes $\braket{i}{\psi}$ would be multiplied by the same factor $e^{i\delta}$. Similarly, the amplitudes $\braket{i}{\chi}$ would also be changed by $e^{i\delta}$, but the amplitudes $\braket{\chi}{i}$ are the complex conjugates of the amplitudes $\braket{i}{\chi}$; therefore, the former gets changed by the factor $e^{-i\delta}$. The plus and minus $i\delta$’s in the exponents cancel out, and we would have the same expression we had before. So it is a general rule that if we change all the amplitudes with respect to a given base system by the same phase—or even if we just change all the amplitudes in any problem by the same phase—it makes no difference. There is, therefore, some freedom to choose the phases in our transformation matrix. Every now and then we will make such an arbitrary choice—usually following the conventions that are in general use. |
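The irrelevance of a common phase is also easy to exhibit numerically. In the sketch below (Python with numpy; the three route amplitudes and the phase $\delta$ are invented numbers), multiplying every amplitude by $e^{i\delta}$ leaves the probability untouched.

```python
import numpy as np

# Amplitudes A, B, C for three alternative routes (illustrative numbers).
A, B, C = 0.3 + 0.4j, -0.1 + 0.2j, 0.5 - 0.1j
delta = 0.7  # an arbitrary common phase

# Probability from the original amplitudes ...
P1 = abs(A + B + C) ** 2

# ... and from the amplitudes all multiplied by the same factor e^{i delta}.
f = np.exp(1j * delta)
P2 = abs(f * A + f * B + f * C) ** 2

# |e^{i delta}(A + B + C)|^2 = |A + B + C|^2: the common phase drops out.
assert np.isclose(P1, P2)
```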
6-2 Transforming to a rotated coordinate system

We consider again the “improved” Stern-Gerlach apparatus described in the last chapter. A beam of spin one-half particles, entering at the left, would, in general, be split into two beams, as shown schematically in Fig. 6-1. (There were three beams for spin one.) As before, the beams are put back together again unless one or the other of them is blocked off by a “stop” which intercepts the beam at its half-way point. In the figure we show an arrow which points in the direction of the increase of the magnitude of the field—say toward the magnet pole with the sharp edges. This arrow we take to represent the “up” axis of any particular apparatus. It is fixed relative to the apparatus and will allow us to indicate the relative orientations when we use several apparatuses together. We also assume that the direction of the magnetic field in each magnet is always the same with respect to the arrow. We will say that those atoms which go in the “upper” beam are in the $(+)$ state with respect to that apparatus and that those in the “lower” beam are in the $(-)$ state. (There is no “zero” state for spin one-half particles.) Now suppose we put two of our modified Stern-Gerlach apparatuses in sequence, as shown in Fig. 6-2(a). The first one, which we call $S$, can be used to prepare a pure $(+S)$ or a pure $(-S)$ state by blocking one beam or the other. [As shown it prepares a pure $(+S)$ state.] For each condition, there is some amplitude for a particle that comes out of $S$ to be in either the $(+T)$ or the $(-T)$ beam of the second apparatus. There are, in fact, just four amplitudes: the amplitude to go from $(+S)$ to $(+T)$, from $(+S)$ to $(-T)$, from $(-S)$ to $(+T)$, from $(-S)$ to $(-T)$. These amplitudes are just the four coefficients of the transformation matrix $R_{ji}$ to go from the $S$-representation to the $T$-representation.
We can consider that the first apparatus “prepares” a particular state in one representation and that the second apparatus “analyzes” that state in terms of the second representation. The kind of question we want to answer, then, is this: If an atom has been prepared in a given condition—say the $(+S)$ state—by blocking one of the beams in the apparatus $S$, what is the chance that it will get through the second apparatus $T$ if this is set for, say, the $(-T)$ state? The result will depend, of course, on the angles between the two systems $S$ and $T$. We should explain why it is that we could have any hope of finding the coefficients $R_{ji}$ by deduction. You know that it is almost impossible to believe that if a particle has its spin lined up in the $+z$-direction, there is some chance of finding the same particle with its spin pointing in the $+x$-direction—or in any other direction at all. In fact, it is almost impossible, but not quite. It is so nearly impossible that there is only one way it can be done, and that is the reason we can find out what that unique way is. The first kind of argument we can make is this. Suppose we have a setup like the one in Fig. 6-2(a), in which we have the two apparatuses $S$ and $T$, with $T$ cocked at the angle $\alpha$ with respect to $S$, and we let only the $(+)$ beam through $S$ and the $(-)$ beam through $T$. We would observe a certain number for the probability that the particles coming out of $S$ get through $T$. Now suppose we make another measurement with the apparatus of Fig. 6-2(b). The relative orientation of $S$ and $T$ is the same, but the whole system sits at a different angle in space. We want to assume that both of these experiments give the same number for the chance that a particle in a pure state with respect to $S$ will get into some particular state with respect to $T$.
We are assuming, in other words, that the result of any experiment of this type is the same—that the physics is the same—no matter how the whole apparatus is oriented in space. (You say, “That’s obvious.” But it is an assumption, and it is “right” only if it is actually what happens.) That means that the coefficients $R_{ji}$ depend only on the relation in space of $S$ and $T$, and not on the absolute situation of $S$ and $T$. To say this in another way, $R_{ji}$ depends only on the rotation which carries $S$ to $T$, for evidently what is the same in Fig. 6-2(a) and Fig. 6-2(b) is the three-dimensional rotation which would carry apparatus $S$ into the orientation of apparatus $T$. When the transformation matrix $R_{ji}$ depends only on a rotation, as it does here, it is called a rotation matrix. For our next step we will need one more piece of information. Suppose we add a third apparatus which we can call $U$, which follows $T$ at some arbitrary angle, as in Fig. 6-3(a). (It’s beginning to look horrible, but that’s the fun of abstract thinking—you can make the most weird experiments just by drawing lines!) Now what is the $S\to T\to U$ transformation? What we really want to ask for is the amplitude to go from some state with respect to $S$ to some other state with respect to $U$, when we know the transformation from $S$ to $T$ and from $T$ to $U$. We are then asking about an experiment in which both channels of $T$ are open. We can get the answer by applying Eq. (6.5) twice in succession. For going from the $S$-representation to the $T$-representation, we have \begin{equation} \label{Eq:III:6:6} C_j'=\sum_iR_{ji}^{TS}C_i, \end{equation} where we put the superscripts $TS$ on the $R$, so that we can distinguish it from the coefficients $R^{UT}$ we will have for going from $T$ to $U$. Calling $C_k''$ the amplitudes to be in the base states of the $U$-representation, we can relate them to the $T$-amplitudes by using Eq.
(6.5) once more; we get \begin{equation} \label{Eq:III:6:7} C_k''=\sum_jR_{kj}^{UT}C_j'. \end{equation} Now we can combine Eqs. (6.6) and (6.7) to get the transformation to $U$ directly from $S$. Substituting $C_j'$ from Eq. (6.6) in Eq. (6.7), we have \begin{equation} \label{Eq:III:6:8} C_k''=\sum_jR_{kj}^{UT}\sum_iR_{ji}^{TS}C_i. \end{equation} Or, since $i$ does not appear in $R_{kj}^{UT}$, we can put the $i$-summation also in front, and write \begin{equation} \label{Eq:III:6:9} C_k''=\sum_i\sum_jR_{kj}^{UT}R_{ji}^{TS}C_i. \end{equation} This is the formula for a double transformation. Notice, however, that so long as all the beams in $T$ are unblocked, the state coming out of $T$ is the same as the one that went in. We could just as well have made a transformation from the $S$-representation directly to the $U$-representation. It should be the same as putting the $U$ apparatus right after $S$, as in Fig. 6-3(b). In that case, we would have written \begin{equation} \label{Eq:III:6:10} C_k''=\sum_iR_{ki}^{US}C_i, \end{equation} with the coefficients $R_{ki}^{US}$ belonging to this transformation. Now, clearly, Eqs. (6.9) and (6.10) should give the same amplitudes $C_k''$, and this should be true no matter what the original state $\phi$ was which gave us the amplitudes $C_i$. So it must be that \begin{equation} \label{Eq:III:6:11} R_{ki}^{US}=\sum_jR_{kj}^{UT}R_{ji}^{TS}. \end{equation} In other words, for any rotation $S\to U$ of a reference base, which is viewed as a compounding of two successive rotations $S\to T$ and $T\to U$, the rotation matrix $R_{ki}^{US}$ can be obtained from the matrices of the two partial rotations by Eq. (6.11). If you wish, you can find Eq. (6.11) directly from Eq. (6.1), for it is only a different notation for $\braket{kU}{iS}=\sum_j\braket{kU}{jT}\braket{jT}{iS}$. To be thorough, we should add the following parenthetical remarks. They are not terribly important, however, so you can skip to the next section if you want. 
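Equation (6.11) says that compounding two rotations multiplies their matrices. As a numerical sketch (Python with numpy), we can use rotations about the $z$-axis, whose matrix is found later in this chapter in Eq. (6.19); any other family of rotation matrices would serve equally well, and the angles here are invented.

```python
import numpy as np

def Rz(phi):
    # Spin one-half rotation about z (Eq. 6.19, derived later in the chapter):
    # C'_+ = e^{+i phi/2} C_+,  C'_- = e^{-i phi/2} C_-.
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

R_TS = Rz(0.4)  # rotation carrying S to T
R_UT = Rz(1.1)  # rotation carrying T to U
R_US = Rz(1.5)  # rotation carrying S to U directly: same total angle

# Eq. (6.11): R^{US}_{ki} = sum_j R^{UT}_{kj} R^{TS}_{ji},
# which is just the matrix product R^{UT} R^{TS}.
assert np.allclose(R_US, R_UT @ R_TS)
```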
What we have said is not quite right. We cannot really say that Eq. (6.9) and Eq. (6.10) must give exactly the same amplitudes. Only the physics should be the same; all the amplitudes could be different by some common phase factor like $e^{i\delta}$ without changing the result of any calculation about the real world. So, instead of Eq. (6.11), all we can say, really, is that \begin{equation} \label{Eq:III:6:12} e^{i\delta}R_{ki}^{US}=\sum_jR_{kj}^{UT}R_{ji}^{TS}, \end{equation} where $\delta$ is some real constant. What this extra factor of $e^{i\delta}$ means, of course, is that the amplitudes we get if we use the matrix $R^{US}$ might all differ by the same phase $(e^{-i\delta})$ from the amplitude we would get using the two rotations $R^{UT}$ and $R^{TS}$. We know that it doesn’t matter if all amplitudes are changed by the same phase, so we could just ignore this phase factor if we wanted to. It turns out, however, that if we define all of our rotation matrices in a particular way, this extra phase factor will never appear—the $\delta$ in Eq. (6.12) will always be zero. Although it is not important for the rest of our arguments, we can give a quick proof by using a mathematical theorem about determinants. [If you don’t yet know much about determinants, don’t worry about the proof and just skip to the definition of Eq. (6.15).] First, we should say that Eq. (6.11) is the mathematical definition of a “product” of two matrices. (It is just convenient to be able to say: “$R^{US}$ is the product of $R^{UT}$ and $R^{TS}$.” ) Second, there is a theorem of mathematics—which you can easily prove for the two-by-two matrices we have here—which says that the determinant of a “product” of two matrices is the product of their determinants. Applying this theorem to Eq. (6.12), we get \begin{equation} \label{Eq:III:6:13} e^{i2\delta}\,(\Det R^{US})=(\Det R^{UT})\cdot(\Det R^{TS}). \end{equation} (We leave off the subscripts, because they don’t tell us anything useful.) 
Yes, the $2\delta$ is right. Remember that we are dealing with two-by-two matrices; every term in the matrix $R_{ki}^{US}$ is multiplied by $e^{i\delta}$, so each product in the determinant—which has two factors—gets multiplied by $e^{i2\delta}$. Now let’s take the square root of Eq. (6.13) and divide it into Eq. (6.12); we get \begin{equation} \label{Eq:III:6:14} \frac{R_{ki}^{US}}{\sqrt{\Det R^{US}}}=\sum_j \frac{R_{kj}^{UT}}{\sqrt{\Det R^{UT}}}\, \frac{R_{ji}^{TS}}{\sqrt{\Det R^{TS}}}. \end{equation} The extra phase factor has disappeared. Now it turns out that if we want all of our amplitudes in any given representation to be normalized (which means, you remember, that $\sum_i\braket{\phi}{i}\braket{i}{\phi}=1$), the rotation matrices will all have determinants that are pure imaginary exponentials, like $e^{i\alpha}$. (We won’t prove it; you will see that it always comes out that way.) So we can, if we wish, choose to make all our rotation matrices $R$ have a unique phase by making $\Det R=1$. It is done like this. Suppose we find a rotation matrix $R$ in some arbitrary way. We make it a rule to “convert” it to “standard form” by defining \begin{equation} \label{Eq:III:6:15} R_{\text{standard}}=\frac{R}{\sqrt{\Det R}}. \end{equation} We can do this because we are just multiplying each term of $R$ by the same phase factor, to get the phases we want. In what follows, we will always assume that our matrices have been put in the “standard form” ; then we can use Eq. (6.11) without having any extra phase factors. |
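The “standard form” recipe of Eq. (6.15) can be tried out numerically. In the sketch below (Python with numpy) we take the $180^\circ$-about-$y$ matrix of Eq. (6.27), spoil it with an arbitrary overall phase (the $0.3$ is invented), and convert it back; the division by $\sqrt{\Det R}$ removes the stray phase, up to the sign ambiguity of the square root.

```python
import numpy as np

# A rotation matrix multiplied by a stray overall phase: its determinant
# is then a pure phase e^{i alpha}, not 1.
R = np.exp(0.3j) * np.array([[0, 1], [-1, 0]])  # 180 deg about y, times e^{i 0.3}
det = np.linalg.det(R)
assert np.isclose(abs(det), 1.0)  # R is unitary, so |Det R| = 1

# Eq. (6.15): divide by sqrt(Det R) to put R in "standard form" ...
R_std = R / np.sqrt(det)

# ... which fixes the overall phase so that Det R = 1.
assert np.isclose(np.linalg.det(R_std), 1.0)
```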
6-3 Rotations about the $\boldsymbol{z}$-axis

We are now ready to find the transformation matrix $R_{ji}$ between two different representations. With our rule for compounding rotations and our assumption that space has no preferred direction, we have the keys we need for finding the matrix of any arbitrary rotation. There is only one solution. We begin with the transformation which corresponds to a rotation about the $z$-axis. Suppose we have two apparatuses $S$ and $T$ placed in series along a straight line with their axes parallel and pointing out of the page, as shown in Fig. 6-4(a). We take our “$z$-axis” in this direction. Surely, if the beam goes “up” (toward $+z$) in the $S$ apparatus, it will do the same in the $T$ apparatus. Similarly, if it goes down in $S$, it will go down in $T$. Suppose, however, that the $T$ apparatus were placed at some other angle, but still with its axis parallel to the axis of $S$, as in Fig. 6-4(b). Intuitively, you would say that a $(+)$ beam in $S$ would still go with a $(+)$ beam in $T$, because the fields and field gradients are still in the same physical direction. And that would be quite right. Also, a $(-)$ beam in $S$ would still go into a $(-)$ beam in $T$. The same result would apply for any orientation of $T$ in the $xy$-plane of $S$. What does this tell us about the relation between $C_+'=\braket{+T}{\psi}$, $C_-'=\braket{-T}{\psi}$ and $C_+=\braket{+S}{\psi}$, $C_-=\braket{-S}{\psi}$? You might conclude that any rotation about the $z$-axis of the “frame of reference” for base states leaves the amplitudes to be “up” and “down” the same as before. We could write $C_+'=C_+$ and $C_-'=C_-$—but that is wrong. All we can conclude is that for such rotations the probabilities to be in the “up” beam are the same for the $S$ and $T$ apparatuses. That is, \begin{equation*} \abs{C_+'}=\abs{C_+}\quad \text{and}\quad \abs{C_-'}=\abs{C_-}.
\end{equation*} We cannot rule out, however, that the phases of the amplitudes referred to the $T$ apparatus are different for the two different orientations in (a) and (b) of Fig. 6-4. The two apparatuses in (a) and (b) of Fig. 6-4 are, in fact, different, as we can see in the following way. Suppose that we put an apparatus in front of $S$ which produces a pure $(+x)$ state. (The $x$-axis points toward the bottom of the figure.) Such particles would be split into $(+z)$ and $(-z)$ beams in $S$, but the two beams would be recombined to give a $(+x)$ state again at $P_1$—the exit of $S$. The same thing happens again in $T$. If we follow $T$ by a third apparatus $U$, whose axis is in the $(+x)$ direction, as shown in Fig. 6-5(a), all the particles would go into the $(+)$ beam of $U$. Now imagine what happens if $T$ and $U$ are swung around together by $90^\circ$ to the positions shown in Fig. 6-5(b). Again, the $T$ apparatus puts out just what it takes in, so the particles that enter $U$ are in a $(+x)$ state with respect to $S$. But $U$ now analyzes for the $(+y)$ state with respect to $S$, which is different. (By symmetry, we would now expect only one-half of the particles to get through.) What could have changed? The apparatuses $T$ and $U$ are still in the same physical relationship to each other. Can the physics be changed just because $T$ and $U$ are in a different orientation? Our original assumption is that it should not. It must be that the amplitudes with respect to $T$ are different in the two cases shown in Fig. 6-5—and, therefore, also in Fig. 6-4. There must be some way for a particle to know that it has turned the corner at $P_1$. How could it tell? Well, all we have decided is that the magnitudes of $C_+'$ and $C_+$ are the same in the two cases, but they could—in fact, must—have different phases.
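The symmetry claim that only one-half of the particles should get through $U$ in Fig. 6-5(b) can be checked with the standard spin one-half states along $x$ and $y$ written in the $(+z,-z)$ base; these explicit column vectors are quoted here as an assumption, consistent with the conventions the chapter goes on to derive. A sketch in Python (numpy):

```python
import numpy as np

# Standard spin one-half states in the (+z, -z) base (assumed, not derived here):
plus_x = np.array([1, 1]) / np.sqrt(2)    # the (+x) state
plus_y = np.array([1, 1j]) / np.sqrt(2)   # the (+y) state

# Amplitude <+y|+x> for a particle prepared in (+x) to be found in (+y);
# np.vdot conjugates its first argument.
amp = np.vdot(plus_y, plus_x)

# The probability is one-half, as the symmetry argument says.
assert np.isclose(abs(amp) ** 2, 0.5)
```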
We conclude that $C_+'$ and $C_+$ must be related by \begin{equation*} C_+'=e^{i\lambda}C_+, \end{equation*} and that $C_-'$ and $C_-$ must be related by \begin{equation*} C_-'=e^{i\mu}C_-, \end{equation*} where $\lambda$ and $\mu$ are real numbers which must be related in some way to the angle between $S$ and $T$. The only thing we can say at the moment about $\lambda$ and $\mu$ is that they must not be equal [except for the special case shown in Fig. 6-5(a), when $T$ is in the same orientation as $S$]. We have seen that equal phase changes in all amplitudes have no physical consequence. For the same reason, we can always add the same arbitrary amount to both $\lambda$ and $\mu$ without changing anything. So we are permitted to choose to make $\lambda$ and $\mu$ equal to plus and minus the same number. That is, we can always take \begin{equation*} \lambda'=\lambda-\frac{(\lambda+\mu)}{2},\quad \mu'=\mu-\frac{(\lambda+\mu)}{2}. \end{equation*} Then \begin{equation*} \lambda'=\frac{\lambda}{2}-\frac{\mu}{2}=-\mu'. \end{equation*} So we adopt the convention2 that $\mu=-\lambda$. We have then the general rule that for a rotation of the reference apparatus by some angle about the $z$-axis, the transformation is \begin{equation} \label{Eq:III:6:16} C_+'=e^{+i\lambda}C_+,\quad C_-'=e^{-i\lambda}C_-. \end{equation} The absolute values are the same, only the phases are different. These phase factors are responsible for the different results in the two experiments of Fig. 6-5. Now we would like to know the law that relates $\lambda$ to the angle between $S$ and $T$. We already know the answer for one case. If the angle is zero, $\lambda$ is zero. Now we will assume that the phase shift $\lambda$ is a continuous function of the angle $\phi$ between $S$ and $T$ (see Fig. 6-4), going to zero as $\phi$ goes to zero—as only seems reasonable.
In other words, if we rotate $T$ from the straight line through $S$ by the small angle $\epsilon$, the $\lambda$ is also a small quantity, say $m\epsilon$, where $m$ is some number. We write it this way because we can show that $\lambda$ must be proportional to $\epsilon$. Suppose we were to put after $T$ another apparatus $T'$ which makes the angle $\epsilon$ with $T$, and, therefore, the angle $2\epsilon$ with $S$. Then, with respect to $T$, we have \begin{equation*} C_+'=e^{i\lambda}C_+, \end{equation*} and with respect to $T'$, we have \begin{equation*} C_+''=e^{i\lambda}C_+'=e^{i2\lambda}C_+. \end{equation*} But we know that we should get the same result if we put $T'$ right after $S$. Thus, when the angle is doubled, the phase is doubled. We can evidently extend the argument and build up any rotation at all by a sequence of infinitesimal rotations. We conclude that for any angle $\phi$, $\lambda$ is proportional to the angle. We can, therefore, write $\lambda=m\phi$. The general result we get, then, is that for $T$ rotated about the $z$-axis by the angle $\phi$ with respect to $S$ \begin{equation} \label{Eq:III:6:17} C_+'=e^{im\phi}C_+,\quad C_-'=e^{-im\phi}C_-. \end{equation} For the angle $\phi$, and for all rotations we speak of in the future, we adopt the standard convention that a positive rotation is a right-handed rotation about the positive direction of the reference axis. A positive $\phi$ has the sense of rotation of a right-handed screw advancing in the positive $z$-direction. Now we have to find what $m$ must be. First, we might try this argument: Suppose $T$ is rotated by $360^\circ$; then, clearly, it is right back at zero degrees, and we should have $C_+'=C_+$ and $C_-'=C_-$, or, what is the same thing, $e^{im2\pi}=1$. We get $m=1$. This argument is wrong! To see that it is, consider that $T$ is rotated by $180^\circ$. If $m$ were equal to $1$, we would have $C_+'=$ $e^{i\pi}C_+=$ $-C_+$ and $C_-'=$ $e^{-i\pi}C_-=$ $-C_-$. 
However, this is just the original state all over again. Both amplitudes are just multiplied by $-1$ which gives back the original physical system. (It is again a case of a common phase change.) This means that if the angle between $T$ and $S$ in Fig. 6-5(b) is increased to $180^\circ$, the system (with respect to $T$) would be indistinguishable from the zero-degree situation, and the particles would again go through the $(+)$ state of the $U$ apparatus. At $180^\circ$, though, the $(+)$ state of the $U$ apparatus is the $(-x)$ state of the original $S$ apparatus. So a $(+x)$ state would become a $(-x)$ state. But we have done nothing to change the original state; the answer is wrong. We cannot have $m=1$. We must have the situation that a rotation by $360^\circ$ and no smaller angle reproduces the same physical state. This will happen if $m=\tfrac{1}{2}$. Then, and only then, will the first angle that reproduces the same physical state be $\phi=360^\circ$.3 It gives \begin{equation} \left. \begin{aligned} C_+'&=-C_+\\ \\ C_-'&=-C_- \end{aligned} \right\} \text{$360^\circ$ about $z$-axis}. \label{Eq:III:6:18} \end{equation} It is very curious to say that if you turn the apparatus $360^\circ$ you get new amplitudes. They aren’t really new, though, because the common change of sign doesn’t give any different physics. If someone else had decided to change all the signs of the amplitudes because he thought he had turned $360^\circ$, that’s all right; he gets the same physics.4 So our final answer is that if we know the amplitudes $C_+$ and $C_-$ for spin one-half particles with respect to a reference frame $S$, and we then use a base system referred to $T$ which is obtained from $S$ by a rotation of $\phi$ around the $z$-axis, the new amplitudes are given in terms of the old by \begin{equation} \left. \begin{aligned} C_+'=e^{i\phi/2}C_+&\\ \\ C_-'=e^{-i\phi/2}C_-& \end{aligned} \right\} \text{$\phi$ about $z$}. \label{Eq:III:6:19} \end{equation} |
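Equation (6.19) is simple enough to tabulate. The sketch below (Python with numpy) checks the statements just made: a $360^\circ$ rotation changes the sign of both amplitudes, only $720^\circ$ restores them, and doubling a small angle doubles the phase.

```python
import numpy as np

def Rz(phi):
    # Eq. (6.19): C'_+ = e^{+i phi/2} C_+,  C'_- = e^{-i phi/2} C_-.
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

# A full turn of 360 deg multiplies both amplitudes by -1 (Eq. 6.18) ...
assert np.allclose(Rz(2 * np.pi), -np.eye(2))

# ... which is only a common phase change, so the physics is the same;
# the amplitudes themselves come back only after 720 deg.
assert np.allclose(Rz(4 * np.pi), np.eye(2))

# Doubling the angle doubles the phase: Rz(eps) applied twice is Rz(2 eps).
eps = 0.01
assert np.allclose(Rz(eps) @ Rz(eps), Rz(2 * eps))
```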
6-4 Rotations of 180$^{\boldsymbol{\circ}}$ and 90$^{\boldsymbol{\circ}}$ about $\boldsymbol{y}$

Next, we will try to guess the transformation for a rotation of $T$ with respect to $S$ of $180^\circ$ around an axis perpendicular to the $z$-axis—say, about the $y$-axis. (We have defined the coordinate axes in Fig. 6-1.) In other words, we start with two identical Stern-Gerlach equipments, with the second one, $T$, turned “upside down” with respect to the first one, $S$, as in Fig. 6-6. Now if we think of our particles as little magnetic dipoles, a particle that is in the $(+S)$ state—so that it goes on the “upper” path in the first apparatus—will also take the “upper” path in the second, so that it will be in the minus state with respect to $T$. (In the inverted $T$ apparatus, both the gradients and the field direction are reversed; for a particle with its magnetic moment in a given direction, the force is unchanged.) Anyway, what is “up” with respect to $S$ will be “down” with respect to $T$. For these relative positions of $S$ and $T$, then, we know that the transformation must give \begin{equation*} \abs{C_+'}=\abs{C_-},\quad \abs{C_-'}=\abs{C_+}. \end{equation*} As before, we cannot rule out some additional phase factors; we could have (for $180^\circ$ about the $y$-axis) \begin{equation} \label{Eq:III:6:20} C_+'=e^{i\beta}C_-\quad \text{and}\quad C_-'=e^{i\gamma}C_+, \end{equation} where $\beta$ and $\gamma$ are still to be determined. What about a rotation of $360^\circ$ about the $y$-axis? Well, we already know the answer for a rotation of $360^\circ$ about the $z$-axis—the amplitude to be in any state changes sign. A rotation of $360^\circ$ around any axis always brings us back to the original position. It must be that for any $360^\circ$ rotation, the result is the same as a $360^\circ$ rotation about the $z$-axis—all amplitudes simply change sign. Now suppose we imagine two successive rotations of $180^\circ$ about $y$—using Eq. (6.20)—we should get the result of Eq. (6.18).
In other words, \begin{equation*} C_+''=e^{i\beta}C_-'=e^{i\beta}e^{i\gamma}C_+=-C_+ \end{equation*} and \begin{equation} \label{Eq:III:6:21} C_-''=e^{i\gamma}C_+'=e^{i\gamma}e^{i\beta}C_-=-C_-. \end{equation} This means that \begin{equation*} e^{i\beta}e^{i\gamma}=-1\quad \text{or}\quad e^{i\gamma}=-e^{-i\beta}. \end{equation*} So the transformation for a rotation of $180^\circ$ about the $y$-axis can be written \begin{equation} \label{Eq:III:6:22} C_+'=e^{i\beta}C_-,\quad C_-'=-e^{-i\beta}C_+. \end{equation} The arguments we have just used would apply equally well to a rotation of $180^\circ$ about any axis in the $xy$-plane, although different axes can, of course, give different numbers for $\beta$. However, that is the only way they can differ. Now there is a certain amount of arbitrariness in the number $\beta$, but once it is specified for one axis of rotation in the $xy$-plane it is determined for any other axis. It is conventional to choose to set $\beta=0$ for a $180^\circ$ rotation about the $y$-axis. To show that we have this choice, suppose we imagine that $\beta$ was not equal to zero for a rotation about the $y$-axis; then we can show that there is some other axis in the $xy$-plane, for which the corresponding phase factor will be zero. Let’s find the phase factor $\beta_A$ for an axis $A$ that makes the angle $\alpha$ with the $y$-axis, as shown in Fig. 6-7(a). (For clarity, the figure is drawn with $\alpha$ equal to a negative number, but that doesn’t matter.) Now if we take a $T$ apparatus which is initially lined up with the $S$ apparatus and is then rotated $180^\circ$ about the axis $A$, its axes—which we will call $x''$, $y''$, and $z''$—will be as shown in Fig. 6-7(a). The amplitudes with respect to $T$ will then be \begin{equation} \label{Eq:III:6:23} C_+''=e^{i\beta_A}C_-,\quad C_-''=-e^{-i\beta_A}C_+. \end{equation} We can now think of getting to the same orientation by the two successive rotations shown in (b) and (c) of the figure. 
First, we imagine an apparatus $U$ which is rotated with respect to $S$ by $180^\circ$ about the $y$-axis. The axes $x'$, $y'$, and $z'$ of $U$ will be as shown in Fig. 6-7(b), and the amplitudes with respect to $U$ are given by (6.22). Now notice that we can go from $U$ to $T$ by a rotation about the “$z$-axis” of $U$, namely about $z'$, as shown in Fig. 6-7(c). From the figure you can see that the angle required is two times the angle $\alpha$ but in the opposite direction (with respect to $z'$). Using the transformation of (6.19) with $\phi=-2\alpha$, we get \begin{equation} \label{Eq:III:6:24} C_+''=e^{-i\alpha}C_+',\quad C_-''=e^{+i\alpha}C_-'. \end{equation} Combining Eqs. (6.24) and (6.22), we get that \begin{equation} \label{Eq:III:6:25} C_+''=e^{i(\beta-\alpha)}C_-,\quad C_-''=-e^{-i(\beta-\alpha)}C_+. \end{equation} These amplitudes must, of course, be the same as we got in (6.23). So $\beta_A$ must be related to $\alpha$ and $\beta$ by \begin{equation} \label{Eq:III:6:26} \beta_A=\beta-\alpha. \end{equation} This means that if the angle $\alpha$ between the $A$-axis and the $y$-axis (of $S$) is equal to $\beta$, the transformation for a rotation of $180^\circ$ about $A$ will have $\beta_A=0$. Now so long as some axis perpendicular to the $z$-axis is going to have $\beta=0$, we may as well take it to be the $y$-axis. It is purely a matter of convention, and we adopt the one in general use. Our result: For a rotation of $180^\circ$ about the $y$-axis, we have \begin{equation} \left. \begin{aligned} C_+'&=C_-\\ \\ C_-'&=-C_+ \end{aligned} \right\} \text{$180^\circ$ about $y$}. \label{Eq:III:6:27} \end{equation} While we are thinking about the $y$-axis, let’s next ask for the transformation matrix for a rotation of $90^\circ$ about $y$. We can find it because we know that two successive $90^\circ$ rotations about the same axis must equal one $180^\circ$ rotation. 
We start by writing the transformation for $90^\circ$ in the most general form: \begin{equation} \label{Eq:III:6:28} C_+'=aC_++bC_-,\quad C_-'=cC_++dC_-. \end{equation} A second rotation of $90^\circ$ about the same axis would have the same coefficients: \begin{equation} \label{Eq:III:6:29} C_+''=aC_+'+bC_-',\quad C_-''=cC_+'+dC_-'. \end{equation} Combining Eqs. (6.28) and (6.29), we have \begin{equation} \begin{alignedat}{2} C_+''&=a(aC_++bC_-)&+\,b(cC_++dC_-),\\[1ex] C_-''&=c(aC_++bC_-)&+\,d(cC_++dC_-). \end{alignedat} \label{Eq:III:6:30} \end{equation} However, from (6.27) we know that \begin{equation*} C_+''=C_-,\quad C_-''=-C_+, \end{equation*} so that we must have that \begin{equation} \begin{aligned} ab+bd&=1,\\ a^2+bc&=0,\\ ac+cd&=-1,\\ bc+d^2&=0. \end{aligned} \label{Eq:III:6:31} \end{equation} These four equations are enough to determine all our unknowns: $a$, $b$, $c$, and $d$. It is not hard to do. Look at the second and fourth equations. Deduce that $a^2=d^2$, which means that $a=d$ or else that $a=-d$. But $a=-d$ is out, because then the first equation wouldn’t be right. So $d=a$. Using this, we have immediately that $b=1/2a$ and that $c=-1/2a$. Now we have everything in terms of $a$. Putting, say, the second equation all in terms of $a$, we have \begin{equation*} a^2-\frac{1}{4a^2}=0\quad \text{or}\quad a^4=\frac{1}{4}. \end{equation*} This equation has four different solutions, but only two of them give the standard value for the determinant. We might as well take $a=1/\sqrt{2}$; then5 \begin{alignat*}{2} a&=1/\sqrt{2},&\quad b&=1/\sqrt{2},\\\\ c&=-1/\sqrt{2},&\quad d&=1/\sqrt{2}. \end{alignat*} In other words, for two apparatuses $S$ and $T$, with $T$ rotated with respect to $S$ by $90^\circ$ about the $y$-axis, the transformation is \begin{equation} \left. \begin{aligned} C_+'&=\frac{1}{\sqrt{2}}\,(C_++C_-)\\ \\ C_-'&=\frac{1}{\sqrt{2}}\,(-C_++C_-) \end{aligned} \right\} \text{$90^\circ$ about $y$}. 
\label{Eq:III:6:32} \end{equation} We can, of course, solve these equations for $C_+$ and $C_-$, which will give us the transformation for a rotation of minus $90^\circ$ about $y$. Changing the primes around, we would conclude that \begin{equation} \left. \begin{aligned} C_+'&=\frac{1}{\sqrt{2}}\,(C_+-C_-)\\ \\ C_-'&=\frac{1}{\sqrt{2}}\,(C_++C_-) \end{aligned} \right\} \text{$-90^\circ$ about $y$}. \label{Eq:III:6:33} \end{equation} |
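The algebra above is easy to check by machine. Here is a small numerical sketch (in Python; the helper and matrix names are ours, not from the text) verifying that two successive $90^\circ$ rotations about $y$, Eq. (6.32), reproduce the $180^\circ$ result of Eq. (6.27):

```python
import math

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

r = 1 / math.sqrt(2)
# Eq. (6.32): C+' = (C+ + C-)/sqrt(2),  C-' = (-C+ + C-)/sqrt(2)
Ry90 = ((r, r),
        (-r, r))

# Two successive 90-degree rotations about the same axis:
Ry180 = matmul(Ry90, Ry90)

# Eq. (6.27): C+' = C-,  C-' = -C+
expected = ((0.0, 1.0),
            (-1.0, 0.0))
assert all(abs(Ry180[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The matrix here acts on the amplitude pair $(C_+,C_-)$ taken as a column; squaring it gives exactly the sign-flipping exchange of Eq. (6.27).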
|
3 | 6 | Spin One-Half | 5 | Rotations about $\boldsymbol{x}$ | You may be thinking: “This is getting ridiculous. What are they going to do next, $47^\circ$ around $y$, then $33^\circ$ about $x$, and so on, forever?” No, we are almost finished. With just two of the transformations we have—$90^\circ$ about $y$, and an arbitrary angle about $z$ (which we did first if you remember)—we can generate any rotation at all. As an illustration, suppose that we want the angle $\alpha$ around $x$. We know how to deal with the angle $\alpha$ around $z$, but now we want it around $x$. How do we get it? First, we turn the axis $z$ down onto $x$—which is a rotation of $+90^\circ$ about $y$, as shown in Fig. 6-8. Then we turn through the angle $\alpha$ around $z'$. Then we rotate $-90^\circ$ about $y''$. The net result of the three rotations is the same as turning around $x$ by the angle $\alpha$. It is a property of space. (These facts of the combinations of rotations, and what they produce, are hard to grasp intuitively. It is rather strange, because we live in three dimensions, but it is hard for us to appreciate what happens if we turn this way and then that way. Perhaps, if we were fish or birds and had a real appreciation of what happens when we turn somersaults in space, we could more easily appreciate such things.) Anyway, let’s work out the transformation for a rotation by $\alpha$ around the $x$-axis by using what we know. From the first rotation by $+90^\circ$ around $y$ the amplitudes go according to Eq. (6.32). Calling the rotated axes $x'$, $y'$, and $z'$, the next rotation by the angle $\alpha$ around $z'$ takes us to a frame $x''$, $y''$, $z''$, for which \begin{equation*} C_+''=e^{i\alpha/2}C_+',\quad C_-''=e^{-i\alpha/2}C_-'. \end{equation*} The last rotation of $-90^\circ$ about $y''$ takes us to $x'''$, $y'''$, $z'''$; by (6.33), \begin{equation*} C_+'''=\frac{1}{\sqrt{2}}\,(C_+''-C_-''),\quad C_-'''=\frac{1}{\sqrt{2}}\,(C_+''+C_-''). 
\end{equation*} Combining these last two transformations, we get \begin{align*} C_+'''&=\frac{1}{\sqrt{2}}\, (e^{+i\alpha/2}C_+'-e^{-i\alpha/2}C_-'),\\ \\ C_-'''&=\frac{1}{\sqrt{2}}\, (e^{+i\alpha/2}C_+'+e^{-i\alpha/2}C_-'). \end{align*} Using Eqs. (6.32) for $C_+'$ and $C_-'$, we get the complete transformation: \begin{align*} C_+'''&=\tfrac{1}{2} \{e^{+i\alpha/2}(C_++C_-)-e^{-i\alpha/2}(-C_++C_-)\},\\ \\ C_-'''&=\tfrac{1}{2} \{e^{+i\alpha/2}(C_++C_-)+e^{-i\alpha/2}(-C_++C_-)\}. \end{align*} We can put these formulas in a simpler form by remembering that \begin{equation*} e^{i\theta}+e^{-i\theta}=2\cos\theta,\quad\text{and}\quad e^{i\theta}-e^{-i\theta}=2i\sin\theta. \end{equation*} We get \begin{equation} \left. \begin{alignedat}{3} C_+'''&=\phantom{i}\biggl(\!&\cos\frac{\alpha}{2}&\biggr)C_+\!+ i\biggl(\!&\sin\frac{\alpha}{2}&\biggr)C_-\\ \\[1ex] C_-'''&=i\biggl(\!&\sin\frac{\alpha}{2}&\biggr)C_+\!+ \phantom{i}\biggl(\!&\cos\frac{\alpha}{2}&\biggr)C_- \end{alignedat} \!\! % ebook remove \right\} \text{$\alpha$ about $x$}. \label{Eq:III:6:34} \end{equation} Here is our transformation for a rotation about the $x$-axis by any angle $\alpha$. It is only a little more complicated than the others. |
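The three-rotation construction can also be checked numerically. This sketch (Python; helper names are ours) multiplies out the $+90^\circ$-about-$y$, $\alpha$-about-$z'$, and $-90^\circ$-about-$y''$ matrices and compares the product with Eq. (6.34):

```python
import cmath, math

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def Rz(phi):
    # Eq. (6.19): C+' = e^{i phi/2} C+,  C-' = e^{-i phi/2} C-
    return ((cmath.exp(1j * phi / 2), 0), (0, cmath.exp(-1j * phi / 2)))

r = 1 / math.sqrt(2)
Ry90 = ((r, r), (-r, r))      # Eq. (6.32), +90 about y
Rym90 = ((r, -r), (r, r))     # Eq. (6.33), -90 about y

alpha = 0.7                   # any sample angle
# Rotations act in sequence, so the matrices multiply in reverse order:
M = matmul(Rym90, matmul(Rz(alpha), Ry90))

# Eq. (6.34): alpha about x
Rx = ((math.cos(alpha / 2), 1j * math.sin(alpha / 2)),
      (1j * math.sin(alpha / 2), math.cos(alpha / 2)))
assert all(abs(M[i][j] - Rx[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```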
|
3 | 6 | Spin One-Half | 6 | Arbitrary rotations | Now we can see how to do any angle at all. First, notice that any relative orientation of two coordinate frames can be described in terms of three angles, as shown in Fig. 6-9. If we have a set of axes $x'$, $y'$, and $z'$ oriented in any way at all with respect to $x$, $y$, and $z$, we can describe the relationship between the two frames by means of the three Euler angles $\alpha$, $\beta$, and $\gamma$, which define three successive rotations that will bring the $x$, $y$, $z$ frame into the $x'$, $y'$, $z'$ frame. Starting at $x$, $y$, $z$, we rotate our frame through the angle $\beta$ about the $z$-axis, bringing the $x$-axis to the line $x_1$. Then, we rotate by $\alpha$ about this temporary $x$-axis, to bring $z$ down to $z'$. Finally, a rotation about the new $z$-axis (that is, $z'$) by the angle $\gamma$ will bring the $x$-axis into $x'$ and the $y$-axis into $y'$.6 We know the transformations for each of the three rotations—they are given in (6.19) and (6.34). Combining them in the proper order, we get \begin{equation} \begin{alignedat}{3} C_+'&=\phantom{i}&\cos\frac{\alpha}{2}&e^{i(\beta+\gamma)/2}C_+\!+ i&\sin\frac{\alpha}{2}&e^{-i(\beta-\gamma)/2}C_-,\\ \\ C_-'&=i&\sin\frac{\alpha}{2}&e^{i(\beta-\gamma)/2}C_+\!+ \phantom{i}&\cos\frac{\alpha}{2}&e^{-i(\beta+\gamma)/2}C_-. \end{alignedat} \label{Eq:III:6:35} \end{equation} So just starting from some assumptions about the properties of space, we have derived the amplitude transformation for any rotation at all. That means that if we know the amplitudes for any state of a spin one-half particle to go into the two beams of a Stern-Gerlach apparatus $S$, whose axes are $x$, $y$, and $z$, we can calculate what fraction would go into either beam of an apparatus $T$ with the axes $x'$, $y'$, and $z'$. 
In other words, if we have a state $\psi$ of a spin one-half particle, whose amplitudes are $C_+=\braket{+}{\psi}$ and $C_-=\braket{-}{\psi}$ to be “up” and “down” with respect to the $z$-axis of the $x$, $y$, $z$ frame, we also know the amplitudes $C_+'$ and $C_-'$ to be “up” and “down” with respect to the $z'$-axis of any other frame $x'$, $y'$, $z'$. The four coefficients in Eqs. (6.35) are the terms of the “transformation matrix” with which we can project the amplitudes of a spin one-half particle into any other coordinate system. We will now work out a few examples to show you how it all works. Let’s take the following simple question. We put a spin one-half atom through a Stern-Gerlach apparatus that transmits only the $(+z)$ state. What is the amplitude that it will be in the $(+x)$ state? The $+x$ axis is the same as the $+z'$ axis of a system rotated $90^\circ$ about the $y$-axis. For this problem, then, it is simplest to use Eqs. (6.32)—although you could, of course, use the complete equations of (6.35). Since $C_+=1$ and $C_-=0$, we get $C_+'=1/\sqrt{2}$. The probabilities are the absolute square of these amplitudes; there is a $50$ percent chance that the particle will go through an apparatus that selects the $(+x)$ state. If we had asked about the $(-x)$ state the amplitude would have been $-1/\sqrt{2}$, which also gives a probability $1/2$—as you would expect from the symmetry of space. So if a particle is in the $(+z)$ state, it is equally likely to be in $(+x)$ or $(-x)$, but with opposite phase. There’s no prejudice in $y$ either. A particle in the $(+z)$ state has a $50$–$50$ chance of being in $(+y)$ or in $(-y)$. However, for these (using the formula for rotating $-90^\circ$ about $x$), the amplitudes are $1/\sqrt{2}$ and $-i/\sqrt{2}$. In this case, the two amplitudes have a phase difference of $90^\circ$ instead of $180^\circ$, as they did for the $(+x)$ and $(-x)$. In fact, that’s how the distinction between $x$ and $y$ shows up. 
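These amplitudes and phase relations are quick to confirm with numbers. A sketch (Python; variable names are ours) starting from a particle filtered into the $(+z)$ state:

```python
import math

r = 1 / math.sqrt(2)
C = (1, 0)       # the (+z) state: C+ = 1, C- = 0

# Amplitudes with respect to x: rotate +90 about y, Eq. (6.32).
up_x = r * (C[0] + C[1])          # amplitude for (+x)
down_x = r * (-C[0] + C[1])       # amplitude for (-x)

# Amplitudes with respect to y: rotate -90 about x, Eq. (6.34) with alpha = -pi/2.
a = -math.pi / 2
up_y = math.cos(a / 2) * C[0] + 1j * math.sin(a / 2) * C[1]
down_y = 1j * math.sin(a / 2) * C[0] + math.cos(a / 2) * C[1]

# All four probabilities are 1/2 -- no prejudice in x or y:
for amp in (up_x, down_x, up_y, down_y):
    assert abs(abs(amp) ** 2 - 0.5) < 1e-12

assert abs(down_x / up_x - (-1)) < 1e-12    # 180-degree phase difference in x
assert abs(down_y / up_y - (-1j)) < 1e-12   # 90-degree phase difference in y
```

The last two lines show exactly the distinction claimed in the text: the $(\pm x)$ amplitudes differ by a sign, the $(\pm y)$ amplitudes by a factor $-i$.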
As our final example, suppose that we know that a spin one-half particle is in a state $\psi$ such that it is polarized “up” along some axis $A$, defined by the angles $\theta$ and $\phi$ in Fig. 6-10. We want to know the amplitude $C_+$ that the particle is “up” along $z$ and the amplitude $C_-$ that it is “down” along $z$. We can find these amplitudes by imagining that $A$ is the $z$-axis of a system whose $x$-axis lies in some arbitrary direction—say in the plane formed by $A$ and $z$. We can then bring the frame of $A$ into $x$, $y$, $z$ by three rotations. First, we make a rotation by $-\pi/2$ about the axis $A$, which brings the $x$-axis into the line $B$ in the figure. Then we rotate by $\theta$ about line $B$ (the new $x$-axis of frame $A$) to bring $A$ to the $z$-axis. Finally, we rotate by the angle $(\pi/2-\phi)$ about $z$. Remembering that we have only a $(+)$ state with respect to $A$, we get \begin{equation} \label{Eq:III:6:36} C_+=\cos\frac{\theta}{2}\,e^{-i\phi/2},\quad C_-=\sin\frac{\theta}{2}\,e^{+i\phi/2}. \end{equation} We would like, finally, to summarize the results of this chapter in a form that will be useful for our later work. First, we remind you that our primary result in Eqs. (6.35) can be written in another notation. Note that Eqs. (6.35) mean just the same thing as Eq. (6.4). That is, in Eqs. (6.35) the coefficients of $C_+=\braket{+S}{\psi}$ and $C_-=\braket{-S}{\psi}$ are just the amplitudes $\braket{jT}{iS}$ of Eq. (6.4)—the amplitudes that a particle in the $i$-state with respect to $S$ will be in the $j$-state with respect to $T$ (when the orientation of $T$ with respect to $S$ is given in terms of the angles $\alpha$, $\beta$, and $\gamma$). We also called them $R_{ji}^{TS}$ in Eq. (6.6). (We have a plethora of notations!) For example, $R_{-+}^{TS}=\braket{-T}{+S}$ is the coefficient of $C_+$ in the formula for $C_-'$, namely, $i\sin\,(\alpha/2)\,e^{i(\beta-\gamma)/2}$. 
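One outside check on Eq. (6.36): the state "up along $A$" should be an eigenstate, with eigenvalue $+1$, of the spin matrix for the direction $(\theta,\phi)$. That matrix is built from the Pauli matrices, which are machinery from later chapters, so this sketch (Python) leans on standard results not yet introduced in the text:

```python
import cmath, math

theta, phi = 1.1, 0.4    # any sample direction for the axis A

# The state "up along A", Eq. (6.36):
Cp = math.cos(theta / 2) * cmath.exp(-1j * phi / 2)
Cm = math.sin(theta / 2) * cmath.exp(+1j * phi / 2)

# Standard spin matrix for the direction (theta, phi):
#   sigma_n = [[cos t, sin t e^{-i p}], [sin t e^{+i p}, -cos t]]
new_p = math.cos(theta) * Cp + math.sin(theta) * cmath.exp(-1j * phi) * Cm
new_m = math.sin(theta) * cmath.exp(1j * phi) * Cp - math.cos(theta) * Cm

# "Up along A" means eigenvalue +1; the state is also properly normalized:
assert abs(new_p - Cp) < 1e-12 and abs(new_m - Cm) < 1e-12
assert abs(abs(Cp) ** 2 + abs(Cm) ** 2 - 1) < 1e-12
```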
We can, therefore, make a summary of our results in the form of a table, as we have done in Table 6–1. It will occasionally be handy to have these amplitudes already worked out for some simple special cases. Let’s let $R_z(\phi)$ stand for a rotation by the angle $\phi$ about the $z$-axis. We can also let it stand for the corresponding rotation matrix (omitting the subscripts $i$ and $j$, which are to be implicitly understood). In the same spirit $R_x(\phi)$ and $R_y(\phi)$ will stand for rotations by the angle $\phi$ about the $x$-axis or the $y$-axis. We give in Table 6–2 the matrices—the tables of amplitudes $\braket{jT}{iS}$—which project the amplitudes from the $S$-frame into the $T$-frame, where $T$ is obtained from $S$ by the rotation specified. |
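The Euler-angle result (6.35) is just the product of the three rotation matrices taken in the order described, which a machine can confirm in a few lines. A numerical sketch (Python; names are ours):

```python
import cmath, math

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def Rz(phi):   # Eq. (6.19)
    return ((cmath.exp(1j * phi / 2), 0), (0, cmath.exp(-1j * phi / 2)))

def Rx(a):     # Eq. (6.34)
    return ((math.cos(a / 2), 1j * math.sin(a / 2)),
            (1j * math.sin(a / 2), math.cos(a / 2)))

alpha, beta, gamma = 0.9, 0.5, 1.3   # sample Euler angles
# beta about z, then alpha about the temporary x, then gamma about the new z:
M = matmul(Rz(gamma), matmul(Rx(alpha), Rz(beta)))

# Eq. (6.35), entry by entry:
expected = (
    (math.cos(alpha / 2) * cmath.exp(1j * (beta + gamma) / 2),
     1j * math.sin(alpha / 2) * cmath.exp(-1j * (beta - gamma) / 2)),
    (1j * math.sin(alpha / 2) * cmath.exp(1j * (beta - gamma) / 2),
     math.cos(alpha / 2) * cmath.exp(-1j * (beta + gamma) / 2)),
)
assert all(abs(M[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```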
|
3 | 7 | The Dependence of Amplitudes on Time | 1 | Atoms at rest; stationary states | We want now to talk a little bit about the behavior of probability amplitudes in time. We say a “little bit,” because the actual behavior in time necessarily involves the behavior in space as well. Thus, we get immediately into the most complicated possible situation if we are to do it correctly and in detail. We are always in the difficulty that we can either treat something in a logically rigorous but quite abstract way, or we can do something which is not at all rigorous but which gives us some idea of a real situation—postponing until later a more careful treatment. With regard to energy dependence, we are going to take the second course. We will make a number of statements. We will not try to be rigorous—but will just be telling you things that have been found out, to give you some feeling for the behavior of amplitudes as a function of time. As we go along, the precision of the description will increase, so don’t get nervous that we seem to be picking things out of the air. It is, of course, all out of the air—the air of experiment and of the imagination of people. But it would take us too long to go over the historical development, so we have to plunge in somewhere. We could plunge into the abstract and deduce everything—which you would not understand—or we could go through a large number of experiments to justify each statement. We choose to do something in between. An electron alone in empty space can, under certain circumstances, have a certain definite energy. For example, if it is standing still (so it has no translational motion, no momentum, or kinetic energy), it has its rest energy. A more complicated object like an atom can also have a definite energy when standing still, but it could also be internally excited to another energy level. (We will describe later the machinery of this.) 
We can often think of an atom in an excited state as having a definite energy, but this is really only approximately true. An atom doesn’t stay excited forever because it manages to discharge its energy by its interaction with the electromagnetic field. So there is some amplitude that a new state is generated—with the atom in a lower state, and the electromagnetic field in a higher state, of excitation. The total energy of the system is the same before and after, but the energy of the atom is reduced. So it is not precise to say an excited atom has a definite energy; but it will often be convenient and not too wrong to say that it does. [Incidentally, why does it go one way instead of the other way? Why does an atom radiate light? The answer has to do with entropy. When the energy is in the electromagnetic field, there are so many different ways it can be—so many different places where it can wander—that if we look for the equilibrium condition, we find that in the most probable situation the field is excited with a photon, and the atom is de-excited. It takes a very long time for the photon to come back and find that it can knock the atom back up again. It’s quite analogous to the classical problem: Why does an accelerating charge radiate? It isn’t that it “wants” to lose energy, because, in fact, when it radiates, the energy of the world is the same as it was before. Radiation or absorption goes in the direction of increasing entropy.] Nuclei can also exist in different energy levels, and in an approximation which disregards the electromagnetic effects, we can say that a nucleus in an excited state stays there. Although we know that it doesn’t stay there forever, it is often useful to start out with an approximation which is somewhat idealized and easier to think about. Also it is often a legitimate approximation under certain circumstances. 
(When we first introduced the classical laws of a falling body, we did not include friction, but there is almost never a case in which there isn’t some friction.) Then there are the subnuclear “strange particles,” which have various masses. But the heavier ones disintegrate into other light particles, so again it is not correct to say that they have a precisely definite energy. That would be true only if they lasted forever. So when we make the approximation that they have a definite energy, we are forgetting the fact that they must blow up. For the moment, then, we will intentionally forget about such processes and learn later how to take them into account. Suppose we have an atom—or an electron, or any particle—which at rest would have a definite energy $E_0$. By the energy $E_0$ we mean the mass of the whole thing times $c^2$. This mass includes any internal energy; so an excited atom has a mass which is different from the mass of the same atom in the ground state. (The ground state means the state of lowest energy.) We will call $E_0$ the “energy at rest.” For an atom at rest, the quantum mechanical amplitude to find an atom at a place is the same everywhere; it does not depend on position. This means, of course, that the probability of finding the atom anywhere is the same. But it means even more. The probability could be independent of position, and still the phase of the amplitude could vary from point to point. But for a particle at rest, the complete amplitude is identical everywhere. It does, however, depend on the time. For a particle in a state of definite energy $E_0$, the amplitude to find the particle at $(x,y,z)$ at the time $t$ is \begin{equation} \label{Eq:III:7:1} ae^{-i(E_0/\hbar)t}, \end{equation} where $a$ is some constant. The amplitude to be at any point in space is the same for all points, but depends on time according to (7.1). We shall simply assume this rule to be true. 
Of course, we could also write (7.1) as \begin{equation} \label{Eq:III:7:2} ae^{-i\omega t}, \end{equation} with \begin{equation*} \hbar\omega=E_0=Mc^2, \end{equation*} where $M$ is the rest mass of the atomic state, or particle. There are three different ways of specifying the energy: by the frequency of an amplitude, by the energy in the classical sense, or by the inertia. They are all equivalent; they are just different ways of saying the same thing. You may be thinking that it is strange to think of a “particle” which has equal amplitudes to be found throughout all space. After all, we usually imagine a “particle” as a small object located “somewhere.” But don’t forget the uncertainty principle. If a particle has a definite energy, it has also a definite momentum. If the uncertainty in momentum is zero, the uncertainty relation, $\Delta p\,\Delta x=\hbar$, tells us that the uncertainty in the position must be infinite, and that is just what we are saying when we say that there is the same amplitude to find the particle at all points in space. If the internal parts of an atom are in a different state with a different total energy, then the variation of the amplitude with time is different. If you don’t know in which state it is, there will be a certain amplitude to be in one state and a certain amplitude to be in another—and each of these amplitudes will have a different frequency. There will be an interference between these different components—like a beat-note—which can show up as a varying probability. Something will be “going on” inside of the atom—even though it is “at rest” in the sense that its center of mass is not drifting. However, if the atom has one definite energy, the amplitude is given by (7.1), and the absolute square of this amplitude does not depend on time. You see, then, that if a thing has a definite energy and if you ask any probability question about it, the answer is independent of time. 
Although the amplitudes vary with time, if the energy is definite they vary as an imaginary exponential, and the absolute value doesn’t change. That’s why we often say that an atom in a definite energy level is in a stationary state. If you make any measurements of the things inside, you’ll find that nothing (in probability) will change in time. In order to have the probabilities change in time, we have to have the interference of two amplitudes at two different frequencies, and that means that we cannot know what the energy is. The object will have one amplitude to be in a state of one energy and another amplitude to be in a state of another energy. That’s the quantum mechanical description of something when its behavior depends on time. If we have a “condition” which is a mixture of two different states with different energies, then the amplitude for each of the two states varies with time according to Eq. (7.2), for instance, as \begin{equation} \label{Eq:III:7:3} e^{-i(E_1/\hbar)t}\quad \text{and}\quad e^{-i(E_2/\hbar)t}. \end{equation} And if we have some combination of the two, we will have an interference. But notice that if we added a constant to both energies, it wouldn’t make any difference. If somebody else were to use a different scale of energy in which all the energies were increased (or decreased) by a constant amount—say, by the amount $A$—then the amplitudes in the two states would, from his point of view, be \begin{equation} \label{Eq:III:7:4} e^{-i(E_1+A)t/\hbar}\quad \text{and}\quad e^{-i(E_2+A)t/\hbar}. \end{equation} All of his amplitudes would be multiplied by the same factor $e^{-i(A/\hbar)t}$, and all linear combinations, or interferences, would have the same factor. When we take the absolute squares to find the probabilities, all the answers would be the same. The choice of an origin for our energy scale makes no difference; we can measure energy from any zero we want. 
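Both points can be illustrated with a tiny computation: the beat at the frequency $(E_2-E_1)/\hbar$, and the harmlessness of shifting the energy zero. The sample energies and amplitudes below are our own choices (Python, in units where $\hbar=1$):

```python
import cmath, math

hbar = 1.0
E1, E2 = 1.0, 1.5     # two energy levels (sample values)
a, b = 0.6, 0.8       # amplitudes to be in each level

def prob(t, shift=0.0):
    """Probability from a superposition of two definite-energy amplitudes,
    with an optional constant added to both energies."""
    amp = (a * cmath.exp(-1j * (E1 + shift) * t / hbar)
           + b * cmath.exp(-1j * (E2 + shift) * t / hbar))
    return abs(amp) ** 2

T = 2 * math.pi * hbar / (E2 - E1)     # beat period, set by the energy difference
assert prob(0.0) - prob(T / 2) > 0.5            # the probability really oscillates...
assert abs(prob(0.3) - prob(0.3 + T)) < 1e-12   # ...with period T

# Shifting both energies by the same constant A changes nothing observable:
A = 7.3
assert abs(prob(0.3) - prob(0.3, shift=A)) < 1e-12
```

With $b=0$ (a single definite energy) `prob` is constant in time, which is the stationary-state statement of the previous paragraph.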
For relativistic purposes it is nice to measure the energy so that the rest mass is included, but for many purposes that aren’t relativistic it is often nice to subtract some standard amount from all energies that appear. For instance, in the case of an atom, it is usually convenient to subtract the energy $M_sc^2$, where $M_s$ is the mass of all the separate pieces—the nucleus and the electrons—which is, of course, different from the mass of the atom. For other problems it may be useful to subtract from all energies the amount $M_gc^2$, where $M_g$ is the mass of the whole atom in the ground state; then the energy that appears is just the excitation energy of the atom. So, sometimes we may shift our zero of energy by some very large constant, but it doesn’t make any difference, provided we shift all the energies in a particular calculation by the same constant. So much for a particle standing still. |
|
3 | 7 | The Dependence of Amplitudes on Time | 2 | Uniform motion | If we suppose that the relativity theory is right, a particle at rest in one inertial system can be in uniform motion in another inertial system. In the rest frame of the particle, the probability amplitude is the same for all $x$, $y$, and $z$ but varies with $t$. The magnitude of the amplitude is the same for all $t$, but the phase depends on $t$. We can get a kind of a picture of the behavior of the amplitude if we plot lines of equal phase—say, lines of zero phase—as a function of $x$ and $t$. For a particle at rest, these equal-phase lines are parallel to the $x$-axis and are equally spaced in the $t$-coordinate, as shown by the dashed lines in Fig. 7–1. In a different frame—$x'$, $y'$, $z'$, $t'$—that is moving with respect to the particle in, say, the $x$-direction, the $x'$ and $t'$ coordinates of any particular point in space are related to $x$ and $t$ by the Lorentz transformation. This transformation can be represented graphically by drawing $x'$ and $t'$ axes, as is done in Fig. 7–1. (See Chapter 17, Vol. I, Fig. 17–2) You can see that in the $x'$-$t'$ system, points of equal phase1 have a different spacing along the $t'$-axis, so the frequency of the time variation is different. Also there is a variation of the phase with $x'$, so the probability amplitude must be a function of $x'$. Under a Lorentz transformation for the velocity $v$, say along the negative $x$-direction, the time $t$ is related to the time $t'$ by \begin{equation*} t=\frac{t'-x'v/c^2}{\sqrt{1-v^2/c^2}}, \end{equation*} so our amplitude now varies as \begin{equation*} e^{-(i/\hbar)E_0t}= e^{-(i/\hbar)(E_0t'/\sqrt{1-v^2/c^2}-E_0vx'/c^2\sqrt{1-v^2/c^2})}. \end{equation*} In the prime system it varies in space as well as in time. 
If we write the amplitude as \begin{equation*} e^{-(i/\hbar)(E_p't'-p'x')}, \end{equation*} we see that $E_p'=E_0/\sqrt{1-v^2/c^2}$ is the energy computed classically for a particle of rest energy $E_0$ travelling at the velocity $v$, and $p'=E_p'v/c^2$ is the corresponding particle momentum. You know that $x_\mu=(ct,x,y,z)$ and $p_\mu=(E/c,p_x,p_y,p_z)$ are four-vectors, and that $p_\mu x_\mu=Et-\FLPp\cdot\FLPx$ is a scalar invariant. In the rest frame of the particle, $p_\mu x_\mu$ is just $Et$; so if we transform to another frame, $Et$ will be replaced by \begin{equation*} E't'-\FLPp'\cdot\FLPx'. \end{equation*} Thus, the probability amplitude of a particle which has the momentum $\FLPp$ will be proportional to \begin{equation} \label{Eq:III:7:5} e^{-(i/\hbar)(E_pt-\FLPp\cdot\FLPx)}, \end{equation} where $E_p$ is the energy of the particle whose momentum is $p$, that is, \begin{equation} \label{Eq:III:7:6} E_p=\sqrt{(pc)^2+E_0^2}, \end{equation} where $E_0$ is, as before, the rest energy. For nonrelativistic problems, we can write \begin{equation} \label{Eq:III:7:7} E_p=M_sc^2+W_p, \end{equation} where $W_p$ is the energy over and above the rest energy $M_sc^2$ of the parts of the atom. In general, $W_p$ would include both the kinetic energy of the atom as well as its binding or excitation energy, which we can call the “internal” energy. We would write \begin{equation} \label{Eq:III:7:8} W_p=W_{\text{int}}+\frac{p^2}{2M}, \end{equation} and the amplitudes would be \begin{equation} \label{Eq:III:7:9} e^{-(i/\hbar)(W_pt-\FLPp\cdot\FLPx)}. \end{equation} Because we will generally be doing nonrelativistic calculations, we will use this form for the probability amplitudes. Note that our relativistic transformation has given us the variation of the amplitude of an atom which moves in space without any additional assumptions. 
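The bookkeeping here can be checked with sample numbers. The sketch below (Python, with $c=1$ and values of our own choosing) transforms an event on the particle's worldline and confirms that the phase $E_0t$ equals $E_p't'-p'x'$, and that Eq. (7.6) holds for $E_p'$ and $p'$:

```python
import math

c = 1.0
E0 = 2.0          # rest energy (sample value)
v = 0.6 * c       # speed of the particle in the primed frame

g = 1 / math.sqrt(1 - v**2 / c**2)
Ep = g * E0              # E' = E0 / sqrt(1 - v^2/c^2)
p = Ep * v / c**2        # p' = E' v / c^2

# Eq. (7.6) holds for these values:
assert abs(Ep - math.sqrt((p * c) ** 2 + E0 ** 2)) < 1e-12

# An event on the particle's worldline: in the rest frame it sits at x = 0.
t, x = 1.7, 0.0
# Lorentz transformation to the frame in which the particle moves at +v:
tp = g * (t + v * x / c**2)
xp = g * (x + v * t)
assert abs(xp - v * tp) < 1e-12     # the particle does move at v in that frame

# The invariant p_mu x_mu: E0*t in the rest frame equals E'*t' - p'*x':
assert abs(E0 * t - (Ep * tp - p * xp)) < 1e-12
```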
The wave number of the space variations is, from (7.9), \begin{equation} \label{Eq:III:7:10} k=\frac{p}{\hbar}; \end{equation} so the wavelength is \begin{equation} \label{Eq:III:7:11} \lambda=\frac{2\pi}{k}=\frac{h}{p}. \end{equation} This is the same wavelength we have used before for particles with the momentum $p$. This formula was first arrived at by de Broglie in just this way. For a moving particle, the frequency of the amplitude variations is still given by \begin{equation} \label{Eq:III:7:12} \hbar\omega=W_p. \end{equation} The absolute square of (7.9) is just $1$, so for a particle in motion with a definite energy, the probability of finding it is the same everywhere and does not change with time. (It is important to notice that the amplitude is a complex wave. If we used a real sine wave, the square would vary from point to point, which would not be right.) We know, of course, that there are situations in which particles move from place to place so that the probability depends on position and changes with time. How do we describe such situations? We can do that by considering amplitudes which are a superposition of two or more amplitudes for states of definite energy. We have already discussed this situation in Chapter 48 of Vol. I—even for probability amplitudes! We found that the sum of two amplitudes with different wave numbers $k$ (that is, momenta) and frequencies $\omega$ (that is, energies) gives interference humps, or beats, so that the square of the amplitude varies with space and time. We also found that these beats move with the so-called “group velocity” given by \begin{equation*} v_g=\frac{\Delta\omega}{\Delta k}, \end{equation*} where $\Delta k$ and $\Delta\omega$ are the differences between the wave numbers and frequencies for the two waves. For more complicated waves—made up of the sum of many amplitudes all near the same frequency—the group velocity is \begin{equation} \label{Eq:III:7:13} v_g=\ddt{\omega}{k}. 
\end{equation} Taking $\omega=E_p/\hbar$ and $k=p/\hbar$, we see that \begin{equation} \label{Eq:III:7:14} v_g=\ddt{E_p}{p}. \end{equation} Using Eq. (7.6), we have \begin{equation} \label{Eq:III:7:15} \ddt{E_p}{p}=c^2\,\frac{p}{E_p}. \end{equation} At nonrelativistic speeds $E_p\approx Mc^2$, so \begin{equation} \label{Eq:III:7:16} \ddt{E_p}{p}=\frac{p}{M}, \end{equation} which is just the classical velocity of the particle. Alternatively, if we use the nonrelativistic expressions Eqs. (7.7) and (7.8), we have \begin{equation} \omega=\frac{W_p}{\hbar}\quad \text{and}\quad k=\frac{p}{\hbar},\notag \end{equation} and \begin{equation} \label{Eq:III:7:17} \ddt{\omega}{k}=\ddt{W_p}{p}=\ddt{}{p}\biggl(\frac{p^2}{2M}\biggr)= \frac{p}{M}, \end{equation} which is again the classical velocity. Our result, then, is that if we have several amplitudes for pure energy states of nearly the same energy, their interference gives “lumps” in the probability that move through space with a velocity equal to the velocity of a classical particle of that energy. We should remark, however, that when we say we can add two amplitudes of different wave number together to get a beat-note that will correspond to a moving particle, we have introduced something new—something that we cannot deduce from the theory of relativity. We said what the amplitude did for a particle standing still and then deduced what it would do if the particle were moving. But we cannot deduce from these arguments what would happen when there are two waves moving with different speeds. If we stop one, we cannot stop the other. So we have added tacitly the extra hypothesis that not only is (7.9) a possible solution, but that there can also be solutions with all kinds of $p$’s for the same system, and that the different terms will interfere.
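The chain of results (7.14)–(7.16) can be checked numerically: a finite-difference $dE_p/dp$ agrees with $c^2p/E_p$ and, for small momentum, with the classical $p/M$. The sketch below uses an electron with the same illustrative speed of $10^6$ m/s; the numbers are ours, not the text's.

```python
# Sketch: the group velocity v_g = dE_p/dp of Eq. (7.14) equals c^2 p/E_p,
# Eq. (7.15), and reduces to the classical p/M, Eq. (7.16), at small momenta.
# Electron values are illustrative.
import math

c = 2.998e8        # speed of light, m/s
M = 9.109e-31      # electron mass, kg
E0 = M * c**2      # rest energy

def E_p(p):
    # relativistic energy, Eq. (7.6)
    return math.sqrt((p * c)**2 + E0**2)

p = M * 1.0e6                  # a nonrelativistic momentum (v = 1e6 m/s)
dp = p * 1e-3
v_g = (E_p(p + dp) - E_p(p - dp)) / (2 * dp)   # numerical dE_p/dp

print(v_g, c**2 * p / E_p(p), p / M)   # all three are ~1.0e6 m/s
```

The three numbers agree to a few parts in $10^5$; the tiny discrepancy from $p/M$ is just the relativistic correction $\tfrac12(v/c)^2$.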
7–3 Potential energy; energy conservation

Now we would like to discuss what happens when the energy of a particle can change. We begin by thinking of a particle which moves in a force field described by a potential. We discuss first the effect of a constant potential. Suppose that we have a large metal can which we have raised to some electrostatic potential $\phi$, as in Fig. 7–2. If there are charged objects inside the can, their potential energy will be $q\phi$, which we will call $V$, and will be absolutely independent of position. Then there can be no change in the physics inside, because the constant potential doesn’t make any difference so far as anything going on inside the can is concerned. Now there is no way we can deduce what the answer should be, so we must make a guess. The guess which works is more or less what you might expect: For the energy, we must use the sum of the potential energy $V$ and the energy $E_p$—which is itself the sum of the internal and kinetic energies. The amplitude is proportional to \begin{equation} \label{Eq:III:7:18} e^{-(i/\hbar)[(E_p+V)t-\FLPp\cdot\FLPx]}. \end{equation} The general principle is that the coefficient of $t$, which we may call $\omega$, is always given by the total energy of the system: internal (or “mass”) energy, plus kinetic energy, plus potential energy: \begin{equation} \label{Eq:III:7:19} \hbar\omega=E_p+V. \end{equation} Or, for nonrelativistic situations, \begin{equation} \label{Eq:III:7:20} \hbar\omega=W_{\text{int}}+\frac{p^2}{2M}+V. \end{equation} Now what about physical phenomena inside the box? If there are several different energy states, what will we get? The amplitude for each state has the same additional factor \begin{equation*} e^{-(i/\hbar)Vt} \end{equation*} over what it would have with $V=0$. That is just like a change in the zero of our energy scale.
It produces an equal phase change in all amplitudes, but as we have seen before, this doesn’t change any of the probabilities. All the physical phenomena are the same. (We have assumed that we are talking about different states of the same charged object, so that $q\phi$ is the same for all. If an object could change its charge in going from one state to another, we would have quite another result, but conservation of charge prevents this.) So far, our assumption agrees with what we would expect for a change of energy reference level. But if it is really right, it should hold for a potential energy that is not just a constant. In general, $V$ could vary in any arbitrary way with both time and space, and the complete result for the amplitude must be given in terms of a differential equation. We don’t want to get concerned with the general case right now, but only want to get some idea about how some things happen, so we will think only of a potential that is constant in time and varies very slowly in space. Then we can make a comparison between the classical and quantum ideas. Suppose we think of the situation in Fig. 7–3, which has two boxes held at the constant potentials $\phi_1$ and $\phi_2$ and a region in between where we will assume that the potential varies smoothly from one to the other. We imagine that some particle has an amplitude to be found in any one of the regions. We also assume that the momentum is large enough so that in any small region in which there are many wavelengths, the potential is nearly constant. We would then think that in any part of the space the amplitude ought to look like (7.18) with the appropriate $V$ for that part of the space. Let’s think of a special case in which $\phi_1=0$, so that the potential energy there is zero, but in which $q\phi_2$ is negative, so that classically the particle would have more energy in the second box. 
Classically, it would be going faster in the second box—it would have more energy and, therefore, more momentum. Let’s see how that might come out of quantum mechanics. With our assumption, the amplitude in the first box would be proportional to \begin{equation} \label{Eq:III:7:21} e^{-(i/\hbar)[(W_{\text{int}}+p_1^2/2M+V_1)t-\FLPp_1\cdot\FLPx]}, \end{equation} and the amplitude in the second box would be proportional to \begin{equation} \label{Eq:III:7:22} e^{-(i/\hbar)[(W_{\text{int}}+p_2^2/2M+V_2)t-\FLPp_2\cdot\FLPx]}. \end{equation} (Let’s say that the internal energy is not being changed, but remains the same in both regions.) The question is: How do these two amplitudes match together through the region between the boxes? We are going to suppose that the potentials are all constant in time—so that nothing in the conditions varies. We will then suppose that the variations of the amplitude (that is, its phase) have the same frequency everywhere—because, so to speak, there is nothing in the “medium” that depends on time. If nothing in the space is changing, we can consider that the wave in one region “generates” subsidiary waves all over space which will all oscillate at the same frequency—just as light waves going through materials at rest do not change their frequency. If the frequencies in (7.21) and (7.22) are the same, we must have that \begin{equation} \label{Eq:III:7:23} W_{\text{int}}+\frac{p_1^2}{2M}+V_1= W_{\text{int}}+\frac{p_2^2}{2M}+V_2. \end{equation} Both sides are just the classical total energies, so Eq. (7.23) is a statement of the conservation of energy. In other words, the classical statement of the conservation of energy is equivalent to the quantum mechanical statement that the frequencies for a particle are everywhere the same if the conditions are not changing with time. It all fits with the idea that $\hbar\omega=E$. In the special example that $V_1=0$ and $V_2$ is negative, Eq. 
(7.23) gives that $p_2$ is greater than $p_1$, so the wavelength of the waves is shorter in region $2$. The surfaces of equal phase are shown by the dashed lines in Fig. 7–3. We have also drawn a graph of the real part of the amplitude, which shows again how the wavelength decreases in going from region $1$ to region $2$. The group velocity of the waves, which is $p/M$, also increases in the way one would expect from the classical energy conservation, since it is just the same as Eq. (7.23). There is an interesting special case where $V_2$ gets so large that $V_2-V_1$ is greater than $p_1^2/2M$. Then $p_2^2$, which is given by \begin{equation} \label{Eq:III:7:24} p_2^2=2M\biggl[\frac{p_1^2}{2M}-V_2+V_1\biggr], \end{equation} is negative. That means that $p_2$ is an imaginary number, say, $ip'$. Classically, we would say that the particle never gets into region $2$—it doesn’t have enough energy to climb the potential hill. Quantum mechanically, however, the amplitude is still given by Eq. (7.22); its space variation still goes as \begin{equation*} e^{(i/\hbar)\FLPp_2\cdot\FLPx}. \end{equation*} But if $p_2$ is imaginary, the space dependence becomes a real exponential. Say that the particle was initially going in the $+x$-direction; then the amplitude would vary as \begin{equation} \label{Eq:III:7:25} e^{-p'x/\hbar}. \end{equation} The amplitude decreases rapidly with increasing $x$. Imagine that the two regions at different potentials were very close together, so that the potential energy changed suddenly from $V_1$ to $V_2$, as shown in Fig. 7–4(a). If we plot the real part of the probability amplitude, we get the dependence shown in part (b) of the figure. The wave in the first region corresponds to a particle trying to get into the second region, but the amplitude there falls off rapidly. There is some chance that it will be observed in the second region—where it could never get classically—but the amplitude is very small except right near the boundary. 
The situation is very much like what we found for the total internal reflection of light. The light doesn’t normally get out, but we can observe it if we put something within a wavelength or two of the surface. You will remember that if we put a second surface close to the boundary where light was totally reflected, we could get some light transmitted into the second piece of material. The corresponding thing happens to particles in quantum mechanics. If there is a narrow region with a potential $V$, so great that the classical kinetic energy would be negative, the particle would classically never get past. But quantum mechanically, the exponentially decaying amplitude can reach across the region and give a small probability that the particle will be found on the other side where the kinetic energy is again positive. The situation is illustrated in Fig. 7–5. This effect is called the quantum mechanical “penetration of a barrier.” The barrier penetration by a quantum mechanical amplitude gives the explanation—or description—of the $\alpha$-particle decay of a uranium nucleus. The potential energy of an $\alpha$-particle, as a function of the distance from the center, is shown in Fig. 7–6(a). If one tried to shoot an $\alpha$-particle with the energy $E$ into the nucleus, it would feel an electrostatic repulsion from the nuclear charge $z$ and would, classically, get no closer than the distance $r_1$ where its total energy is equal to the potential energy $V$. Closer in, however, the potential energy is much lower because of the strong attraction of the short-range nuclear forces. How is it then that in radioactive decay we find $\alpha$-particles which started out inside the nucleus coming out with the energy $E$? Because they start out with the energy $E$ inside the nucleus and “leak” through the potential barrier. The probability amplitude is roughly as sketched in part (b) of Fig. 7–6, although actually the exponential decay is much larger than shown. 
It is, in fact, quite remarkable that the mean life of an $\alpha$-particle in the uranium nucleus is as long as $4\tfrac{1}{2}$ billion years, when the natural oscillations inside the nucleus are so extremely rapid—about $10^{22}$ per sec! How can one get a number like $10^9$ years from $10^{-22}$ sec? The answer is that the exponential gives the tremendously small factor of about $e^{-45}$—which gives the very small, though definite, probability of leakage. Once the $\alpha$-particle is in the nucleus, there is almost no amplitude at all for finding it outside; however, if you take many nuclei and wait long enough, you may be lucky and find one that has come out.
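To get a feeling for the size of such exponential factors, here is a rough one-dimensional sketch of the decaying amplitude of Eq. (7.25). The barrier height, width, and particle energy are invented for illustration; they are not the uranium numbers.

```python
# Rough sketch of barrier penetration: inside a classically forbidden strip
# the amplitude falls as exp(-p'x/hbar), Eq. (7.25), with p' = sqrt(2M(V-E))
# coming from Eq. (7.24). Barrier and particle values are invented.
import math

hbar = 1.055e-34       # J*s
M = 9.109e-31          # electron mass, kg
eV = 1.602e-19         # one electron volt, J
E = 1.0 * eV           # particle energy
V = 2.0 * eV           # barrier height; V > E, so p2^2 of Eq. (7.24) is negative
w = 5.0e-10            # barrier width, m

p_prime = math.sqrt(2 * M * (V - E))         # |p2| when p2 = i p'
attenuation = math.exp(-p_prime * w / hbar)  # amplitude ratio across the strip

print(f"amplitude reduced by a factor {attenuation:.3f}")   # ~0.08
```

Even this modest electron-scale barrier cuts the amplitude by an order of magnitude; making the barrier a few times wider or higher quickly produces factors like the $e^{-45}$ quoted for uranium.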
7–4 Forces; the classical limit

Suppose that we have a particle moving along and passing through a region where there is a potential that varies at right angles to the motion. Classically, we would describe the situation as sketched in Fig. 7–7. If the particle is moving along the $x$-direction and enters a region where there is a potential that varies with $y$, the particle will get a transverse acceleration from the force $F=-\ddpl{V}{y}$. If the force is present only in a limited region of width $w$, the force will act only for the time $w/v$. The particle will be given the transverse momentum \begin{equation*} p_y=F\,\frac{w}{v}. \end{equation*} The angle of deflection $\delta\theta$ is then \begin{equation*} \delta\theta=\frac{p_y}{p}=\frac{Fw}{pv}, \end{equation*} where $p$ is the initial momentum. Using $-\ddpl{V}{y}$ for $F$, we get \begin{equation} \label{Eq:III:7:26} \delta\theta=-\frac{w}{pv}\,\ddp{V}{y}. \end{equation} It is now up to us to see if our idea that the waves go as (7.20) will explain the same result. We look at the same thing quantum mechanically, assuming that everything is on a very large scale compared with a wavelength of our probability amplitudes. In any small region we can say that the amplitude varies as \begin{equation} \label{Eq:III:7:27} e^{-(i/\hbar)[(W+p^2/2M+V)t-\FLPp\cdot\FLPx]}. \end{equation} Can we see that this will also give rise to a deflection of the particle when $V$ has a transverse gradient? We have sketched in Fig. 7–8 what the waves of probability amplitude will look like. We have drawn a set of “wave nodes” which you can think of as surfaces where the phase of the amplitude is zero. In every small region, the wavelength—the distance between successive nodes—is \begin{equation*} \lambda=\frac{h}{p}, \end{equation*} where $p$ is related to $V$ through \begin{equation} \label{Eq:III:7:28} W+\frac{p^2}{2M}+V=\text{const}.
\end{equation} In the region where $V$ is larger, $p$ is smaller, and the wavelength is longer. So the angle of the wave nodes gets changed as shown in the figure. To find the change in angle of the wave nodes we notice that for the two paths $a$ and $b$ in Fig. 7–8 there is a difference of potential $\Delta V=(\ddpl{V}{y})D$, so there is a difference $\Delta p$ in the momentum along the two tracks which can be obtained from (7.28): \begin{equation} \label{Eq:III:7:29} \Delta\biggl(\frac{p^2}{2M}\biggr)=\frac{p}{M}\,\Delta p= -\Delta V. \end{equation} The wave number $p/\hbar$ is, therefore, different along the two paths, which means that the phase is advancing at a different rate. The difference in the rate of increase of phase is $\Delta k=\Delta p/\hbar$, so the accumulated phase difference in the total distance $w$ is \begin{equation} \label{Eq:III:7:30} \Delta(\text{phase})=\Delta k\cdot w= \frac{\Delta p}{\hbar}\cdot w= -\frac{M}{p\hbar}\,\Delta V\cdot w. \end{equation} This is the amount by which the phase on path $b$ is “ahead” of the phase on path $a$ as the wave leaves the strip. But outside the strip, a phase advance of this amount corresponds to the wave node being ahead by the amount \begin{equation} \Delta x=\frac{\lambda}{2\pi}\,\Delta(\text{phase})= \frac{\hbar}{p}\,\Delta(\text{phase})\notag \end{equation} or \begin{equation} \label{Eq:III:7:31} \Delta x=-\frac{M}{p^2}\,\Delta V\cdot w. \end{equation} Referring to Fig. 7–8, we see that the new wavefronts will be at the angle $\delta\theta$ given by \begin{equation} \label{Eq:III:7:32} \Delta x=D\,\delta\theta; \end{equation} so we have \begin{equation} \label{Eq:III:7:33} D\,\delta\theta=-\frac{M}{p^2}\,\Delta V\cdot w. \end{equation} This is identical to Eq. (7.26) if we replace $p/M$ by $v$ and $\Delta V/D$ by $\ddpl{V}{y}$. The result we have just got is correct only if the potential variations are slow and smooth—in what we call the classical limit. 
We have shown that under these conditions we will get the same particle motions we get from $F=ma$, provided we assume that a potential contributes a phase to the probability amplitude equal to $Vt/\hbar$. In the classical limit, the quantum mechanics will agree with Newtonian mechanics.
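The equality of the wave result (7.33) with the classical result (7.26) is easy to confirm numerically; the values below are arbitrary units chosen just for the check.

```python
# Cross-check of Eq. (7.33) against Eq. (7.26): the wavefront tilt
# delta_theta = -(M/p^2) * dV * w / D should equal the classical
# -(w/(p v)) * dV/dy, with v = p/M and dV/dy = dV/D. Arbitrary units.
M = 2.0       # mass
p = 3.0       # momentum
w = 0.5       # width of the strip where the force acts
D = 0.1       # transverse separation of the paths a and b of Fig. 7-8
dVdy = 4.0    # transverse potential gradient
dV = dVdy * D # potential difference between the two paths

v = p / M
classical = -(w / (p * v)) * dVdy          # Eq. (7.26)
quantum = -(M / p**2) * dV * w / D         # Eq. (7.33), solved for delta_theta

print(classical, quantum)   # the two deflection angles are identical
```

The two expressions agree term by term once $p/M$ is read as $v$ and $\Delta V/D$ as $\ddpl{V}{y}$, which is exactly the substitution made in the text.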
7–5 The “precession” of a spin one-half particle

Notice that we have not assumed anything special about the potential energy—it is just that energy whose derivative gives a force. For instance, in the Stern-Gerlach experiment we had the energy $U=-\FLPmu\cdot\FLPB$, which gives a force if $\FLPB$ has a spatial variation. If we wanted to give a quantum mechanical description, we would have said that the particles in one beam had an energy that varied one way and that those in the other beam had an opposite energy variation. (We could put the magnetic energy $U$ into the potential energy $V$ or into the “internal” energy $W$; it doesn’t matter.) Because of the energy variation, the waves are refracted, and the beams are bent up or down. (We see now that quantum mechanics would give us the same bending as we would compute from the classical mechanics.) From the dependence of the amplitude on potential energy we would also expect that if a particle sits in a uniform magnetic field along the $z$-direction, its probability amplitude must be changing with time according to \begin{equation*} e^{-(i/\hbar)(-\mu_zB)t}. \end{equation*} (We can consider that this is, in effect, a definition of $\mu_z$.) In other words, if we place a particle in a uniform field $B$ for a time $\tau$, its probability amplitude will be multiplied by \begin{equation*} e^{-(i/\hbar)(-\mu_zB)\tau} \end{equation*} over what it would be in no field. Since for a spin one-half particle, $\mu_z$ can be either plus or minus some number, say $\mu$, the two possible states in a uniform field would have their phases changing at the same rate but in opposite directions. The two amplitudes get multiplied by \begin{equation} \label{Eq:III:7:34} e^{\pm(i/\hbar)\mu B\tau}. \end{equation} This result has some interesting consequences. Suppose we have a spin one-half particle in some state that is not purely spin up or spin down.
We can describe its condition in terms of the amplitudes to be in the pure up and pure down states. But in a magnetic field, these two states will have phases changing at a different rate. So if we ask some question about the amplitudes, the answer will depend on how long it has been in the field. As an example, we consider the disintegration of the muon in a magnetic field. When muons are produced as disintegration products of $\pi$-mesons, they are polarized (in other words, they have a preferred spin direction). The muons, in turn, disintegrate—in about $2.2$ microseconds on the average—emitting an electron and two neutrinos: \begin{equation*} \mu\to e+\nu+\bar{\nu}. \end{equation*} In this disintegration it turns out that (for at least the highest energies) the electrons are emitted preferentially in the direction opposite to the spin direction of the muon. Suppose then that we consider the experimental arrangement shown in Fig. 7–9. If polarized muons enter from the left and are brought to rest in a block of material at $A$, they will, a little while later, disintegrate. The electrons emitted will, in general, go off in all possible directions. Suppose, however, that the muons all enter the stopping block at $A$ with their spins in the $x$-direction. Without a magnetic field there would be some angular distribution of decay directions; we would like to know how this distribution is changed by the presence of the magnetic field. We expect that it may vary in some way with time. We can find out what happens by asking, for any moment, what the amplitude is that the muon will be found in the $(+x)$ state. We can state the problem in the following way: A muon is known to have its spin in the $+x$-direction at $t=0$; what is the amplitude that it will be in the same state at the time $\tau$? 
Now we do not have any rule for the behavior of a spin one-half particle in a magnetic field at right angles to the spin, but we do know what happens to the spin up and spin down states with respect to the field—their amplitudes get multiplied by the factor (7.34). Our procedure then is to choose the representation in which the base states are spin up and spin down with respect to the $z$-direction (the field direction). Any question can then be expressed with reference to the amplitudes for these states. Let’s say that $\psi(t)$ represents the muon state. When it enters the block $A$, its state is $\psi(0)$, and we want to know $\psi(\tau)$ at the later time $\tau$. If we represent the two base states by $(+z)$ and $(-z)$ we know the two amplitudes $\braket{+z}{\psi(0)}$ and $\braket{-z}{\psi(0)}$—we know these amplitudes because we know that $\psi(0)$ represents a state with the spin in the $(+x)$ state. From the results of the last chapter, these amplitudes are \begin{equation} \begin{aligned} \braket{+z}{+x}&=C_+=\frac{1}{\sqrt{2}},\\[1ex] \braket{-z}{+x}&=C_-=\frac{1}{\sqrt{2}}. \end{aligned} \label{Eq:III:7:35} \end{equation} They happen to be equal. Since these amplitudes refer to the condition at $t=0$, let’s call them $C_+(0)$ and $C_-(0)$. Now we know what happens to these two amplitudes with time. Using (7.34), we have \begin{equation} \begin{aligned} C_+(t)&=C_+(0)e^{-(i/\hbar)\mu Bt},\\[1ex] C_-(t)&=C_-(0)e^{+(i/\hbar)\mu Bt}. \end{aligned} \label{Eq:III:7:36} \end{equation} But if we know $C_+(t)$ and $C_-(t)$, we have all there is to know about the condition at $t$. The only trouble is that what we want to know is the probability that at $t$ the spin will be in the $+x$-direction. Our general rules can, however, take care of this problem.
We write that the amplitude to be in the $(+x)$ state at time $t$, which we may call $A_+(t)$, is \begin{equation*} A_+(t)=\braket{+x}{\psi(t)}= \braket{+x}{+z}\braket{+z}{\psi(t)}+ \braket{+x}{-z}\braket{-z}{\psi(t)} \end{equation*} or \begin{equation} \label{Eq:III:7:37} A_+(t)=\braket{+x}{+z}C_+(t)+\braket{+x}{-z}C_-(t). \end{equation} Again using the results of the last chapter—or better the equality $\braket{\phi}{\chi}=\braket{\chi}{\phi}\cconj$ from Chapter 5—we know that \begin{equation*} \braket{+x}{+z}=\frac{1}{\sqrt{2}},\quad \braket{+x}{-z}=\frac{1}{\sqrt{2}}. \end{equation*} So we know all the quantities in Eq. (7.37). We get \begin{equation*} A_+(t)=\tfrac{1}{2}e^{(i/\hbar)\mu Bt}+ \tfrac{1}{2}e^{-(i/\hbar)\mu Bt}, \end{equation*} or \begin{equation*} A_+(t)=\cos\frac{\mu B}{\hbar}\,t. \end{equation*} A particularly simple result! Notice that the answer agrees with what we expect for $t=0$. We get $A_+(0)=1$, which is right, because we assumed that the muon was in the $(+x)$ state at $t=0$. The probability $P_+$ that the muon will be found in the $(+x)$ state at $t$ is $(A_+)^2$ or \begin{equation*} P_+=\cos^2\frac{\mu Bt}{\hbar}. \end{equation*} The probability oscillates between zero and one, as shown in Fig. 7–10. Note that the probability returns to one for $\mu Bt/\hbar=\pi$ (not $2\pi$). Because we have squared the cosine function, the probability repeats itself with the frequency $2\mu B/\hbar$. Thus, we find that the chance of catching the decay electron in the electron counter of Fig. 7–9 varies periodically with the length of time the muon has been sitting in the magnetic field. The frequency depends on the magnetic moment $\mu$. The magnetic moment of the muon has, in fact, been measured in just this way. We can, of course, use the same method to answer any other questions about the muon decay. For example, how does the chance of detecting a decay electron in the $y$-direction at $90^\circ$ to the $x$-direction but still at right angles to the field depend on $t$?
If you work it out, the probability to be in the $(+y)$ state varies as $\cos^2\{(\mu Bt/\hbar)-\pi/4\}$, which oscillates with the same period but reaches its maximum one-quarter cycle later, when $\mu Bt/\hbar=\pi/4$. In fact, what is happening is that as time goes on, the muon goes through a succession of states which correspond to complete polarization in a direction that is continually rotating about the $z$-axis. We can describe this by saying that the spin is precessing at the frequency \begin{equation} \label{Eq:III:7:38} \omega_p=\frac{2\mu B}{\hbar}. \end{equation} You can begin to see the form that our quantum mechanical description will take when we are describing how things behave in time.
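The whole calculation of Eqs. (7.35)–(7.37) can be put into a few lines; here $\mu B/\hbar$ is set to $1$ in arbitrary units, so time is measured in units of $\hbar/\mu B$.

```python
# Sketch of the muon precession: C+(t) and C-(t) from Eq. (7.36), projected
# back onto the (+x) state via Eq. (7.37). With mu*B/hbar = 1 the probability
# should come out as cos^2(t).
import cmath, math

def prob_plus_x(t, muB_over_hbar=1.0):
    # amplitudes in the z-representation for a spin starting along +x,
    # Eqs. (7.35) and (7.36)
    C_plus = (1 / math.sqrt(2)) * cmath.exp(-1j * muB_over_hbar * t)
    C_minus = (1 / math.sqrt(2)) * cmath.exp(+1j * muB_over_hbar * t)
    # project back onto the (+x) state, Eq. (7.37)
    A_plus = (1 / math.sqrt(2)) * C_plus + (1 / math.sqrt(2)) * C_minus
    return abs(A_plus) ** 2

print(prob_plus_x(0.0))          # ≈ 1: the muon starts in the (+x) state
print(prob_plus_x(math.pi / 2))  # ≈ 0: the spin has precessed to (-x)
print(prob_plus_x(math.pi))      # ≈ 1 again: the period is pi, as in Fig. 7-10
```

Changing the projection coefficients to those for the $(+y)$ state reproduces the quarter-cycle shift described above.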
8 The Hamiltonian Matrix

8–1 Amplitudes and vectors

Before we begin the main topic of this chapter, we would like to describe a number of mathematical ideas that are used a lot in the literature of quantum mechanics. Knowing them will make it easier for you to read other books or papers on the subject. The first idea is the close mathematical resemblance between the equations of quantum mechanics and those of the scalar product of two vectors. You remember that if $\chi$ and $\phi$ are two states, the amplitude to start in $\phi$ and end up in $\chi$ can be written as a sum over a complete set of base states of the amplitude to go from $\phi$ into one of the base states and then from that base state out again into $\chi$: \begin{equation} \label{Eq:III:8:1} \braket{\chi}{\phi}= \sum_{\text{all $i$}}\braket{\chi}{i}\braket{i}{\phi}. \end{equation} We explained this in terms of a Stern-Gerlach apparatus, but we remind you that there is no need to have the apparatus. Equation (8.1) is a mathematical law that is just as true whether we put the filtering equipment in or not—it is not always necessary to imagine that the apparatus is there. We can think of it simply as a formula for the amplitude $\braket{\chi}{\phi}$. We would like to compare Eq. (8.1) to the formula for the dot product of two vectors $\FLPB$ and $\FLPA$. If $\FLPB$ and $\FLPA$ are ordinary vectors in three dimensions, we can write the dot product this way: \begin{equation} \label{Eq:III:8:2} \sum_{\text{all $i$}}(\FLPB\cdot\FLPe_i)(\FLPe_i\cdot\FLPA), \end{equation} with the understanding that the symbol $\FLPe_i$ stands for the three unit vectors in the $x$, $y$, and $z$-directions. Then $\FLPB\cdot\FLPe_1$ is what we ordinarily call $B_x$; $\FLPB\cdot\FLPe_2$ is what we ordinarily call $B_y$; and so on. So Eq. (8.2) is equivalent to \begin{equation*} B_xA_x+B_yA_y+B_zA_z, \end{equation*} which is the dot product $\FLPB\cdot\FLPA$. Comparing Eqs.
(8.1) and (8.2), we can see the following analogy: The states $\chi$ and $\phi$ correspond to the two vectors $\FLPB$ and $\FLPA$. The base states $i$ correspond to the special vectors $\FLPe_i$ to which we refer all other vectors. Any vector can be represented as a linear combination of the three “base vectors” $\FLPe_i$. Furthermore, if you know the coefficients of each “base vector” in this combination—that is, its three components—you know everything about a vector. In a similar way, any quantum mechanical state can be described completely by the amplitude $\braket{i}{\phi}$ to go into the base states; and if you know these coefficients, you know everything there is to know about the state. Because of this close analogy, what we have called a “state” is often also called a “state vector.” Since the base vectors $\FLPe_i$ are all at right angles, we have the relation \begin{equation} \label{Eq:III:8:3} \FLPe_i\cdot\FLPe_j=\delta_{ij}. \end{equation} This corresponds to the relations (5.25) among the base states $i$, \begin{equation} \label{Eq:III:8:4} \braket{i}{j}=\delta_{ij}. \end{equation} You see now why one says that the base states $i$ are all “orthogonal.” There is one minor difference between Eq. (8.1) and the dot product. We have that \begin{equation} \label{Eq:III:8:5} \braket{\phi}{\chi}=\braket{\chi}{\phi}\cconj. \end{equation} But in vector algebra \begin{equation*} \FLPA\cdot\FLPB=\FLPB\cdot\FLPA.\notag \end{equation*} With the complex numbers of quantum mechanics we have to keep straight the order of the terms, whereas in the dot product, the order doesn’t matter. Now consider the following vector equation: \begin{equation} \label{Eq:III:8:6} \FLPA=\sum_i\FLPe_i(\FLPe_i\cdot\FLPA). \end{equation} It’s a little unusual, but correct. It means the same thing as \begin{equation} \label{Eq:III:8:7} \FLPA=\sum_iA_i\FLPe_i=A_x\FLPe_x+A_y\FLPe_y+A_z\FLPe_z. \end{equation} Notice, though, that Eq. 
(8.6) involves a quantity which is different from a dot product. A dot product is just a number, whereas Eq. (8.6) is a vector equation. One of the great tricks of vector analysis was to abstract away from the equations the idea of a vector itself. One might be similarly inclined to abstract a thing that is the analog of a “vector” from the quantum mechanical formula Eq. (8.1)—and one can indeed. We remove the $\bra{\chi}$ from both sides of Eq. (8.1) and write the following equation (don’t get frightened—it’s just a notation and in a few minutes you will find out what the symbols mean): \begin{equation} \label{Eq:III:8:8} \ket{\phi}=\sum_i\ket{i}\braket{i}{\phi}. \end{equation} One thinks of the bracket $\braket{\chi}{\phi}$ as being divided into two pieces. The second piece $\ket{\phi}$ is often called a ket, and the first piece $\bra{\chi}$ is called a bra (put together, they make a “bra-ket”—a notation proposed by Dirac); the half-symbols $\ket{\phi}$ and $\bra{\chi}$ are also called state vectors. In any case, they are not numbers, and, in general, we want the results of our calculations to come out as numbers; so such “unfinished” quantities are only part-way steps in our calculations. It happens that until now we have written all our results in terms of numbers. How have we managed to avoid vectors? It is amusing to note that even in ordinary vector algebra we could make all equations involve only numbers. For instance, instead of a vector equation like \begin{equation*} \FLPF=m\FLPa, \end{equation*} we could always have written \begin{equation*} \FLPC\cdot\FLPF=\FLPC\cdot(m\FLPa). \end{equation*} We have then an equation between dot products that is true for any vector $\FLPC$. But if it is true for any $\FLPC$, it hardly makes sense at all to keep writing the $\FLPC$! Now look at Eq. (8.1). It is an equation that is true for any $\chi$. So to save writing, we should just leave out the $\chi$ and write Eq. (8.8) instead. 
It has the same information provided we understand that it should always be “finished” by “multiplying on the left by”—which simply means reinserting—some $\bra{\chi}$ on both sides. So Eq. (8.8) means exactly the same thing as Eq. (8.1)—no more, no less. When you want numbers, you put in the $\bra{\chi}$ you want. Maybe you have already wondered about the $\phi$ in Eq. (8.8). Since the equation is true for any $\phi$, why do we keep it? Indeed, Dirac suggests that the $\phi$ also can just as well be abstracted away, so that we have only \begin{equation} \label{Eq:III:8:9} |=\sum_i\ket{i}\bra{i}. \end{equation} And this is the great law of quantum mechanics! (There is no analog in vector analysis.) It says that if you put in any two states $\chi$ and $\phi$ on the left and right of both sides, you get back Eq. (8.1). It is not really very useful, but it’s a nice reminder that the equation is true for any two states.
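For a finite number of base states, Eq. (8.9) has a concrete matrix sketch: the outer products of an orthonormal basis add up to the identity. The two-state example below uses plain lists; in quantum mechanics the entries would in general be complex, with the bra the complex conjugate of the ket.

```python
# Minimal sketch of Eq. (8.9), sum over i of |i><i| = 1: the outer products
# of an orthonormal basis sum to the identity matrix. Two real base states.
basis = [[1, 0], [0, 1]]   # the base "kets" |1> and |2>

def outer(ket, bra):
    # |ket><bra| as a matrix of products
    return [[ki * bj for bj in bra] for ki in ket]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

resolution = [[0, 0], [0, 0]]
for e in basis:
    # for these real base states the bra equals the ket
    resolution = mat_add(resolution, outer(e, e))

print(resolution)   # [[1, 0], [0, 1]] -- the "factor 1" of Eq. (8.9)
```

This is why the bar in Eq. (8.9) can be inserted anywhere in a product of amplitudes without changing anything.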
8–2 Resolving state vectors

Let’s look at Eq. (8.8) again; we can think of it in the following way. Any state vector $\ket{\phi}$ can be represented as a linear combination with suitable coefficients of a set of base “vectors”—or, if you prefer, as a superposition of “unit vectors” in suitable proportions. To emphasize that the coefficients $\braket{i}{\phi}$ are just ordinary (complex) numbers, suppose we write \begin{equation} \braket{i}{\phi}=C_i.\notag \end{equation} Then Eq. (8.8) is the same as \begin{equation} \label{Eq:III:8:10} \ket{\phi}=\sum_i\ket{i}C_i. \end{equation} We can write a similar equation for any other state vector, say $\ket{\chi}$, with, of course, different coefficients—say $D_i$. Then we have \begin{equation} \label{Eq:III:8:11} \ket{\chi}=\sum_i\ket{i}D_i. \end{equation} The $D_i$ are just the amplitudes $\braket{i}{\chi}$. Suppose we had started by abstracting the $\phi$ from Eq. (8.1). We would have had \begin{equation} \label{Eq:III:8:12} \bra{\chi}=\sum_i\braket{\chi}{i}\bra{i}. \end{equation} Remembering that $\braket{\chi}{i}=\braket{i}{\chi}\cconj$, we can write this as \begin{equation} \label{Eq:III:8:13} \bra{\chi}=\sum_iD_i\cconj\,\bra{i}. \end{equation} Now the interesting thing is that we can just multiply Eq. (8.13) and Eq. (8.10) to get back $\braket{\chi}{\phi}$. When we do that, we have to be careful of the summation indices, because they are quite distinct in the two equations. Let’s first rewrite Eq. (8.13) as \begin{equation*} \bra{\chi}=\sum_jD_j\cconj\,\bra{j}, \end{equation*} which changes nothing. Then putting it together with Eq. (8.10), we have \begin{equation} \label{Eq:III:8:14} \braket{\chi}{\phi}=\sum_{ij}D_j\cconj\,\braket{j}{i}C_i. \end{equation} Remember, though, that $\braket{j}{i}=\delta_{ij}$, so that in the sum we have left only the terms with $j=i$.
We get \begin{equation} \label{Eq:III:8:15} \braket{\chi}{\phi}=\sum_{i}D_i\cconj\,C_i, \end{equation} where, of course, $D_i\cconj=$ $\braket{i}{\chi}\cconj=$ $\braket{\chi}{i}$, and $C_i=\braket{i}{\phi}$. Again we see the close analogy with the dot product \begin{equation*} \FLPB\cdot\FLPA=\sum_iB_iA_i. \end{equation*} The only difference is the complex conjugate on $D_i$. So Eq. (8.15) says that if the state vectors $\bra{\chi}$ and $\ket{\phi}$ are expanded in terms of the base vectors $\bra{i}$ or $\ket{i}$, the amplitude to go from $\phi$ to $\chi$ is given by the kind of dot product in Eq. (8.15). This equation is, of course, just Eq. (8.1) written with different symbols. So we have just gone in a circle to get used to the new symbols. We should perhaps emphasize again that while space vectors in three dimensions are described in terms of three orthogonal unit vectors, the base vectors $\ket{i}$ of the quantum mechanical states must range over the complete set applicable to any particular problem. Depending on the situation, two, or three, or five, or an infinite number of base states may be involved. We have also talked about what happens when particles go through an apparatus. If we start the particles out in a certain state $\phi$, then send them through an apparatus, and afterward make a measurement to see if they are in state $\chi$, the result is described by the amplitude \begin{equation} \label{Eq:III:8:16} \bracket{\chi}{A}{\phi}. \end{equation} Such a symbol doesn’t have a close analog in vector algebra. (It is closer to tensor algebra, but the analogy is not particularly useful.) We saw in Chapter 5, Eq. (5.32), that we could write (8.16) as \begin{equation} \label{Eq:III:8:17} \bracket{\chi}{A}{\phi}= \sum_{ij}\braket{\chi}{i}\bracket{i}{A}{j}\braket{j}{\phi}. \end{equation} This is just an example of the fundamental rule Eq. (8.9), used twice. 
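Equations (8.15) and (8.17) can be sketched numerically in the same way; in this Python fragment the amplitudes $C_i$, $D_i$ and the matrix elements $\bracket{i}{A}{j}$ are arbitrary made-up numbers chosen only to illustrate the sums.

```python
import numpy as np

# Eq. (8.15): <chi|phi> = sum_i D_i* C_i, with C_i = <i|phi>, D_i = <i|chi>.
C = np.array([0.8, 0.6j])        # amplitudes <i|phi>
D = np.array([0.6, 0.8])         # amplitudes <i|chi>
amp = np.sum(np.conj(D) * C)     # the "dot product" with a complex conjugate

# Eq. (8.17): <chi|A|phi> = sum_ij <chi|i><i|A|j><j|phi>, i.e. the matrix
# A_ij sandwiched between the two sets of amplitudes.
A = np.array([[1.0, 0.5j], [-0.5j, 2.0]])   # matrix elements <i|A|j>
sandwich = np.conj(D) @ A @ C
double_sum = sum(np.conj(D[i]) * A[i, j] * C[j]
                 for i in range(2) for j in range(2))
assert np.isclose(sandwich, double_sum)
```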
We also found that if another apparatus $B$ was added in series with $A$, then we could write \begin{equation} \label{Eq:III:8:18} \bracket{\chi}{BA}{\phi}= \sum_{ijk}\braket{\chi}{i}\bracket{i}{B}{j} \bracket{j}{A}{k}\braket{k}{\phi}. \end{equation} Again, this comes directly from Dirac’s method of writing Eq. (8.9)—remember that we can always place a bar ($|$), which is just like the factor $1$, between $B$ and $A$. Incidentally, we can think of Eq. (8.17) in another way. Suppose we think of the particle entering apparatus $A$ in the state $\phi$ and coming out of $A$ in the state $\psi$, (“psi”). In other words, we could ask ourselves this question: Can we find a $\psi$ such that the amplitude to get from $\psi$ to $\chi$ is always identically and everywhere the same as the amplitude $\bracket{\chi}{A}{\phi}$? The answer is yes. We want Eq. (8.17) to be replaced by \begin{equation} \label{Eq:III:8:19} \braket{\chi}{\psi}= \sum_i\braket{\chi}{i}\braket{i}{\psi}. \end{equation} We can clearly do this if \begin{equation} \label{Eq:III:8:20} \braket{i}{\psi}=\sum_j\bracket{i}{A}{j}\braket{j}{\phi}= \bracket{i}{A}{\phi}, \end{equation} which determines $\psi$. “But it doesn’t determine $\psi$,” you say; “it only determines $\braket{i}{\psi}$.” However, $\braket{i}{\psi}$ does determine $\psi$, because if you have all the coefficients that relate $\psi$ to the base states $i$, then $\psi$ is uniquely defined. In fact, we can play with our notation and write the last term of Eq. (8.20) as \begin{equation} \label{Eq:III:8:21} \braket{i}{\psi}=\sum_j\braket{i}{j}\bracket{j}{A}{\phi}. \end{equation} Then, since this equation is true for all $i$, we can write simply \begin{equation} \label{Eq:III:8:22} \ket{\psi}=\sum_j\ket{j}\bracket{j}{A}{\phi}. \end{equation} Then we can say: “The state $\psi$ is what we get if we start with $\phi$ and go through the apparatus $A$.” One final example of the tricks of the trade. We start again with Eq. (8.17). 
Since it is true for any $\chi$ and $\phi$, we can drop them both! We then get \begin{equation} \label{Eq:III:8:23} A=\sum_{ij}\ket{i}\bracket{i}{A}{j}\bra{j}. \end{equation} What does it mean? It means no more, no less, than what you get if you put back the $\phi$ and $\chi$. As it stands, it is an “open” equation and incomplete. If we multiply it “on the right” by $\ket{\phi}$, it becomes \begin{equation} \label{Eq:III:8:24} A\,\ket{\phi}=\sum_{ij}\ket{i}\bracket{i}{A}{j}\braket{j}{\phi}, \end{equation} which is just Eq. (8.22) all over again. In fact, we could have just dropped the $j$’s from that equation and written \begin{equation} \label{Eq:III:8:25} \ket{\psi}=A\,\ket{\phi}. \end{equation} The symbol $A$ is neither an amplitude, nor a vector; it is a new kind of thing called an operator. It is something which “operates on” a state to produce a new state—Eq. (8.25) says that $\ket{\psi}$ is what results if $A$ operates on $\ket{\phi}$. Again, it is still an open equation until it is completed with some bra like $\bra{\chi}$ to give \begin{equation} \label{Eq:III:8:26} \braket{\chi}{\psi}=\bracket{\chi}{A}{\phi}. \end{equation} The operator $A$ is, of course, described completely if we give the matrix of amplitudes $\bracket{i}{A}{j}$—also written $A_{ij}$—in terms of any set of base vectors. We have really added nothing new with all of this new mathematical notation. One reason for bringing it all up was to show you the way of writing pieces of equations, because in many books you will find the equations written in the incomplete forms, and there’s no reason for you to be paralyzed when you come across them. If you prefer, you can always add the missing pieces to make an equation between numbers that will look like something more familiar. Also, as you will see, the “bra” and “ket” notation is a very convenient one. For one thing, we can from now on identify a state by giving its state vector. 
When we want to refer to a state of definite momentum $\FLPp$ we can say: “the state $\ket{\FLPp}$.” Or we may speak of some arbitrary state $\ket{\psi}$. For consistency we will always use the ket, writing $\ket{\psi}$, to identify a state. (It is, of course, an arbitrary choice; we could equally well have chosen to use the bra, $\bra{\psi}$.) |
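In a representation, an operator is just its matrix of amplitudes, and apparatuses in series multiply as matrices. Here is a small Python sketch of Eqs. (8.18) and (8.25); the matrices $A$, $B$ and the states are arbitrary illustrative numbers, not physical apparatuses.

```python
import numpy as np

# Eq. (8.25): an operator A acts on the column of amplitudes <j|phi>
# as a matrix, giving the amplitudes <i|psi> of Eq. (8.22).
A = np.array([[1.0, 0.5j], [-0.5j, 2.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 1j]) / np.sqrt(2.0)

psi = A @ phi            # <i|psi> = sum_j <i|A|j><j|phi>, Eq. (8.20)

# Eq. (8.18): apparatuses B and A in series combine as the product BA.
chi = np.array([1.0, 0.0])
series = np.vdot(chi, B @ (A @ phi))     # go through A, then through B
product = np.vdot(chi, (B @ A) @ phi)    # one combined apparatus BA
assert np.isclose(series, product)
```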
|
3 | 8 | The Hamiltonian Matrix | 3 | What are the base states of the world? | We have discovered that any state in the world can be represented as a superposition—a linear combination with suitable coefficients—of base states. You may ask, first of all, what base states? Well, there are many different possibilities. You can, for instance, project a spin in the $z$-direction or in some other direction. There are many, many different representations, which are the analogs of the different coordinate systems one can use to represent ordinary vectors. Next, what coefficients? Well, that depends on the physical circumstances. Different sets of coefficients correspond to different physical conditions. The important thing to know about is the “space” in which you are working—in other words, what the base states mean physically. So the first thing you have to know about, in general, is what the base states are like. Then you can understand how to describe a situation in terms of these base states. We would like to look ahead a little and speak a bit about what the general quantum mechanical description of nature is going to be—in terms of the now current ideas of physics, anyway. First, one decides on a particular representation for the base states—different representations are always possible. For example, for a spin one-half particle we can use the plus and minus states with respect to the $z$-axis. But there’s nothing special about the $z$-axis—you can take any other axis you like. For consistency we’ll always pick the $z$-axis, however. Suppose we begin with a situation with one electron. In addition to the two possibilities for the spin (“up” and “down” along the $z$-direction), there is also the momentum of the electron. We pick a set of base states, each corresponding to one value of the momentum. What if the electron doesn’t have a definite momentum? That’s all right; we’re just saying what the base states are. 
If the electron hasn’t got a definite momentum, it has some amplitude to have one momentum and another amplitude to have another momentum, and so on. And if it is not necessarily spinning up, it has some amplitude to be spinning up going at this momentum, and some amplitude to be spinning down going at that momentum, and so on. The complete description of an electron, so far as we know, requires only that the base states be described by the momentum and the spin. So one acceptable set of base states $\ket{i}$ for a single electron refers to different values of the momentum and whether the spin is up or down. Different mixtures of amplitudes—that is, different combinations of the $C$’s—describe different circumstances. What any particular electron is doing is described by telling with what amplitude it has an up-spin or a down-spin and one momentum or another—for all possible momenta. So you can see what is involved in a complete quantum mechanical description of a single electron. What about systems with more than one electron? Then the base states get more complicated. Let’s suppose that we have two electrons. We have, first of all, four possible states with respect to spin: both electrons spinning up, the first one down and the second one up, the first one up and the second one down, or both down. Also we have to specify that the first electron has the momentum $p_1$, and the second electron, the momentum $p_2$. The base states for two electrons require the specification of two momenta and two spin characters. With seven electrons, we have to specify seven of each. If we have a proton and an electron, we have to specify the spin direction of the proton and its momentum, and the spin direction of the electron and its momentum. At least that’s approximately true. We do not really know what the correct representation is for the world. 
It is all very well to start out by supposing that if you specify the spin in the electron and its momentum, and likewise for a proton, you will have the base states; but what about the “guts” of the proton? Let’s look at it this way. In a hydrogen atom which has one proton and one electron, we have many different base states to describe—up and down spins of the proton and electron and the various possible momenta of the proton and electron. Then there are different combinations of amplitudes $C_i$ which together describe the character of the hydrogen atom in different states. But suppose we look at the whole hydrogen atom as a “particle.” If we didn’t know that the hydrogen atom was made out of a proton and an electron, we might have started out and said: “Oh, I know what the base states are—they correspond to a particular momentum of the hydrogen atom.” No, because the hydrogen atom has internal parts. It may, therefore, have various states of different internal energy, and describing the real nature requires more detail. The question is: Does a proton have internal parts? Do we have to describe a proton by giving all possible states of protons, and mesons, and strange particles? We don’t know. And even though we suppose that the electron is simple, so that all we have to tell about it is its momentum and its spin, maybe tomorrow we will discover that the electron also has inner gears and wheels. It would mean that our representation is incomplete, or wrong, or approximate—in the same way that a representation of the hydrogen atom which describes only its momentum would be incomplete, because it disregarded the fact that the hydrogen atom could have become excited inside. If an electron could become excited inside and turn into something else like, for instance, a muon, then it would be described not just by giving the states of the new particle, but presumably in terms of some more complicated internal wheels. 
The main problem in the study of the fundamental particles today is to discover what are the correct representations for the description of nature. At the present time, we guess that for the electron it is enough to specify its momentum and spin. We also guess that there is an idealized proton which has its $\pi$-mesons, and K-mesons, and so on, that all have to be specified. Several dozen particles—that’s crazy! The question of what is a fundamental particle and what is not a fundamental particle—a subject you hear so much about these days—is the question of what is the final representation going to look like in the ultimate quantum mechanical description of the world. Will the electron’s momentum still be the right thing with which to describe nature? Or even, should the whole question be put this way at all! This question must always come up in any scientific investigation. At any rate, we see a problem—how to find a representation. We don’t know the answer. We don’t even know whether we have the “right” problem, but if we do, we must first attempt to find out whether any particular particle is “fundamental” or not. In the nonrelativistic quantum mechanics—if the energies are not too high, so that you don’t disturb the inner workings of the strange particles and so forth—you can do a pretty good job without worrying about these details. You can just decide to specify the momenta and spins of the electrons and of the nuclei; then everything will be all right. In most chemical reactions and other low-energy happenings, nothing goes on in the nuclei; they don’t get excited. 
Furthermore, if a hydrogen atom is moving slowly and bumping quietly against other hydrogen atoms—never getting excited inside, or radiating, or anything complicated like that, but staying always in the ground state of energy for internal motion—you can use an approximation in which you talk about the hydrogen atom as one object, or particle, and not worry about the fact that it can do something inside. This will be a good approximation as long as the kinetic energy in any collision is well below $10$ electron volts—the energy required to excite the hydrogen atom to a different internal state. We will often be making an approximation in which we do not include the possibility of inner motion, thereby decreasing the number of details that we have to put into our base states. Of course, we then omit some phenomena which would appear (usually) at some higher energy, but by making such approximations we can simplify very much the analysis of physical problems. For example, we can discuss the collision of two hydrogen atoms at low energy—or any chemical process—without worrying about the fact that the atomic nuclei could be excited. To summarize, then, when we can neglect the effects of any internal excited states of a particle we can choose a base set which are the states of definite momentum and $z$-component of angular momentum. One problem then in describing nature is to find a suitable representation for the base states. But that’s only the beginning. We still want to be able to say what “happens.” If we know the “condition” of the world at one moment, we would like to know the condition at a later moment. So we also have to find the laws that determine how things change with time. We now address ourselves to this second part of the framework of quantum mechanics—how states change with time. |
|
3 | 8 | The Hamiltonian Matrix | 4 | How states change with time | We have already talked about how we can represent a situation in which we put something through an apparatus. Now one convenient, delightful “apparatus” to consider is merely a wait of a few minutes; that is, you prepare a state $\phi$, and then before you analyze it, you just let it sit. Perhaps you let it sit in some particular electric or magnetic field—it depends on the physical circumstances in the world. At any rate, whatever the conditions are, you let the object sit from time $t_1$ to time $t_2$. Suppose that it is let out of your first apparatus in the condition $\phi$ at $t_1$. And then it goes through an “apparatus,” but the “apparatus” consists of just delay until $t_2$. During the delay, various things could be going on—external forces applied or other shenanigans—so that something is happening. At the end of the delay, the amplitude to find the thing in some state $\chi$ is no longer exactly the same as it would have been without the delay. Since “waiting” is just a special case of an “apparatus,” we can describe what happens by giving an amplitude with the same form as Eq. (8.17). Because the operation of “waiting” is especially important, we’ll call it $U$ instead of $A$, and to specify the starting and finishing times $t_1$ and $t_2$, we’ll write $U(t_2,t_1)$. The amplitude we want is \begin{equation} \label{Eq:III:8:27} \bracket{\chi}{U(t_2,t_1)}{\phi}. \end{equation} Like any other such amplitude, it can be represented in some base system or other by writing it \begin{equation} \label{Eq:III:8:28} \sum_{ij}\braket{\chi}{i}\bracket{i}{U(t_2,t_1)}{j} \braket{j}{\phi}. \end{equation} Then $U$ is completely described by giving the whole set of amplitudes—the matrix \begin{equation} \label{Eq:III:8:29} \bracket{i}{U(t_2,t_1)}{j}. \end{equation} We can point out, incidentally, that the matrix $\bracket{i}{U(t_2,t_1)}{j}$ gives much more detail than may be needed. 
The high-class theoretical physicist working in high-energy physics considers problems of the following general nature (because it’s the way experiments are usually done). He starts with a couple of particles, like a proton and a proton, coming together from infinity. (In the lab, usually one particle is standing still, and the other comes from an accelerator that is practically at infinity on the atomic level.) The things go crash and out come, say, two K-mesons, six $\pi$-mesons, and two neutrons in certain directions with certain momenta. What’s the amplitude for this to happen? The mathematics looks like this: The $\phi$-state specifies the spins and momenta of the incoming particles. The $\chi$ would be the question about what comes out. For instance, with what amplitude do you get the six mesons going in such-and-such directions, and the two neutrons going off in these directions, with their spins so-and-so. In other words, $\chi$ would be specified by giving all the momenta, and spins, and so on of the final products. Then the job of the theorist is to calculate the amplitude (8.27). However, he is really only interested in the special case that $t_1$ is $-\infty$ and $t_2$ is $+\infty$. (There is no experimental evidence on the details of the process, only on what comes in and what goes out.) The limiting case of $U(t_2,t_1)$ as $t_1\to-\infty$ and $t_2\to+\infty$ is called $S$, and what he wants is \begin{equation*} \bracket{\chi}{S}{\phi}. \end{equation*} Or, using the form (8.28), he would calculate the matrix \begin{equation*} \bracket{i}{S}{j}, \end{equation*} which is called the $S$-matrix. So if you see a theoretical physicist pacing the floor and saying, “All I have to do is calculate the $S$-matrix,” you will know what he is worried about. How to analyze—how to specify the laws for—the $S$-matrix is an interesting question. 
In relativistic quantum mechanics for high energies, it is done one way, but in nonrelativistic quantum mechanics it can be done another way, which is very convenient. (This other way can also be done in the relativistic case, but then it is not so convenient.) It is to work out the $U$-matrix for a small interval of time—in other words for $t_2$ and $t_1$ close together. If we can find a sequence of such $U$’s for successive intervals of time we can watch how things go as a function of time. You can appreciate immediately that this way is not so good for relativity, because you don’t want to have to specify how everything looks “simultaneously” everywhere. But we won’t worry about that—we’re just going to worry about nonrelativistic mechanics. Suppose we think of the matrix $U$ for a delay from $t_1$ until $t_3$ which is greater than $t_2$. In other words, let’s take three successive times: $t_1$ less than $t_2$ less than $t_3$. Then we claim that the matrix that goes between $t_1$ and $t_3$ is the product in succession of what happens when you delay from $t_1$ until $t_2$ and then from $t_2$ until $t_3$. It’s just like the situation when we had two apparatuses $B$ and $A$ in series. We can then write, following the notation of Section 5–6, \begin{equation} \label{Eq:III:8:30} U(t_3,t_1)=U(t_3,t_2)\cdot U(t_2,t_1). \end{equation} In other words, we can analyze any time interval if we can analyze a sequence of short time intervals in between. We just multiply together all the pieces; that’s the way that quantum mechanics is analyzed nonrelativistically. Our problem, then, is to understand the matrix $U(t_2,t_1)$ for an infinitesimal time interval—for $t_2=t_1+\Delta t$. We ask ourselves this: If we have a state $\phi$ now, what does the state look like an infinitesimal time $\Delta t$ later? Let’s see how we write that out. 
Call the state at the time $t$, $\ket{\psi(t)}$ (we show the time dependence of $\psi$ to be perfectly clear that we mean the condition at the time $t$). Now we ask the question: What is the condition after the small interval of time $\Delta t$ later? The answer is \begin{equation} \label{Eq:III:8:31} \ket{\psi(t+\Delta t)}=U(t+\Delta t,t)\,\ket{\psi(t)}. \end{equation} This means the same as we meant by (8.25), namely, that the amplitude to find $\chi$ at the time $t+\Delta t$, is \begin{equation} \label{Eq:III:8:32} \braket{\chi}{\psi(t+\Delta t)}=\bracket{\chi}{U(t+\Delta t,t)}{\psi(t)}. \end{equation} Since we’re not yet too good at these abstract things, let’s project our amplitudes into a definite representation. If we multiply both sides of Eq. (8.31) by $\bra{i}$, we get \begin{equation} \label{Eq:III:8:33} \braket{i}{\psi(t+\Delta t)}=\bracket{i}{U(t+\Delta t,t)}{\psi(t)}. \end{equation} We can also resolve the $\ket{\psi(t)}$ into base states and write \begin{equation} \label{Eq:III:8:34} \braket{i}{\psi(t+\Delta t)}=\sum_j \bracket{i}{U(t+\Delta t,t)}{j}\braket{j}{\psi(t)}. \end{equation} We can understand Eq. (8.34) in the following way. If we let $C_i(t)=\braket{i}{\psi(t)}$ stand for the amplitude to be in the base state $i$ at the time $t$, then we can think of this amplitude (just a number, remember!) varying with time. Each $C_i$ becomes a function of $t$. And we also have some information on how the amplitudes $C_i$ vary with time. Each amplitude at $(t+\Delta t)$ is proportional to all of the other amplitudes at $t$ multiplied by a set of coefficients. Let’s call the $U$-matrix $U_{ij}$, by which we mean \begin{equation*} U_{ij}=\bracket{i}{U}{j}. \end{equation*} Then we can write Eq. (8.34) as \begin{equation} \label{Eq:III:8:35} C_i(t+\Delta t)=\sum_jU_{ij}(t+\Delta t,t)C_j(t). \end{equation} This, then, is how the dynamics of quantum mechanics is going to look. We don’t know much about the $U_{ij}$ yet, except for one thing. 
We know that if $\Delta t$ goes to zero, nothing can happen—we should get just the original state. So, $U_{ii}\to1$ and $U_{ij}\to0$, if $i\neq j$. In other words, $U_{ij}\to\delta_{ij}$ for $\Delta t\to0$. Also, we can suppose that for small $\Delta t$, each of the coefficients $U_{ij}$ should differ from $\delta_{ij}$ by amounts proportional to $\Delta t$; so we can write \begin{equation} \label{Eq:III:8:36} U_{ij}=\delta_{ij}+K_{ij}\,\Delta t. \end{equation} However, it is usual to take the factor $(-i/\hbar)$ out of the coefficients $K_{ij}$, for historical and other reasons; we prefer to write \begin{equation} \label{Eq:III:8:37} U_{ij}(t+\Delta t,t)=\delta_{ij}-\frac{i}{\hbar}\,H_{ij}(t)\,\Delta t. \end{equation} It is, of course, the same as Eq. (8.36) and, if you wish, just defines the coefficients $H_{ij}(t)$. The terms $H_{ij}$ are, apart from the factor $-i/\hbar$, just the derivatives with respect to $t_2$ of the coefficients $U_{ij}(t_2,t_1)$, evaluated at $t_2=t_1=t$. Using this form for $U$ in Eq. (8.35), we have \begin{equation} \label{Eq:III:8:38} C_i(t+\Delta t)=\sum_j\biggl[ \delta_{ij}-\frac{i}{\hbar}\,H_{ij}(t)\,\Delta t \biggr]C_j(t). \end{equation} Taking the sum over the $\delta_{ij}$ term, we get just $C_i(t)$, which we can put on the other side of the equation. Then dividing by $\Delta t$, we have what we recognize as a derivative \begin{equation} \frac{C_i(t+\Delta t)-C_i(t)}{\Delta t}= -\frac{i}{\hbar}\sum_jH_{ij}(t)C_j(t)\notag \end{equation} or \begin{equation} \label{Eq:III:8:39} i\hbar\,\ddt{C_i(t)}{t}=\sum_jH_{ij}(t)C_j(t). \end{equation} You remember that $C_i(t)$ is the amplitude $\braket{i}{\psi}$ to find the state $\psi$ in one of the base states $i$ (at the time $t$). So Eq. (8.39) tells us how each of the coefficients $\braket{i}{\psi}$ varies with time. But that is the same as saying that Eq. (8.39) tells us how the state $\psi$ varies with time, since we are describing $\psi$ in terms of the amplitudes $\braket{i}{\psi}$. 
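A small numerical check ties these pieces together. In the Python sketch below (NumPy, with $\hbar=1$ and an arbitrary constant Hermitian $2\times2$ Hamiltonian as the assumed example), the $U$'s for successive delays multiply as in Eq. (8.30), and repeating the infinitesimal step of Eq. (8.37) builds up the finite waiting matrix—which is the content of Eq. (8.39).

```python
import numpy as np

# hbar = 1 throughout; H is an arbitrary constant Hermitian matrix.
H = np.array([[2.0, -1.0], [-1.0, 2.0]])

def U(t2, t1):
    """Exact waiting matrix exp(-i H (t2 - t1)), built from the eigensystem of H."""
    e, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * e * (t2 - t1))) @ v.conj().T

# Eq. (8.30): delays in succession multiply as matrices.
assert np.allclose(U(1.9, 0.0), U(1.9, 0.7) @ U(0.7, 0.0))

# Eq. (8.37): for one short step, U = 1 - (i/hbar) H dt.  Stepping the
# amplitudes C_i repeatedly, as in Eq. (8.38), approximates the exact
# evolution generated by the differential equation (8.39).
dt, steps = 1.0e-4, 10000                  # evolve from t = 0 to t = 1
U_step = np.eye(2) - 1j * H * dt
C = np.array([1.0, 0.0], dtype=complex)    # start in the first base state
for _ in range(steps):
    C = U_step @ C
assert np.allclose(C, U(1.0, 0.0) @ np.array([1.0, 0.0]), atol=1e-3)
```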
The variation of $\psi$ in time is described in terms of the matrix $H_{ij}$, which has to include, of course, the things we are doing to the system to cause it to change. If we know the $H_{ij}$—which contains the physics of the situation and can, in general, depend on the time—we have a complete description of the behavior in time of the system. Equation (8.39) is then the quantum mechanical law for the dynamics of the world. (We should say that we will always take a set of base states which are fixed and do not vary with time. There are people who use base states that also vary. However, that’s like using a rotating coordinate system in mechanics, and we don’t want to get involved in such complications.) |
|
3 | 8 | The Hamiltonian Matrix | 5 | The Hamiltonian matrix | The idea, then, is that to describe the quantum mechanical world we need to pick a set of base states $i$ and to write the physical laws by giving the matrix of coefficients $H_{ij}$. Then we have everything—we can answer any question about what will happen. So we have to learn what the rules are for finding the $H$’s to go with any physical situation—what corresponds to a magnetic field, or an electric field, and so on. And that’s the hardest part. For instance, for the new strange particles, we have no idea what $H_{ij}$’s to use. In other words, no one knows the complete $H_{ij}$ for the whole world. (Part of the difficulty is that one can hardly hope to discover the $H_{ij}$ when no one even knows what the base states are!) We do have excellent approximations for nonrelativistic phenomena and for some other special cases. In particular, we have the forms that are needed for the motions of electrons in atoms—to describe chemistry. But we don’t know the full true $H$ for the whole universe. The coefficients $H_{ij}$ are called the Hamiltonian matrix or, for short, just the Hamiltonian. (How Hamilton, who worked in the 1830s, got his name on a quantum mechanical matrix is a tale of history.) It would be much better called the energy matrix, for reasons that will become apparent as we work with it. So the problem is: Know your Hamiltonian! The Hamiltonian has one property that can be deduced right away, namely, that \begin{equation} \label{Eq:III:8:40} H_{ij}\cconj=H_{ji}. \end{equation} This follows from the condition that the total probability that the system is in some state does not change. If you start with a particle—an object or the world—then you’ve still got it as time goes on. The total probability of finding it somewhere is \begin{equation*} \sum_i\abs{C_i(t)}^2, \end{equation*} which must not vary with time. If this is to be true for any starting condition $\phi$, then Eq. 
(8.40) must also be true. As our first example, we take a situation in which the physical circumstances are not changing with time; we mean the external physical conditions, so that $H$ is independent of time. Nobody is turning magnets on and off. We also pick a system for which only one base state is required for the description; it is an approximation we could make for a hydrogen atom at rest, or something similar. Equation (8.39) then says \begin{equation} \label{Eq:III:8:41} i\hbar\,\ddt{C_1}{t}=H_{11}C_1. \end{equation} Only one equation—that’s all! And if $H_{11}$ is constant, this differential equation is easily solved to give \begin{equation} \label{Eq:III:8:42} C_1=(\text{const})e^{-(i/\hbar)H_{11}t}. \end{equation} This is the time dependence of a state with a definite energy $E=H_{11}$. You see why $H_{ij}$ ought to be called the energy matrix. It is the generalization of the energy for more complex situations. Next, to understand a little more about what the equations mean, we look at a system which has two base states. Then Eq. (8.39) reads \begin{equation} \begin{aligned} i\hbar\,\ddt{C_1}{t}&=H_{11}C_1+H_{12}C_2,\\[1ex] i\hbar\,\ddt{C_2}{t}&=H_{21}C_1+H_{22}C_2. \end{aligned} \label{Eq:III:8:43} \end{equation} If the $H$’s are again independent of time, you can easily solve these equations. We leave you to try them for fun, and we’ll come back and do them later. Yes, you can solve the quantum mechanics without knowing the $H$’s, so long as they are independent of time. |
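The connection between Eq. (8.40) and the conservation of total probability can also be checked numerically. In this Python sketch (NumPy, $\hbar=1$) the two Hamiltonians are arbitrary examples: one satisfies $H_{ij}\cconj=H_{ji}$ and keeps $\sum_i\abs{C_i}^2$ fixed; the other violates it, and the total probability drifts.

```python
import numpy as np

def total_probability_after(H, t=1.0):
    """Evolve C(0) with exp(-i H t) and return sum_i |C_i(t)|^2 (hbar = 1)."""
    e, v = np.linalg.eig(H)                  # H need not be Hermitian here
    U = v @ np.diag(np.exp(-1j * e * t)) @ np.linalg.inv(v)
    C = U @ np.array([0.6, 0.8])             # unit total probability at t = 0
    return np.sum(np.abs(C) ** 2)

H_good = np.array([[1.0, 2.0 + 1.0j], [2.0 - 1.0j, -1.0]])  # obeys Eq. (8.40)
H_bad = np.array([[1.0, 2.0], [0.0, -1.0 + 0.5j]])          # violates Eq. (8.40)

assert np.isclose(total_probability_after(H_good), 1.0)
assert not np.isclose(total_probability_after(H_bad), 1.0)
```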
|
3 | 8 | The Hamiltonian Matrix | 6 | The ammonia molecule | We want now to show you how the dynamical equation of quantum mechanics can be used to describe a particular physical circumstance. We have picked an interesting but simple example in which, by making some reasonable guesses about the Hamiltonian, we can work out some important—and even practical—results. We are going to take a situation describable by two states: the ammonia molecule. The ammonia molecule has one nitrogen atom and three hydrogen atoms located in a plane below the nitrogen so that the molecule has the form of a pyramid, as drawn in Fig. 8–1(a). Now this molecule, like any other, has an infinite number of states. It can spin around any possible axis; it can be moving in any direction; it can be vibrating inside, and so on, and so on. It is, therefore, not a two-state system at all. But we want to make an approximation that all other states remain fixed, because they don’t enter into what we are concerned with at the moment. We will consider only that the molecule is spinning around its axis of symmetry (as shown in the figure), that it has zero translational momentum, and that it is vibrating as little as possible. That specifies all conditions except one: there are still the two possible positions for the nitrogen atom—the nitrogen may be on one side of the plane of hydrogen atoms or on the other, as shown in Fig. 8–1(a) and (b). So we will discuss the molecule as though it were a two-state system. We mean that there are only two states we are going to really worry about, all other things being assumed to stay put. You see, even if we know that it is spinning with a certain angular momentum around the axis and that it is moving with a certain momentum and vibrating in a definite way, there are still two possible states. We will say that the molecule is in the state $\ketsl{\slOne}$ when the nitrogen is “up,” as in Fig. 
8–1(a), and is in the state $\ketsl{\slTwo}$ when the nitrogen is “down,” as in (b). The states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ will be taken as the set of base states for our analysis of the behavior of the ammonia molecule. At any moment, the actual state $\ket{\psi}$ of the molecule can be represented by giving $C_1=\braket{\slOne}{\psi}$, the amplitude to be in state $\ketsl{\slOne}$, and $C_2=\braket{\slTwo}{\psi}$, the amplitude to be in state $\ketsl{\slTwo}$. Then, using Eq. (8.8) we can write the state vector $\ket{\psi}$ as \begin{equation} \ket{\psi} = \ketsl{\slOne}\braket{\slOne}{\psi}+ \ketsl{\slTwo}\braket{\slTwo}{\psi}\notag \end{equation} or \begin{equation} \label{Eq:III:8:44} \ket{\psi} =\ketsl{\slOne}C_1+\ketsl{\slTwo}C_2. \end{equation} Now the interesting thing is that if the molecule is known to be in some state at some instant, it will not be in the same state a little while later. The two $C$-coefficients will be changing with time according to the equations (8.43)—which hold for any two-state system. Suppose, for example, that you had made some observation—or had made some selection of the molecules—so that you know that the molecule is initially in the state $\ketsl{\slOne}$. At some later time, there is some chance that it will be found in state $\ketsl{\slTwo}$. To find out what this chance is, we have to solve the differential equation which tells us how the amplitudes change with time. The only trouble is that we don’t know what to use for the coefficients $H_{ij}$ in Eq. (8.43). There are some things we can say, however. Suppose that once the molecule was in the state $\ketsl{\slOne}$ there was no chance that it could ever get into $\ketsl{\slTwo}$, and vice versa. Then $H_{12}$ and $H_{21}$ would both be zero, and Eq. (8.43) would read \begin{equation*} i\hbar\,\ddt{C_1}{t}=H_{11}C_1,\quad i\hbar\,\ddt{C_2}{t}=H_{22}C_2. 
\end{equation*} We can easily solve these two equations; we get \begin{equation} \label{Eq:III:8:45} C_1=(\text{const})e^{-(i/\hbar)H_{11}t},\quad C_2=(\text{const})e^{-(i/\hbar)H_{22}t}. \end{equation} These are just the amplitudes for stationary states with the energies $E_1=H_{11}$ and $E_2=H_{22}$. We note, however, that for the ammonia molecule the two states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ have a definite symmetry. If nature is at all reasonable, the matrix elements $H_{11}$ and $H_{22}$ must be equal. We’ll call them both $E_0$, because they correspond to the energy the states would have if $H_{12}$ and $H_{21}$ were zero. But Eqs. (8.45) do not tell us what ammonia really does. It turns out that it is possible for the nitrogen to push its way through the three hydrogens and flip to the other side. It is quite difficult; to get half-way through requires a lot of energy. How can it get through if it hasn’t got enough energy? There is some amplitude that it will penetrate the energy barrier. It is possible in quantum mechanics to sneak quickly across a region which is illegal energetically. There is, therefore, some small amplitude that a molecule which starts in $\ketsl{\slOne}$ will get to the state $\ketsl{\slTwo}$. The coefficients $H_{12}$ and $H_{21}$ are not really zero. Again, by symmetry, they should both be the same—at least in magnitude. In fact, we already know that, in general, $H_{ij}$ must be equal to the complex conjugate of $H_{ji}$, so they can differ only by a phase. It turns out, as you will see, that there is no loss of generality if we take them equal to each other. For later convenience we set them equal to a negative number; we take $H_{12}=H_{21}=-A$. We then have the following pair of equations: \begin{align} \label{Eq:III:8:46} i\hbar\,\ddt{C_1}{t}&=E_0C_1-AC_2,\\[1ex] \label{Eq:III:8:47} i\hbar\,\ddt{C_2}{t}&=E_0C_2-AC_1. \end{align} These equations are simple enough and can be solved in any number of ways. 
One convenient way is the following. Taking the sum of the two, we get \begin{equation} i\hbar\,\ddt{}{t}\,(C_1+C_2)=(E_0-A)(C_1+C_2),\notag \end{equation} whose solution is \begin{equation} \label{Eq:III:8:48} C_1+C_2=ae^{-(i/\hbar)(E_0-A)t}. \end{equation} Then, taking the difference of (8.46) and (8.47), we find that \begin{equation} i\hbar\,\ddt{}{t}\,(C_1-C_2)=(E_0+A)(C_1-C_2),\notag \end{equation} which gives \begin{equation} \label{Eq:III:8:49} C_1-C_2=be^{-(i/\hbar)(E_0+A)t}. \end{equation} We have called the two integration constants $a$ and $b$; they are, of course, to be chosen to give the appropriate starting condition for any particular physical problem. Now, by adding and subtracting (8.48) and (8.49), we get $C_1$ and $C_2$: \begin{align} \label{Eq:III:8:50} C_1(t)&=\frac{a}{2}\,e^{-(i/\hbar)(E_0-A)t}+ \frac{b}{2}\,e^{-(i/\hbar)(E_0+A)t},\\[1ex] \label{Eq:III:8:51} C_2(t)&=\frac{a}{2}\,e^{-(i/\hbar)(E_0-A)t}- \frac{b}{2}\,e^{-(i/\hbar)(E_0+A)t}. \end{align} They are the same except for the sign of the second term. We have the solutions; now what do they mean? (The trouble with quantum mechanics is not only in solving the equations but in understanding what the solutions mean!) First, notice that if $b=0$, both terms have the same frequency $\omega=(E_0-A)/\hbar$. If everything changes at one frequency, it means that the system is in a state of definite energy—here, the energy $(E_0-A)$. So there is a stationary state of this energy in which the two amplitudes $C_1$ and $C_2$ are equal. We get the result that the ammonia molecule has a definite energy $(E_0-A)$ if there are equal amplitudes for the nitrogen atom to be “up” and to be “down.” There is another stationary state possible if $a=0$; both amplitudes then have the frequency $(E_0+A)/\hbar$. So there is another state with the definite energy $(E_0+A)$ if the two amplitudes are equal but with the opposite sign; $C_2=-C_1$. These are the only two states of definite energy. 
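The algebra above is easy to verify numerically. The sketch below (not part of the lecture: it uses units with $\hbar=1$ and arbitrary values of $E_0$, $A$, $a$, $b$ chosen only for the check) confirms that the closed-form amplitudes of Eqs. (8.50) and (8.51) satisfy the coupled equations (8.46) and (8.47):

```python
# Sketch (illustrative values, hbar = 1): check that the closed-form
# amplitudes of Eqs. (8.50)-(8.51) satisfy the coupled equations
# (8.46)-(8.47).
import cmath

HBAR = 1.0
E0, A = 2.0, 0.5          # arbitrary energies for the check
a, b = 0.6, 0.8           # arbitrary integration constants

def C1(t):
    return (a/2)*cmath.exp(-1j*(E0 - A)*t/HBAR) + (b/2)*cmath.exp(-1j*(E0 + A)*t/HBAR)

def C2(t):
    return (a/2)*cmath.exp(-1j*(E0 - A)*t/HBAR) - (b/2)*cmath.exp(-1j*(E0 + A)*t/HBAR)

def residual(t, h=1e-6):
    """How badly i*hbar*dC/dt = E0*C - A*C(other) is violated at time t."""
    dC1 = (C1(t + h) - C1(t - h)) / (2*h)   # centered finite difference
    dC2 = (C2(t + h) - C2(t - h)) / (2*h)
    r1 = 1j*HBAR*dC1 - (E0*C1(t) - A*C2(t))
    r2 = 1j*HBAR*dC2 - (E0*C2(t) - A*C1(t))
    return max(abs(r1), abs(r2))

print(residual(0.7))   # tiny: only finite-difference error remains
```

The residual is at the level of the finite-difference truncation error, as it should be for an exact solution.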
We will discuss the states of the ammonia molecule in more detail in the next chapter; we will mention here only a couple of things. We conclude that because there is some chance that the nitrogen atom can flip from one position to the other, the energy of the molecule is not just $E_0$, as we would have expected, but that there are two energy levels $(E_0+A)$ and $(E_0-A)$. Every one of the possible states of the molecule, whatever energy it has, is “split” into two levels. We say every one of the states because, you remember, we picked out one particular state of rotation, and internal energy, and so on. For each possible condition of that kind there is a doublet of energy levels because of the flip-flop of the molecule. Let’s now ask the following question about an ammonia molecule. Suppose that at $t=0$, we know that a molecule is in the state $\ketsl{\slOne}$ or, in other words, that $C_1(0)=1$ and $C_2(0)=0$. What is the probability that the molecule will be found in the state $\ketsl{\slTwo}$ at the time $t$, or will still be found in state $\ketsl{\slOne}$ at the time $t$? Our starting condition tells us what $a$ and $b$ are in Eqs. (8.50) and (8.51). Letting $t=0$, we have that \begin{equation*} C_1(0)=\frac{a+b}{2}=1,\quad C_2(0)=\frac{a-b}{2}=0. \end{equation*} Clearly, $a=b=1$. Putting these values into the formulas for $C_1(t)$ and $C_2(t)$ and rearranging some terms, we have \begin{align*} C_1(t)&=e^{-(i/\hbar)E_0t}\biggl( \frac{e^{(i/\hbar)At}+e^{-(i/\hbar)At}}{2} \biggr),\\[1ex] C_2(t)&=e^{-(i/\hbar)E_0t}\biggl( \frac{e^{(i/\hbar)At}-e^{-(i/\hbar)At}}{2} \biggr). \end{align*} We can rewrite these as \begin{align} \label{Eq:III:8:52} C_1(t)&=\phantom{i}e^{-(i/\hbar)E_0t}\cos\frac{At}{\hbar},\\[1ex] \label{Eq:III:8:53} C_2(t)&=ie^{-(i/\hbar)E_0t}\sin\frac{At}{\hbar}. \end{align} The two amplitudes have a magnitude that varies harmonically with time. 
The probability that the molecule is found in state $\ketsl{\slTwo}$ at the time $t$ is the absolute square of $C_2(t)$: \begin{equation} \label{Eq:III:8:54} \abs{C_2(t)}^2=\sin^2\frac{At}{\hbar}. \end{equation} The probability starts at zero (as it should), rises to one, and then oscillates back and forth between zero and one, as shown in the curve marked $P_2$ of Fig. 8–2. The probability of being in the $\ketsl{\slOne}$ state does not, of course, stay at one. It “dumps” into the second state until the probability of finding the molecule in the first state is zero, as shown by the curve $P_1$ of Fig. 8–2. The probability sloshes back and forth between the two. A long time ago we saw what happens when we have two equal pendulums with a slight coupling. (See Chapter 49, Vol. I.) When we lift one back and let go, it swings, but then gradually the other one starts to swing. Pretty soon the second pendulum has picked up all the energy. Then, the process reverses, and pendulum number one picks up the energy. It is exactly the same kind of a thing. The speed at which the energy is swapped back and forth depends on the coupling between the two pendulums—the rate at which the “oscillation” is able to leak across. Also, you remember, with the two pendulums there are two special motions—each with a definite frequency—which we call the fundamental modes. If we pull both pendulums out together, they swing together at one frequency. On the other hand, if we pull one out one way and the other out the other way, there is another stationary mode also at a definite frequency. Well, here we have a similar situation—the ammonia molecule is mathematically like the pair of pendulums. These are the two frequencies—$(E_0-A)/\hbar$ and $(E_0+A)/\hbar$—for when they are oscillating together, or oscillating opposite. The pendulum analogy is not much deeper than the principle that the same equations have the same solutions. 
The linear equations for the amplitudes (8.39) are very much like the linear equations of harmonic oscillators. (In fact, this is the reason behind the success of our classical theory of the index of refraction, in which we replaced the quantum mechanical atom by a harmonic oscillator, even though, classically, this is not a reasonable view of electrons circulating about a nucleus.) If you pull the nitrogen to one side, then you get a superposition of these two frequencies, and you get a kind of beat note, because the system is not in one or the other states of definite frequency. The splitting of the energy levels of the ammonia molecule is, however, strictly a quantum mechanical effect. The splitting of the energy levels of the ammonia molecule has important practical applications which we will describe in the next chapter. At long last we have an example of a practical physical problem that you can understand with the quantum mechanics! |
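The sloshing of probability between the two configurations is just Eqs. (8.52)–(8.54) at work. A small sketch (with $\hbar=1$ and an arbitrary flip amplitude $A$, both illustrative choices rather than physical constants) shows that the two probabilities always add up to one and that the molecule is certain to be found in state $\ketsl{\slTwo}$ after a quarter period:

```python
# Sketch (illustrative values, hbar = 1): the probabilities of
# Eqs. (8.52)-(8.54) slosh back and forth but always sum to one.
import math

A, HBAR = 0.5, 1.0

def P1(t):
    return math.cos(A*t/HBAR)**2    # probability of still being in |1>

def P2(t):
    return math.sin(A*t/HBAR)**2    # probability of having flipped to |2>

assert P2(0.0) == 0.0                          # starts in |1>, as it should
assert abs(P1(1.3) + P2(1.3) - 1.0) < 1e-12    # total probability is conserved
t_flip = math.pi*HBAR/(2*A)                    # first time P2 reaches one
print(P2(t_flip))
```

The period of the sloshing, $\pi\hbar/A$, gets longer as the coupling $A$ gets weaker, just as weakly coupled pendulums trade their energy slowly.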
|
3 | 9 | The Ammonia Maser | 1 | The states of an ammonia molecule | In this chapter we are going to discuss the application of quantum mechanics to a practical device, the ammonia maser. You may wonder why we stop our formal development of quantum mechanics to do a special problem, but you will find that many of the features of this special problem are quite common in the general theory of quantum mechanics, and you will learn a great deal by considering this one problem in detail. The ammonia maser is a device for generating electromagnetic waves, whose operation is based on the properties of the ammonia molecule which we discussed briefly in the last chapter. We begin by summarizing what we found there. The ammonia molecule has many states, but we are considering it as a two-state system, thinking now only about what happens when the molecule is in any specific state of rotation or translation. A physical model for the two states can be visualized as follows. If the ammonia molecule is considered to be rotating about an axis passing through the nitrogen atom and perpendicular to the plane of the hydrogen atoms, as shown in Fig. 9–1, there are still two possible conditions—the nitrogen may be on one side of the plane of hydrogen atoms or on the other. We call these two states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$. They are taken as a set of base states for our analysis of the behavior of the ammonia molecule. In a system with two base states, any state $\ket{\psi}$ of the system can always be described as a linear combination of the two base states; that is, there is a certain amplitude $C_1$ to be in one base state and an amplitude $C_2$ to be in the other. 
We can write its state vector as \begin{equation} \label{Eq:III:9:1} \ket{\psi}=\ketsl{\slOne}C_1+ \ketsl{\slTwo}C_2, \end{equation} where \begin{equation} C_1=\braket{\slOne}{\psi}\quad \text{and}\quad C_2=\braket{\slTwo}{\psi}.\notag \end{equation} These two amplitudes change with time according to the Hamiltonian equations, Eq. (8.43). Making use of the symmetry of the two states of the ammonia molecule, we set $H_{11}=H_{22}=E_0$, and $H_{12}=H_{21}=-A$, and get the solution [see Eqs. (8.50) and (8.51)] \begin{align} \label{Eq:III:9:2} C_1&=\frac{a}{2}\,e^{-(i/\hbar)(E_0-A)t}+ \frac{b}{2}\,e^{-(i/\hbar)(E_0+A)t},\\[1ex] \label{Eq:III:9:3} C_2&=\frac{a}{2}\,e^{-(i/\hbar)(E_0-A)t}- \frac{b}{2}\,e^{-(i/\hbar)(E_0+A)t}. \end{align} We want now to take a closer look at these general solutions. Suppose that the molecule was initially put into a state $\ket{\psi_{\slII}}$ for which the coefficient $b$ was equal to zero. Then at $t=0$ the amplitudes to be in the states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ are identical, and they stay that way for all time. Their phases both vary with time in the same way—with the frequency $(E_0-A)/\hbar$. Similarly, if we were to put the molecule into a state $\ket{\psi_{\slI}}$ for which $a=0$, the amplitude $C_2$ is the negative of $C_1$, and this relationship would stay that way forever. Both amplitudes would now vary with time with the frequency $(E_0+A)/\hbar$. These are the only two possibilities of states for which the relation between $C_1$ and $C_2$ is independent of time. We have found two special solutions in which the two amplitudes do not vary in magnitude and, furthermore, have phases which vary at the same frequencies. These are stationary states as we defined them in Section 7–1, which means that they are states of definite energy. The state $\ket{\psi_{\slII}}$ has the energy $E_{\slII}=E_0-A$, and the state $\ket{\psi_{\slI}}$ has the energy $E_{\slI}=E_0+A$. 
They are the only two stationary states that exist, so we find that the molecule has two energy levels, with the energy difference $2A$. (We mean, of course, two energy levels for the assumed state of rotation and vibration which we referred to in our initial assumptions.)1 If we hadn’t allowed for the possibility of the nitrogen flipping back and forth, we would have taken $A$ equal to zero and the two energy levels would be on top of each other at energy $E_0$. The actual levels are not this way; their average energy is $E_0$, but they are split apart by $\pm A$, giving a separation of $2A$ between the energies of the two states. Since $A$ is, in fact, very small, the difference in energy is also very small. In order to excite an electron inside an atom, the energies involved are relatively very high—requiring photons in the optical or ultraviolet range. To excite the vibrations of the molecules involves photons in the infrared. If you talk about exciting rotations, the energy differences of the states correspond to photons in the far infrared. But the energy difference $2A$ is lower than any of those and is, in fact, below the infrared and well into the microwave region. Experimentally, it has been found that there is a pair of energy levels with a separation of $10^{-4}$ electron volt—corresponding to a frequency $24{,}000$ megacycles. Evidently this means that $2A=hf$, with $f=24{,}000$ megacycles (corresponding to a wavelength of $1\tfrac{1}{4}$ cm). So here we have a molecule that has a transition which does not emit light in the ordinary sense, but emits microwaves. For the work that follows we need to describe these two states of definite energy a little bit better. Suppose we were to construct an amplitude $C_{\slII}$ by taking the sum of the two numbers $C_1$ and $C_2$: \begin{equation} \label{Eq:III:9:4} C_{\slII}=C_1+C_2=\braket{\slOne}{\phi}+\braket{\slTwo}{\phi}. \end{equation} What would that mean? 
Well, this is just the amplitude to find the state $\ket{\phi}$ in a new state $\ketsl{\slII}$ in which the amplitudes of the original base states are equal. That is, writing $C_{\slII}=\braket{\slII}{\phi}$, we can abstract the $\ket{\phi}$ away from Eq. (9.4)—because it is true for any $\phi$—and get \begin{equation} \bra{\slII}=\bra{\slOne}+\bra{\slTwo},\notag \end{equation} which means the same as \begin{equation} \label{Eq:III:9:5} \ketsl{\slII}=\ketsl{\slOne}+\ketsl{\slTwo}. \end{equation} The amplitude for the state $\ketsl{\slII}$ to be in the state $\ketsl{\slOne}$ is \begin{equation*} \braketsl{\slOne}{\slII}=\braketsl{\slOne}{\slOne}+ \braketsl{\slOne}{\slTwo}, \end{equation*} which is, of course, just $1$, since $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ are base states. The amplitude for the state $\ketsl{\slII}$ to be in the state $\ketsl{\slTwo}$ is also $1$, so the state $\ketsl{\slII}$ is one which has equal amplitudes to be in the two base states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$. We are, however, in a bit of trouble. The state $\ketsl{\slII}$ has a total probability greater than one of being in some base state or other. That simply means, however, that the state vector is not properly “normalized.” We can take care of that by remembering that we should have $\braketsl{\slII}{\slII}=1$, which must be so for any state. Using the general relation that \begin{equation*} \braket{\chi}{\phi}=\sum_i\braket{\chi}{i}\braket{i}{\phi}, \end{equation*} letting both $\phi$ and $\chi$ be the state $\slII$, and taking the sum over the base states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$, we get that \begin{equation*} \braketsl{\slII}{\slII}=\braketsl{\slII}{\slOne}\braketsl{\slOne}{\slII}+ \braketsl{\slII}{\slTwo}\braketsl{\slTwo}{\slII}. \end{equation*} This will be equal to one as it should if we change our definition of $C_{\slII}$—in Eq. (9.4)—to read \begin{equation*} C_{\slII}=\frac{1}{\sqrt{2}}\,[C_1+C_2]. 
\end{equation*} In the same way we can construct an amplitude \begin{equation} C_{\slI}=\frac{1}{\sqrt{2}}\,[C_1-C_2],\notag \end{equation} or \begin{equation} \label{Eq:III:9:6} C_{\slI}=\frac{1}{\sqrt{2}}\,[\braket{\slOne}{\phi}-\braket{\slTwo}{\phi}]. \end{equation} This amplitude is the projection of the state $\ket{\phi}$ into a new state $\ketsl{\slI}$ which has opposite amplitudes to be in the states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$. Namely, Eq. (9.6) means the same as \begin{equation} \bra{\slI}=\frac{1}{\sqrt{2}}\,[\bra{\slOne}-\bra{\slTwo}],\notag \end{equation} or \begin{equation} \label{Eq:III:9:7} \ketsl{\slI}=\frac{1}{\sqrt{2}}\,[\ketsl{\slOne}-\ketsl{\slTwo}], \end{equation} from which it follows that \begin{equation} \braketsl{\slOne}{\slI}=\frac{1}{\sqrt{2}}=-\braketsl{\slTwo}{\slI}.\notag \end{equation} Now the reason we have done all this is that the states $\ketsl{\slI}$ and $\ketsl{\slII}$ can be taken as a new set of base states which are especially convenient for describing the stationary states of the ammonia molecule. You remember that the requirement for a set of base states is that \begin{equation*} \braket{i}{j}=\delta_{ij}. \end{equation*} We have already fixed things so that \begin{equation*} \braketsl{\slI}{\slI}=\braketsl{\slII}{\slII}=1. \end{equation*} You can easily show from Eqs. (9.5) and (9.7) that \begin{equation*} \braketsl{\slI}{\slII}=\braketsl{\slII}{\slI}=0. \end{equation*} The amplitudes $C_{\slI}=\braket{\slI}{\phi}$ and $C_{\slII}=\braket{\slII}{\phi}$ for any state $\phi$ to be in our new base states $\ketsl{\slI}$ and $\ketsl{\slII}$ must also satisfy a Hamiltonian equation with the form of Eq. (8.39). In fact, if we just subtract the two equations (9.2) and (9.3) and differentiate with respect to $t$, we see that \begin{equation} \label{Eq:III:9:8} i\hbar\,\ddt{C_{\slI}}{t}=(E_0+A)C_{\slI}=E_{\slI}C_{\slI}. \end{equation} And taking the sum of Eqs. 
(9.2) and (9.3), we see that \begin{equation} \label{Eq:III:9:9} i\hbar\,\ddt{C_{\slII}}{t}=(E_0-A)C_{\slII}=E_{\slII}C_{\slII}. \end{equation} Using $\ketsl{\slI}$ and $\ketsl{\slII}$ for base states, the Hamiltonian matrix has the simple form \begin{alignat*}{2} H_{\slI,\slI}&=E_{\slI},&\quad H_{\slI,\slII}&=0,\\[1ex] H_{\slII,\slI}&=0,&\quad H_{\slII,\slII}&=E_{\slII}. \end{alignat*} Note that each of the Eqs. (9.8) and (9.9) look just like what we had in Section 8–6 for the equation of a one-state system. They have a simple exponential time dependence corresponding to a single energy. As time goes on, the amplitudes to be in each state act independently. The two stationary states $\ket{\psi_{\slI}}$ and $\ket{\psi_{\slII}}$ we found above are, of course, solutions of Eqs. (9.8) and (9.9). The state $\ket{\psi_{\slI}}$ (for which $C_1=-C_2$) has \begin{equation} \label{Eq:III:9:10} C_{\slI}=e^{-(i/\hbar)(E_0+A)t},\quad C_{\slII}=0. \end{equation} And the state $\ket{\psi_{\slII}}$ (for which $C_1=C_2$) has \begin{equation} \label{Eq:III:9:11} C_{\slI}=0,\quad C_{\slII}=e^{-(i/\hbar)(E_0-A)t}. \end{equation} Remember that the amplitudes in Eq. (9.10) are \begin{equation*} C_{\slI}=\braket{\slI}{\psi_{\slI}},\quad \text{and}\quad C_{\slII}=\braket{\slII}{\psi_{\slI}}; \end{equation*} so Eq. (9.10) means the same thing as \begin{equation*} \ket{\psi_{\slI}}=\ketsl{\slI}\,e^{-(i/\hbar)(E_0+A)t}. \end{equation*} That is, the state vector of the stationary state $\ket{\psi_{\slI}}$ is the same as the state vector of the base state $\ketsl{\slI}$ except for the exponential factor appropriate to the energy of the state. In fact at $t=0$ \begin{equation*} \ket{\psi_{\slI}}=\ketsl{\slI}; \end{equation*} the state $\ketsl{\slI}$ has the same physical configuration as the stationary state of energy $E_0+A$. In the same way, we have for the second stationary state that \begin{equation*} \ket{\psi_{\slII}}=\ketsl{\slII}\,e^{-(i/\hbar)(E_0-A)t}. 
\end{equation*} The state $\ketsl{\slII}$ is just the stationary state of energy $E_0-A$ at $t=0$. Thus our two new base states $\ketsl{\slI}$ and $\ketsl{\slII}$ have physically the form of the states of definite energy, with the exponential time factor taken out so that they can be time-independent base states. (In what follows we will find it convenient not to have to distinguish always between the stationary states $\ket{\psi_{\slI}}$ and $\ket{\psi_{\slII}}$ and their base states $\ketsl{\slI}$ and $\ketsl{\slII}$, since they differ only by the obvious time factors.) In summary, the state vectors $\ketsl{\slI}$ and $\ketsl{\slII}$ are a pair of base vectors which are appropriate for describing the definite energy states of the ammonia molecule. They are related to our original base vectors by \begin{equation} \label{Eq:III:9:12} \ketsl{\slI}=\frac{1}{\sqrt{2}}\,[\ketsl{\slOne}-\ketsl{\slTwo}],\quad \ketsl{\slII}=\frac{1}{\sqrt{2}}\,[\ketsl{\slOne}+\ketsl{\slTwo}]. \end{equation} The amplitudes to be in $\ketsl{\slI}$ and $\ketsl{\slII}$ are related to $C_1$ and $C_2$ by \begin{equation} \label{Eq:III:9:13} C_{\slI}=\frac{1}{\sqrt{2}}\,[C_1-C_2],\quad C_{\slII}=\frac{1}{\sqrt{2}}\,[C_1+C_2]. \end{equation} Any state at all can be represented by a linear combination of $\ketsl{\slOne}$ and $\ketsl{\slTwo}$—with the coefficients $C_1$ and $C_2$—or by a linear combination of the definite energy base states $\ketsl{\slI}$ and $\ketsl{\slII}$—with the coefficients $C_{\slI}$ and $C_{\slII}$. Thus, \begin{equation*} \ket{\phi} =\ketsl{\slOne}C_1+\ketsl{\slTwo}C_2 \end{equation*} or \begin{equation*} \;\ket{\phi} =\ketsl{\slI}C_{\slI}\,+\ketsl{\slII}C_{\slII}. \end{equation*} The second form gives us the amplitudes for finding the state $\ket{\phi}$ in a state with the energy $E_{\slI}=E_0+A$ or in a state with the energy $E_{\slII}=E_0-A$. |
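The change of base states can be checked with a few lines of arithmetic. In the $\{\ketsl{\slOne},\ketsl{\slTwo}\}$ basis the Hamiltonian of the last chapter is the matrix $[[E_0,-A],[-A,E_0]]$; the sketch below (with arbitrary illustrative values of $E_0$ and $A$) verifies that the vectors of Eq. (9.12) are orthonormal and are eigenvectors with the energies $E_0\pm A$:

```python
# Sketch (illustrative values): the base states of Eq. (9.12) are
# orthonormal and diagonalize the Hamiltonian [[E0, -A], [-A, E0]].
import math

E0, A = 2.0, 0.5
H = [[E0, -A], [-A, E0]]
s = 1/math.sqrt(2)
ket_I  = [s, -s]    # |I>  = (|1> - |2>)/sqrt(2)
ket_II = [s,  s]    # |II> = (|1> + |2>)/sqrt(2)

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

assert abs(dot(ket_I, ket_II)) < 1e-12        # <I|II> = 0
assert abs(dot(ket_I, ket_I) - 1) < 1e-12     # normalized
HI = matvec(H, ket_I)                         # H|I> = (E0 + A)|I>
assert all(abs(x - (E0 + A)*y) < 1e-12 for x, y in zip(HI, ket_I))
HII = matvec(H, ket_II)                       # H|II> = (E0 - A)|II>
assert all(abs(x - (E0 - A)*y) < 1e-12 for x, y in zip(HII, ket_II))
print("E_I =", E0 + A, " E_II =", E0 - A)
```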
|
3 | 9 | The Ammonia Maser | 2 | The molecule in a static electric field | If the ammonia molecule is in either of the two states of definite energy and we disturb it at a frequency $\omega$ such that $\hbar\omega=$ $E_{\slI}-E_{\slII}=$ $2A$, the system may make a transition from one state to the other. Or, if it is in the upper state, it may change to the lower state and emit a photon. But in order to induce such transitions you must have a physical connection to the states—some way of disturbing the system. There must be some external machinery for affecting the states, such as magnetic or electric fields. In this particular case, these states are sensitive to an electric field. We will, therefore, look next at the problem of the behavior of the ammonia molecule in an external electric field. To discuss the behavior in an electric field, we will go back to the original base system $\ketsl{\slOne}$ and $\ketsl{\slTwo}$, rather than using $\ketsl{\slI}$ and $\ketsl{\slII}$. Suppose that there is an electric field in a direction perpendicular to the plane of the hydrogen atoms. Disregarding for the moment the possibility of flipping back and forth, would it be true that the energy of this molecule is the same for the two positions of the nitrogen atom? Generally, no. The electrons tend to lie closer to the nitrogen than to the hydrogen nuclei, so the hydrogens are slightly positive. The actual amount depends on the details of electron distribution. It is a complicated problem to figure out exactly what this distribution is, but in any case the net result is that the ammonia molecule has an electric dipole moment, as indicated in Fig. 9–1. We can continue our analysis without knowing in detail the direction or amount of displacement of the charge. However, to be consistent with the notation of others, let’s suppose that the electric dipole moment is $\FLPmu$, with its direction pointing from the nitrogen atom and perpendicular to the plane of the hydrogen atoms. 
Now, when the nitrogen flips from one side to the other, the center of mass will not move, but the electric dipole moment will flip over. As a result of this moment, the energy in an electric field $\Efieldvec$ will depend on the molecular orientation.2 With the assumption made above, the potential energy will be higher if the nitrogen atom points in the direction of the field, and lower if it is in the opposite direction; the separation in the two energies will be $2\mu\Efield$. In the discussion up to this point, we have assumed values of $E_0$ and $A$ without knowing how to calculate them. According to the correct physical theory, it should be possible to calculate these constants in terms of the positions and motions of all the nuclei and electrons. But nobody has ever done it. Such a system involves ten electrons and four nuclei and that’s just too complicated a problem. As a matter of fact, there is no one who knows much more about this molecule than we do. All anyone can say is that when there is an electric field, the energy of the two states is different, the difference being proportional to the electric field. We have called the coefficient of proportionality $2\mu$, but its value must be determined experimentally. We can also say that the molecule has the amplitude $A$ to flip over, but this will have to be measured experimentally. Nobody can give us accurate theoretical values of $\mu$ and $A$, because the calculations are too complicated to do in detail. For the ammonia molecule in an electric field, our description must be changed. If we ignored the amplitude for the molecule to flip from one configuration to the other, we would expect the energies of the two states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ to be $(E_0\pm\mu\Efield)$. Following the procedure of the last chapter, we take \begin{equation} \label{Eq:III:9:14} H_{11}=E_0+\mu\Efield,\quad H_{22}=E_0-\mu\Efield. 
\end{equation} Also we will assume that for the electric fields of interest the field does not affect appreciably the geometry of the molecule and, therefore, does not affect the amplitude that the nitrogen will jump from one position to the other. We can then take it that $H_{12}$ and $H_{21}$ are not changed; so \begin{equation} \label{Eq:III:9:15} H_{12}=H_{21}=-A. \end{equation} We must now solve the Hamiltonian equations, Eq. (8.43), with these new values of $H_{ij}$. We could solve them just as we did before, but since we are going to have several occasions to want the solutions for two-state systems, let’s solve the equations once and for all in the general case of arbitrary $H_{ij}$—assuming only that they do not change with time. We want the general solution of the pair of Hamiltonian equations \begin{align} \label{Eq:III:9:16} i\hbar\,\ddt{C_1}{t}&=H_{11}C_1+H_{12}C_2,\\[1ex] \label{Eq:III:9:17} i\hbar\,\ddt{C_2}{t}&=H_{21}C_1+H_{22}C_2. \end{align} Since these are linear differential equations with constant coefficients, we can always find solutions which are exponential functions of the independent variable $t$. We will first look for a solution in which $C_1$ and $C_2$ both have the same time dependence; we can use the trial functions \begin{equation*} C_1=a_1e^{-i\omega t},\quad C_2=a_2e^{-i\omega t}. \end{equation*} Since such a solution corresponds to a state of energy $E=\hbar\omega$, we may as well write right away \begin{align} \label{Eq:III:9:18} C_1&=a_1e^{-(i/\hbar)Et},\\[1ex] \label{Eq:III:9:19} C_2&=a_2e^{-(i/\hbar)Et}, \end{align} where $E$ is as yet unknown and to be determined so that the differential equations (9.16) and (9.17) are satisfied. When we substitute $C_1$ and $C_2$ from (9.18) and (9.19) in the differential equations (9.16) and (9.17), the derivatives give us just $-iE/\hbar$ times $C_1$ or $C_2$, so the left sides become just $EC_1$ and $EC_2$. 
Cancelling the common exponential factors, we get \begin{equation*} Ea_1=H_{11}a_1+H_{12}a_2,\quad Ea_2=H_{21}a_1+H_{22}a_2. \end{equation*} Or, rearranging the terms, we have \begin{align} \label{Eq:III:9:20} (E-H_{11})a_1-H_{12}a_2&=0,\\[1ex] \label{Eq:III:9:21} -H_{21}a_1+(E-H_{22})a_2&=0. \end{align} With such a set of homogeneous algebraic equations, there will be nonzero solutions for $a_1$ and $a_2$ only if the determinant of the coefficients of $a_1$ and $a_2$ is zero, that is, if \begin{equation} \label{Eq:III:9:22} \Det\begin{pmatrix} E-H_{11} & \phantom{E}-H_{12}\\[1ex] \phantom{E}-H_{21} & E-H_{22} \end{pmatrix}=0. \end{equation} However, when there are only two equations and two unknowns, we don’t need such a sophisticated idea. The two equations (9.20) and (9.21) each give a ratio for the two coefficients $a_1$ and $a_2$, and these two ratios must be equal. From (9.20) we have that \begin{equation} \label{Eq:III:9:23} \frac{a_1}{a_2} =\frac{H_{12}}{E-H_{11}}, \end{equation} and from (9.21) that \begin{equation} \label{Eq:III:9:24} \frac{a_1}{a_2} =\frac{E-H_{22}}{H_{21}}. \end{equation} Equating these two ratios, we get that $E$ must satisfy \begin{equation*} (E-H_{11})(E-H_{22})-H_{12}H_{21}=0. \end{equation*} This is the same result we would get by solving Eq. (9.22). Either way, we have a quadratic equation for $E$ which has two solutions: \begin{equation} \label{Eq:III:9:25} E=\!\frac{H_{11}\!+\!H_{22}}{2}\!\pm\! \sqrt{\frac{(H_{11}\!-\!H_{22})^2}{4}\!+\!H_{12}H_{21}}. \end{equation} There are two possible values for the energy $E$. Note that both solutions give real numbers for the energy, because $H_{11}$ and $H_{22}$ are real, and $H_{12}H_{21}$ is equal to $H_{12}H_{12}\cconj=\abs{H_{12}}^2$, which is both real and positive. Using the same convention we took before, we will call the upper energy $E_{\slI}$ and the lower energy $E_{\slII}$. We have \begin{align} \label{Eq:III:9:26} E_{\slI}&=\!\frac{H_{11}\!+\!H_{22}}{2}\!+\! 
\sqrt{\frac{(H_{11}\!-\!H_{22})^2}{4}\!+\!H_{12}H_{21}},\\[1.5ex] \label{Eq:III:9:27} E_{\slII}&=\!\frac{H_{11}\!+\!H_{22}}{2}\!-\! \sqrt{\frac{(H_{11}\!-\!H_{22})^2}{4}\!+\!H_{12}H_{21}}. \end{align} Using each of these two energies separately in Eqs. (9.18) and (9.19), we have the amplitudes for the two stationary states (the states of definite energy). If there are no external disturbances, a system initially in one of these states will stay that way forever—only its phase changes. We can check our results for two special cases. If $H_{12}=H_{21}=0$, we have that $E_{\slI}=H_{11}$ and $E_{\slII}=H_{22}$. This is certainly correct, because then Eqs. (9.16) and (9.17) are uncoupled, and each represents a state of energy $H_{11}$ and $H_{22}$. Next, if we set $H_{11}=H_{22}=E_0$ and $H_{21}=H_{12}=-A$, we get the solution we found before: \begin{equation*} E_{\slI}=E_0+A\quad \text{and}\quad E_{\slII}=E_0-A. \end{equation*} For the general case, the two solutions $E_{\slI}$ and $E_{\slII}$ refer to two states—which we can again call the states \begin{equation*} \ket{\psi_{\slI}}=\ketsl{\slI}e^{-(i/\hbar)E_{\slI}t}\quad \text{and}\quad \ket{\psi_{\slII}}=\ketsl{\slII}e^{-(i/\hbar)E_{\slII}t}. \end{equation*} These states will have $C_1$ and $C_2$ as given in Eqs. (9.18) and (9.19), where $a_1$ and $a_2$ are still to be determined. Their ratio is given by either Eq. (9.23) or Eq. (9.24). They must also satisfy one more condition. If the system is known to be in one of the stationary states, the sum of the probabilities that it will be found in $\ketsl{\slOne}$ or $\ketsl{\slTwo}$ must equal one. We must have that \begin{equation} \label{Eq:III:9:28} \abs{C_1}^2+\abs{C_2}^2=1, \end{equation} or, equivalently, \begin{equation} \label{Eq:III:9:29} \abs{a_1}^2+\abs{a_2}^2=1. \end{equation} These conditions do not uniquely specify $a_1$ and $a_2$; they are still undetermined by an arbitrary phase—in other words, by a factor like $e^{i\delta}$. 
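The general energies (9.26) and (9.27) are easy to check against the characteristic equation they came from. In this sketch the matrix elements are made-up numbers (with $H_{21}=H_{12}\cconj$, so that the energies come out real, as argued above):

```python
# Sketch (made-up matrix elements): the energies of Eqs. (9.26)-(9.27)
# satisfy (E - H11)(E - H22) - H12*H21 = 0, and are real when
# H21 = conjugate(H12).
import math

H11, H22 = 3.0, 1.0
H12 = complex(0.4, 0.3)
H21 = H12.conjugate()

avg  = (H11 + H22)/2
root = math.sqrt((H11 - H22)**2/4 + (H12*H21).real)  # H12*H21 = |H12|^2 >= 0
E_I, E_II = avg + root, avg - root                   # Eqs. (9.26) and (9.27)

# Both roots must satisfy the quadratic equation for E.
for E in (E_I, E_II):
    assert abs((E - H11)*(E - H22) - H12*H21) < 1e-12

print(E_I, E_II)
```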
Although general solutions for the $a$’s can be written down,3 it is usually more convenient to work them out for each special case. Let’s go back now to our particular example of the ammonia molecule in an electric field. Using the values for $H_{11}$, $H_{22}$, and $H_{12}$ given in (9.14) and (9.15), we get for the energies of the two stationary states \begin{equation} \label{Eq:III:9:30} E_{\slI}=E_0+\sqrt{A^2+\mu^2\Efield^2},\quad E_{\slII}=E_0-\sqrt{A^2+\mu^2\Efield^2}. \end{equation}
These two energies are plotted as a function of the electric field strength $\Efield$ in Fig. 9–2. When the electric field is zero, the two energies are, of course, just $E_0\pm A$. When an electric field is applied, the splitting between the two levels increases. The splitting increases at first slowly with $\Efield$, but eventually becomes proportional to $\Efield$. (The curve is a hyperbola.) For enormously strong fields, the energies are just \begin{equation} \label{Eq:III:9:31} E_{\slI}=E_0+\mu\Efield=H_{11},\quad E_{\slII}=E_0-\mu\Efield=H_{22}. \end{equation}
The fact that there is an amplitude for the nitrogen to flip back and forth has little effect when the two positions have very different energies. This is an interesting point which we will come back to again later. We are at last ready to understand the operation of the ammonia maser. The idea is the following. First, we find a way of separating molecules in the state $\ketsl{\slI}$ from those in the state $\ketsl{\slII}$.4 Then the molecules in the higher energy state $\ketsl{\slI}$ are passed through a cavity which has a resonant frequency of $24{,}000$ megacycles. The molecules can deliver energy to the cavity—in a way we will discuss later—and leave the cavity in the state $\ketsl{\slII}$. Each molecule that makes such a transition will deliver the energy $E=E_{\slI}-E_{\slII}$ to the cavity. The energy from the molecules will appear as electrical energy in the cavity. How can we separate the two molecular states? One method is as follows. The ammonia gas is let out of a little jet and passed through a pair of slits to give a narrow beam, as shown in Fig. 9–3. The beam is then sent through a region in which there is a large transverse electric field. The electrodes to produce the field are shaped so that the electric field varies rapidly across the beam. Then the square of the electric field $\Efieldvec\cdot\Efieldvec$ will have a large gradient perpendicular to the beam. Now a molecule in state $\ketsl{\slI}$ has an energy which increases with $\Efield^2$, and therefore this part of the beam will be deflected toward the region of lower $\Efield^2$. A molecule in state $\ketsl{\slII}$ will, on the other hand, be deflected toward the region of larger $\Efield^2$, since its energy decreases as $\Efield^2$ increases. Incidentally, with the electric fields which can be generated in the laboratory, the energy $\mu\Efield$ is always much smaller than $A$. In such cases, the square root in Eqs. 
(9.30) can be approximated by \begin{equation} \label{Eq:III:9:32} A\biggl( 1+\frac{1}{2}\,\frac{\mu^2\Efield^2}{A^2} \biggr). \end{equation} So the energy levels are, for all practical purposes, \begin{equation} \label{Eq:III:9:33} E_{\slI} =E_0+A+\frac{\mu^2\Efield^2}{2A} \end{equation} and \begin{equation} \label{Eq:III:9:34} E_{\slII} =E_0-A-\frac{\mu^2\Efield^2}{2A}. \end{equation} And the energies vary approximately linearly with $\Efield^2$. The force on the molecules is then \begin{equation} \label{Eq:III:9:35} \FLPF=\frac{\mu^2}{2A}\,\FLPgrad{\Efield^2}. \end{equation} Many molecules have an energy in an electric field which is proportional to $\Efield^2$. The coefficient is the polarizability of the molecule. Ammonia has an unusually high polarizability because of the small value of $A$ in the denominator. Thus, ammonia molecules are unusually sensitive to an electric field. (What would you expect for the dielectric coefficient of NH$_3$ gas?)
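To see how good the weak-field approximation is, here is a short numerical sketch (Python; $A$ and $\mu$ are set to one in arbitrary units, illustrative values only). It compares the exact square root of Eq. (9.30) with the expansion of Eq. (9.32) when $\mu\Efield\ll A$, and also checks the strong-field asymptotes of Eq. (9.31).

```python
import numpy as np

A, mu = 1.0, 1.0   # arbitrary units; illustrative values only

def exact_shift(field):
    """The square root in Eq. (9.30): sqrt(A^2 + mu^2 E^2)."""
    return np.sqrt(A**2 + (mu * field) ** 2)

def quadratic_shift(field):
    """Weak-field approximation, Eq. (9.32): A(1 + mu^2 E^2 / 2 A^2)."""
    return A * (1 + 0.5 * (mu * field) ** 2 / A**2)

# Zero field: the levels are just E0 +/- A
assert exact_shift(0.0) == A

# Weak field (mu*E << A): the quadratic form is excellent, so the energy
# varies linearly with E^2 and the force is F = (mu^2/2A) grad(E^2), Eq. (9.35)
for field in [0.01, 0.05, 0.1]:
    assert abs(quadratic_shift(field) - exact_shift(field)) / exact_shift(field) < 1e-4

# Very strong field: the hyperbola approaches the asymptotes E0 +/- mu*E, Eq. (9.31)
field = 1e4
assert np.isclose(exact_shift(field), mu * field, rtol=1e-6)
```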
9–3 Transitions in a time-dependent field

In the ammonia maser, the beam with molecules in the state $\ketsl{\slI}$ and with the energy $E_{\slI}$ is sent through a resonant cavity, as shown in Fig. 9–4. The other beam is discarded. Inside the cavity, there will be a time-varying electric field, so the next problem we must discuss is the behavior of a molecule in an electric field that varies with time. We have a completely different kind of problem—one with a time-varying Hamiltonian. Since $H_{ij}$ depends upon $\Efield$, the $H_{ij}$ vary with time, and we must determine the behavior of the system in this circumstance. To begin with, we write down the equations to be solved: \begin{equation} \begin{aligned} i\hbar\,\ddt{C_1}{t}&=(E_0+\mu\Efield)C_1-AC_2,\\[1ex] i\hbar\,\ddt{C_2}{t}&=-AC_1+(E_0-\mu\Efield)C_2. \end{aligned} \label{Eq:III:9:36} \end{equation} To be definite, let’s suppose that the electric field varies sinusoidally; then we can write \begin{equation} \label{Eq:III:9:37} \Efield=2\Efield_0\cos\omega t= \Efield_0(e^{i\omega t}+e^{-i\omega t}). \end{equation} In actual operation the frequency $\omega$ will be very nearly equal to the resonant frequency of the molecular transition $\omega_0=2A/\hbar$, but for the time being we want to keep things general, so we’ll let it have any value at all. The best way to solve our equations is to form linear combinations of $C_1$ and $C_2$ as we did before. So we add the two equations, divide by the square root of $2$, and use the definitions of $C_{\slI}$ and $C_{\slII}$ that we had in Eq. (9.13). We get \begin{equation} \label{Eq:III:9:38} i\hbar\,\ddt{C_{\slII}}{t}= (E_0-A)C_{\slII}+\mu\Efield C_{\slI}. \end{equation} You’ll note that this is the same as Eq. (9.9) with an extra term due to the electric field. Similarly, if we subtract the two equations (9.36), we get \begin{equation} \label{Eq:III:9:39} i\hbar\,\ddt{C_{\slI}}{t}= (E_0+A)C_{\slI}+\mu\Efield C_{\slII}.
\end{equation} Now the question is, how to solve these equations? They are more difficult than our earlier set, because $\Efield$ depends on $t$; and, in fact, for a general $\Efield(t)$ the solution is not expressible in elementary functions. However, we can get a good approximation so long as the electric field is small. First we will write \begin{equation} \begin{gathered} C_{\slI}=\gamma_{\slI}e^{-i(E_0+A)t/\hbar}= \gamma_{\slI}e^{-i(E_{\slI})t/\hbar},\\ \\ C_{\slII}=\gamma_{\slII}e^{-i(E_0-A)t/\hbar}= \gamma_{\slII}e^{-i(E_{\slII})t/\hbar}. \end{gathered} \label{Eq:III:9:40} \end{equation} If there were no electric field, these solutions would be correct with $\gamma_{\slI}$ and $\gamma_{\slII}$ just chosen as two complex constants. In fact, since the probability of being in state $\ketsl{\slI}$ is the absolute square of $C_{\slI}$ and the probability of being in state $\ketsl{\slII}$ is the absolute square of $C_{\slII}$, the probability of being in state $\ketsl{\slI}$ or in state $\ketsl{\slII}$ is just $\abs{\gamma_{\slI}}^2$ or $\abs{\gamma_{\slII}}^2$. For instance, if the system were to start originally in state $\ketsl{\slII}$ so that $\gamma_{\slI}$ was zero and $\abs{\gamma_{\slII}}^2$ was one, this condition would go on forever. There would be no chance, if the molecule were originally in state $\ketsl{\slII}$, ever to get into state $\ketsl{\slI}$. Now the idea of writing our equations in the form of Eq. (9.40) is that if $\mu\Efield$ is small in comparison with $A$, the solutions can still be written in this way, but then $\gamma_{\slI}$ and $\gamma_{\slII}$ become slowly varying functions of time—where by “slowly varying” we mean slowly in comparison with the exponential functions. That is the trick. We use the fact that $\gamma_{\slI}$ and $\gamma_{\slII}$ vary slowly to get an approximate solution. We want now to substitute $C_{\slI}$ from Eq. 
(9.40) in the differential equation (9.39), but we must remember that $\gamma_{\slI}$ is also a function of $t$. We have \begin{equation*} i\hbar\,\ddt{C_{\slI}}{t}= E_{\slI}\gamma_{\slI}e^{-iE_{\slI}t/\hbar}+ i\hbar\,\ddt{\gamma_{\slI}}{t}\,e^{-iE_{\slI}t/\hbar}. \end{equation*} The differential equation becomes \begin{equation} \label{Eq:III:9:41} \biggl(E_{\slI}\gamma_{\slI}+i\hbar\,\ddt{\gamma_{\slI}}{t}\biggr) e^{-(i/\hbar)E_{\slI}t}= E_{\slI}\gamma_{\slI}e^{-(i/\hbar)E_{\slI}t}+ \mu\Efield\gamma_{\slII}e^{-(i/\hbar)E_{\slII}t}. \end{equation}
Similarly, the equation in $dC_{\slII}/dt$ becomes \begin{equation} \label{Eq:III:9:42} \biggl(E_{\slII}\gamma_{\slII}+i\hbar\,\ddt{\gamma_{\slII}}{t}\biggr) e^{-(i/\hbar)E_{\slII}t}= E_{\slII}\gamma_{\slII}e^{-(i/\hbar)E_{\slII}t}+ \mu\Efield\gamma_{\slI}e^{-(i/\hbar)E_{\slI}t}. \end{equation}
Now you will notice that we have equal terms on both sides of each equation. We cancel these terms, and we also multiply the first equation by $e^{+iE_{\slI}t/\hbar}$ and the second by $e^{+iE_{\slII}t/\hbar}$. Remembering that $(E_{\slI}-E_{\slII})=$ $2A=$ $\hbar\omega_0$, we have finally, \begin{equation} \label{Eq:III:9:43} \begin{aligned} i\hbar\,\ddt{\gamma_{\slI}}{t}&= \mu\Efield(t)e^{i\omega_0t}\gamma_{\slII},\\[1ex] i\hbar\,\ddt{\gamma_{\slII}}{t}&= \mu\Efield(t)e^{-i\omega_0t}\gamma_{\slI}. \end{aligned} \end{equation} Now we have an apparently simple pair of equations—and they are still exact, of course. The derivative of one variable is a function of time $\mu\Efield(t)e^{i\omega_0t}$, multiplied by the second variable; the derivative of the second is a similar time function, multiplied by the first. Although these simple equations cannot be solved in general, we will solve them for some special cases. We are, for the moment at least, interested only in the case of an oscillating electric field. Taking $\Efield(t)$ as given in Eq. (9.37), we find that the equations for $\gamma_{\slI}$ and $\gamma_{\slII}$ become \begin{equation} \label{Eq:III:9:44} \begin{aligned} i\hbar\,\ddt{\gamma_{\slI}}{t}&= \mu\Efield_0[e^{i(\omega+\omega_0)t}+ e^{-i(\omega-\omega_0)t}]\gamma_{\slII},\\[1ex] i\hbar\,\ddt{\gamma_{\slII}}{t}&= \mu\Efield_0[e^{i(\omega-\omega_0)t}+ e^{-i(\omega+\omega_0)t}]\gamma_{\slI}. \end{aligned} \end{equation} Now if $\Efield_0$ is sufficiently small, the rates of change of $\gamma_{\slI}$ and $\gamma_{\slII}$ are also small. The two $\gamma$’s will not vary much with $t$, especially in comparison with the rapid variations due to the exponential terms.
These exponential terms have real and imaginary parts that oscillate at the frequency $\omega+\omega_0$ or $\omega-\omega_0$. The terms with $\omega+\omega_0$ oscillate very rapidly about an average value of zero and, therefore, do not contribute very much on the average to the rate of change of $\gamma$. So we can make a reasonably good approximation by replacing these terms by their average value, namely, zero. We will just leave them out, and take as our approximation: \begin{equation} \begin{aligned} i\hbar\,\ddt{\gamma_{\slI}}{t}&= \mu\Efield_0e^{-i(\omega-\omega_0)t}\gamma_{\slII},\\[1ex] i\hbar\,\ddt{\gamma_{\slII}}{t}&= \mu\Efield_0e^{i(\omega-\omega_0)t}\gamma_{\slI}. \end{aligned} \label{Eq:III:9:45} \end{equation} Even the remaining terms, with exponents proportional to $(\omega-\omega_0)$, will also vary rapidly unless $\omega$ is near $\omega_0$. Only then will the right-hand side vary slowly enough that any appreciable amount will accumulate when we integrate the equations with respect to $t$. In other words, with a weak electric field the only significant frequencies are those near $\omega_0$. With the approximation made in getting Eq. (9.45), the equations can be solved exactly, but the work is a little elaborate, so we won’t do that until later when we take up another problem of the same type. Now we’ll just solve them approximately—or rather, we’ll find an exact solution for the case of perfect resonance, $\omega=\omega_0$, and an approximate solution for frequencies near resonance.
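The claim that the rapidly oscillating $(\omega+\omega_0)$ terms average to nothing can be tested by brute force. The sketch below (Python; $\hbar=1$ and made-up values for $\omega_0$ and $\mu\Efield_0$, chosen so that $\mu\Efield_0\ll\hbar\omega_0$ as in the maser) integrates the exact equations (9.44) on resonance with a fixed-step Runge–Kutta method and compares the final transition probability with the prediction of the approximate equations (9.45), which at resonance give $\abs{\gamma_{\slII}}^2=\sin^2(\mu\Efield_0t/\hbar)$.

```python
import numpy as np

# Units with hbar = 1; illustrative numbers with mu*E0 << hbar*omega0
hbar = 1.0
omega0 = 50.0
mu_E0 = 0.5
omega = omega0          # on resonance

def rhs(t, y):
    """Full equations (9.44) for (gamma_I, gamma_II), counter-rotating terms kept."""
    gI, gII = y
    f_plus = np.exp(1j * (omega + omega0) * t) + np.exp(-1j * (omega - omega0) * t)
    f_minus = np.exp(1j * (omega - omega0) * t) + np.exp(-1j * (omega + omega0) * t)
    return np.array([-1j * mu_E0 * f_plus * gII / hbar,
                     -1j * mu_E0 * f_minus * gI / hbar])

# Fixed-step RK4, starting in the upper state I
y = np.array([1.0 + 0j, 0.0 + 0j])
t, dt, T = 0.0, 2e-4, 2.0
while t < T - dt / 2:
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

# Rotating-wave prediction from Eq. (9.45) at resonance
P_II_rwa = np.sin(mu_E0 * T / hbar) ** 2
P_II_full = abs(y[1]) ** 2
assert abs(P_II_full - P_II_rwa) < 0.02   # counter-rotating terms average out
```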
9–4 Transitions at resonance

Let’s take the case of perfect resonance first. If we take $\omega=\omega_0$, the exponentials are equal to one in both equations of (9.45), and we have just \begin{equation} \label{Eq:III:9:46} \ddt{\gamma_{\slI}}{t}=-\frac{i\mu\Efield_0}{\hbar}\,\gamma_{\slII},\quad \ddt{\gamma_{\slII}}{t}=-\frac{i\mu\Efield_0}{\hbar}\,\gamma_{\slI}. \end{equation} If we eliminate first $\gamma_{\slI}$ and then $\gamma_{\slII}$ from these equations, we find that each satisfies the differential equation of simple harmonic motion: \begin{equation} \label{Eq:III:9:47} \frac{d^2\gamma}{dt^2}=-\biggl( \frac{\mu\Efield_0}{\hbar} \biggr)^2\gamma. \end{equation} The general solutions for these equations can be made up of sines and cosines. As you can easily verify, the following equations are a solution: \begin{equation} \begin{aligned} \gamma_{\slI}&=a\cos\biggl(\frac{\mu\Efield_0}{\hbar}\biggr)t+ b\sin\biggl(\frac{\mu\Efield_0}{\hbar}\biggr)t,\\[1.5ex] \gamma_{\slII}&=ib\cos\biggl(\frac{\mu\Efield_0}{\hbar}\biggr)t- ia\sin\biggl(\frac{\mu\Efield_0}{\hbar}\biggr)t, \end{aligned} \label{Eq:III:9:48} \end{equation} where $a$ and $b$ are constants to be determined to fit any particular physical situation. For instance, suppose that at $t=0$ our molecular system was in the upper energy state $\ketsl{\slI}$, which would require—from Eq. (9.40)—that $\gamma_{\slI}=1$ and $\gamma_{\slII}=0$ at $t=0$. For this situation we would need $a=1$ and $b=0$. The probability that the molecule is in the state $\ketsl{\slI}$ at some later $t$ is the absolute square of $\gamma_{\slI}$, or \begin{equation} \label{Eq:III:9:49} P_{\slI}=\abs{\gamma_{\slI}}^2= \cos^2\biggl(\frac{\mu\Efield_0}{\hbar}\biggr)t.
\end{equation} Similarly, the probability that the molecule will be in the state $\ketsl{\slII}$ is given by the absolute square of $\gamma_{\slII}$, \begin{equation} \label{Eq:III:9:50} P_{\slII}=\abs{\gamma_{\slII}}^2= \sin^2\biggl(\frac{\mu\Efield_0}{\hbar}\biggr)t. \end{equation} So long as $\Efield$ is small and we are on resonance, the probabilities are given by simple oscillating functions. The probability to be in state $\ketsl{\slI}$ falls from one to zero and back again, while the probability to be in the state $\ketsl{\slII}$ rises from zero to one and back. The time variation of the two probabilities is shown in Fig. 9–5. Needless to say, the sum of the two probabilities is always equal to one; the molecule is always in some state! Let’s suppose that it takes the molecule the time $T$ to go through the cavity. If we make the cavity just long enough so that $\mu\Efield_0T/\hbar=\pi/2$, then a molecule which enters in state $\ketsl{\slI}$ will certainly leave it in state $\ketsl{\slII}$. If it enters the cavity in the upper state, it will leave the cavity in the lower state. In other words, its energy is decreased, and the loss of energy can’t go anywhere else but into the machinery which generates the field. The details by which you can see how the energy of the molecule is fed into the oscillations of the cavity are not simple; however, we don’t need to study these details, because we can use the principle of conservation of energy. (We could study them if we had to, but then we would have to deal with the quantum mechanics of the field in the cavity in addition to the quantum mechanics of the atom.) In summary: the molecule enters the cavity, the cavity field—oscillating at exactly the right frequency—induces transitions from the upper to the lower state, and the energy released is fed into the oscillating field. 
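The bookkeeping of Eqs. (9.49) and (9.50) looks like this numerically (Python sketch; $\hbar=1$ and an arbitrary illustrative value for the rate $\mu\Efield_0$):

```python
import numpy as np

hbar = 1.0
mu_E0 = 0.3   # mu times the field amplitude; arbitrary units, illustrative only

t = np.linspace(0.0, 20.0, 2001)
P_I = np.cos(mu_E0 * t / hbar) ** 2    # Eq. (9.49)
P_II = np.sin(mu_E0 * t / hbar) ** 2   # Eq. (9.50)

# The molecule is always in *some* state:
assert np.allclose(P_I + P_II, 1.0)

# Choose the transit time T so that mu*E0*T/hbar = pi/2; a molecule that
# enters in state I then leaves in state II with certainty.
T = (np.pi / 2) * hbar / mu_E0
assert np.isclose(np.sin(mu_E0 * T / hbar) ** 2, 1.0)
```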
In an operating maser the molecules deliver enough energy to maintain the cavity oscillations—not only providing enough power to make up for the cavity losses but even providing small amounts of excess power that can be drawn from the cavity. Thus, the molecular energy is converted into the energy of an external electromagnetic field. Remember that before the beam enters the cavity, we have to use a filter which separates the beam so that only the upper state enters. It is easy to demonstrate that if you were to start with molecules in the lower state, the process would go the other way and take energy out of the cavity. If you were to put the unfiltered beam in, as many molecules would take energy out as would put energy in, so nothing much would happen. In actual operation it isn’t necessary, of course, to make $(\mu\Efield_0T/\hbar)$ exactly $\pi/2$. For any other value (except an exact integral multiple of $\pi$), there is some probability for transitions from state $\ketsl{\slI}$ to state $\ketsl{\slII}$. For these other values, however, the device isn’t $100$ percent efficient; many of the molecules which leave the cavity could have delivered some energy to the cavity but didn’t. In actual use, the velocity of all the molecules is not the same; they have some kind of Maxwell distribution. This means that the ideal periods of time for different molecules will be different, and it is impossible to get $100$ percent efficiency for all the molecules at once. In addition, there is another complication which is easy to take into account, but we don’t want to bother with it at this stage. You remember that the electric field in a cavity usually varies from place to place across the cavity. Thus, as the molecules drift across the cavity, the electric field at the molecule varies in a way that is more complicated than the simple sinusoidal oscillation in time that we have assumed.
Clearly, one would have to use a more complicated integration to do the problem exactly, but the general idea is still the same. There are other ways of making masers. Instead of separating the atoms in state $\ketsl{\slI}$ from those in state $\ketsl{\slII}$ by a Stern-Gerlach apparatus, one can have the atoms already in the cavity (as a gas or a solid) and shift atoms from state $\ketsl{\slII}$ to state $\ketsl{\slI}$ by some means. One way is the one used in the so-called three-state maser. For it, atomic systems are used which have three energy levels, as shown in Fig. 9–6, with the following special properties. The system will absorb radiation (say, light) of frequency $\omega_1$ and go from the lowest energy level $E_{\slII}$ to some high-energy level $E'$, and then will quickly emit photons of frequency $\omega_2$ and go to the state $\ketsl{\slI}$ with energy $E_{\slI}$. The state $\ketsl{\slI}$ has a long lifetime so its population can be raised, and the conditions are then appropriate for maser operation between states $\ketsl{\slI}$ and $\ketsl{\slII}$. Although such a device is called a “three-state” maser, the maser operation really works just as a two-state system such as we are describing. A laser (Light Amplification by Stimulated Emission of Radiation) is just a maser working at optical frequencies. The “cavity” for a laser usually consists of just two plane mirrors between which standing waves are generated.
9–5 Transitions off resonance

Finally, we would like to find out how the states vary in the circumstance that the cavity frequency is nearly, but not exactly, equal to $\omega_0$. We could solve this problem exactly, but instead of trying to do that, we’ll take the important case that the electric field is small and also the period of time $T$ is small, so that $\mu\Efield_0T/\hbar$ is much less than one. Then, even in the case of perfect resonance which we have just worked out, the probability of making a transition is small. Suppose that we start again with $\gamma_{\slI}=1$ and $\gamma_{\slII}=0$. During the time $T$ we would expect $\gamma_{\slI}$ to remain nearly equal to one, and $\gamma_{\slII}$ to remain very small compared with unity. Then the problem is very easy. We can calculate $\gamma_{\slII}$ from the second equation in (9.45), taking $\gamma_{\slI}$ equal to one and integrating from $t=0$ to $t=T$. We get \begin{equation} \label{Eq:III:9:51} \gamma_{\slII}=\frac{\mu\Efield_0}{\hbar}\,\biggl[ \frac{1-e^{i(\omega-\omega_0)T}}{\omega-\omega_0} \biggr]. \end{equation} This $\gamma_{\slII}$, used with Eq. (9.40), gives the amplitude to have made a transition from the state $\ketsl{\slI}$ to the state $\ketsl{\slII}$ during the time interval $T$. The probability $P(\slI\to\slII\,)$ to make the transition is $\abs{\gamma_{\slII}}^2$, or \begin{equation} \label{Eq:III:9:52} P(\slI\to\slII\,)=\abs{\gamma_{\slII}}^2= \biggl[\frac{\mu\Efield_0T}{\hbar}\biggr]^2 \,\frac{\sin^2[(\omega-\omega_0)T/2]} {[(\omega-\omega_0)T/2]^2}. \end{equation}
It is interesting to plot this probability for a fixed length of time as a function of the frequency of the cavity in order to see how sensitive it is to frequencies near the resonant frequency $\omega_0$. We show such a plot of $P(\slI\to\slII\,)$ in Fig. 9–7. (The vertical scale has been adjusted to be $1$ at the peak by dividing by the value of the probability when $\omega=\omega_0$.) We have seen a curve like this in the diffraction theory, so you should already be familiar with it. The curve falls rather abruptly to zero for $(\omega-\omega_0)=2\pi/T$ and never regains significant size for large frequency deviations. In fact, by far the greatest part of the area under the curve lies within the range $\pm\pi/T$. It is possible to show5 that the area under the curve is just $2\pi/T$ and is equal to the area of the shaded rectangle drawn in the figure. Let’s examine the implication of our results for a real maser. Suppose that the ammonia molecule is in the cavity for a reasonable length of time, say for one millisecond. Then for $f_0=24{,}000$ megacycles, we can calculate that the probability for a transition falls to zero for a frequency deviation of $(f-f_0)/f_0=1/f_0T$, which is four parts in $10^8$. Evidently the frequency must be very close to $\omega_0$ to get a significant transition probability. Such an effect is the basis of the great precision that can be obtained with “atomic” clocks, which work on the maser principle.
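The numbers quoted for the linewidth follow from simple arithmetic; here is the check (Python, using the transit time and frequency from the text):

```python
# Transit time and operating frequency from the text
T = 1e-3     # one millisecond in the cavity
f0 = 24e9    # 24,000 megacycles = 24 GHz

# The probability of Eq. (9.52) first falls to zero when (omega - omega0)T/2 = pi,
# i.e. (omega - omega0) = 2*pi/T, or in ordinary frequency (f - f0) = 1/T.
delta_f = 1.0 / T                  # 1000 cycles per second
fractional_width = delta_f / f0    # equals 1/(f0*T)
assert abs(fractional_width - 4.1667e-8) < 1e-11   # "four parts in 10^8"
```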
9–6 The absorption of light

Our treatment above applies to a more general situation than the ammonia maser. We have treated the behavior of a molecule under the influence of an electric field, whether that field was confined in a cavity or not. So we could be simply shining a beam of “light”—at microwave frequencies—at the molecule and ask for the probability of emission or absorption. Our equations apply equally well to this case, but let’s rewrite them in terms of the intensity of the radiation rather than the electric field. If we define the intensity $\intensity$ to be the average energy flow per unit area per second, then from Chapter 27 of Volume II, we can write \begin{equation*} \intensity=\epsO c^2\abs{\Efieldvec\times\FLPB}_{\text{ave}}= \tfrac{1}{2}\epsO c^2\abs{\Efieldvec\times\FLPB}_{\text{max}}= 2\epsO c\Efield_0^2. \end{equation*} (The maximum value of $\Efield$ is $2\Efield_0$.) The transition probability now becomes: \begin{equation} \label{Eq:III:9:53} P(\slI\to\slII\,)=2\pi\biggl[\frac{\mu^2}{4\pi\epsO\hbar^2c}\biggr] \intensity T^2\, \frac{\sin^2[(\omega-\omega_0)T/2]} {[(\omega-\omega_0)T/2]^2}. \end{equation}
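The substitution $\Efield_0^2=\intensity/2\epsO c$ that turns Eq. (9.52) into Eq. (9.53) can be verified numerically. In this sketch (Python) the values of $\mu$, $T$, and $\Efield_0$ are made-up illustrative numbers, not the real ammonia parameters; only the physical constants are real.

```python
import numpy as np

eps0 = 8.854e-12   # permittivity of free space, F/m
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # J*s

def intensity(E0):
    """I = 2*eps0*c*E0^2 for a field E = 2*E0*cos(wt), whose peak value is 2*E0."""
    return 2 * eps0 * c * E0**2

# Illustrative (not physical) values for the dipole moment, transit time, field:
mu, T, E0 = 5e-30, 1e-3, 100.0

# On resonance, Eq. (9.52) gives (mu*E0*T/hbar)^2; Eq. (9.53) rewrites this as
# 2*pi * [mu^2/(4*pi*eps0*hbar^2*c)] * I * T^2.  The two must agree exactly.
lhs = (mu * E0 * T / hbar) ** 2
rhs = 2 * np.pi * (mu**2 / (4 * np.pi * eps0 * hbar**2 * c)) * intensity(E0) * T**2
assert np.isclose(lhs, rhs)
```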
Ordinarily the light shining on such a system is not exactly monochromatic. It is, therefore, interesting to solve one more problem—that is, to calculate the transition probability when the light has intensity $\intensity(\omega)$ per unit frequency interval, covering a broad range which includes $\omega_0$. Then, the probability of going from $\ketsl{\slI}$ to $\ketsl{\slII}$ will become an integral: \begin{equation} \label{Eq:III:9:54} P(\slI\to\slII\,)=2\pi\biggl[\frac{\mu^2}{4\pi\epsO\hbar^2c}\biggr]T^2 \int_0^\infty\intensity(\omega)\, \frac{\sin^2[(\omega-\omega_0)T/2]} {[(\omega-\omega_0)T/2]^2}\,d\omega. \end{equation}
In general, $\intensity(\omega)$ will vary much more slowly with $\omega$ than the sharp resonance term. The two functions might appear as shown in Fig. 9–8. In such cases, we can replace $\intensity(\omega)$ by its value $\intensity(\omega_0)$ at the center of the sharp resonance curve and take it outside of the integral. What remains is just the integral under the curve of Fig. 9–7, which is, as we have seen, just equal to $2\pi/T$. We get the result that \begin{equation} \label{Eq:III:9:55} P(\slI\to\slII\,)=4\pi^2 \biggl[\frac{\mu^2}{4\pi\epsO\hbar^2c}\biggr] \intensity(\omega_0)T. \end{equation} This is an important result, because it is the general theory of the absorption of light by any molecular or atomic system. Although we began by considering a case in which state $\ketsl{\slI}$ had a higher energy than state $\ketsl{\slII}$, none of our arguments depended on that fact. Equation (9.55) still holds if the state $\ketsl{\slI}$ has a lower energy than the state $\ketsl{\slII}$; then $P(\slI\to\slII\,)$ represents the probability for a transition with the absorption of energy from the incident electromagnetic wave. The absorption of light by any atomic system always involves the amplitude for a transition in an oscillating electric field between two states separated by an energy $E=\hbar\omega_0$. For any particular case, it is always worked out in just the way we have done here and gives an expression like Eq. (9.55). We, therefore, emphasize the following features of this result. First, the probability is proportional to $T$. In other words, there is a constant probability per unit time that transitions will occur. Second, this probability is proportional to the intensity of the light incident on the system.
Finally, the transition probability is proportional to $\mu^2$, where, you remember, $\mu\Efield$ defined the shift in energy due to the electric field $\Efield$. Because of this, $\mu\Efield$ also appeared in Eqs. (9.38) and (9.39) as the coupling term that is responsible for the transition between the otherwise stationary states $\ketsl{\slI}$ and $\ketsl{\slII}$. In other words, for the small $\Efield$ we have been considering, $\mu\Efield$ is the so-called “perturbation term” in the Hamiltonian matrix element which connects the states $\ketsl{\slI}$ and $\ketsl{\slII}$. In the general case, we would have that $\mu\Efield$ gets replaced by the matrix element $\bracketsl{\slII}{H}{\slI}$ (see Section 5–6). In Volume I (Section 42–5) we talked about the relations among light absorption, induced emission, and spontaneous emission in terms of the Einstein $A$- and $B$-coefficients. Here, we have at last the quantum mechanical procedure for computing these coefficients. What we have called $P(\slI\to\slII\,)$ for our two-state ammonia molecule corresponds precisely to the absorption coefficient $B_{nm}$ of the Einstein radiation theory. For the complicated ammonia molecule—which is too difficult for anyone to calculate—we have taken the matrix element $\bracketsl{\slII}{H}{\slI}$ as $\mu\Efield$, saying that $\mu$ is to be gotten from experiment. For simpler atomic systems, the $\mu_{mn}$ which belongs to any particular transition can be calculated from the definition \begin{equation} \label{Eq:III:9:56} \mu_{mn}\Efield=\bracket{m}{H}{n}=H_{mn}, \end{equation} where $H_{mn}$ is the matrix element of the Hamiltonian which includes the effects of a weak electric field. The $\mu_{mn}$ calculated in this way is called the electric dipole matrix element. The quantum mechanical theory of the absorption and emission of light is, therefore, reduced to a calculation of these matrix elements for particular atomic systems. 
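The step of pulling $\intensity(\omega_0)$ outside the integral relied on the area under the resonance curve being $2\pi/T$; that can be confirmed numerically (Python sketch; the transit time is an arbitrary illustrative value):

```python
import numpy as np

T = 3.0   # arbitrary transit time, illustrative value only

# Resonance factor sin^2[(w - w0)T/2] / [(w - w0)T/2]^2 as a function of x = w - w0.
# np.sinc(u) = sin(pi*u)/(pi*u), so we need u = x*T/(2*pi).
x = np.linspace(-2000.0 / T, 2000.0 / T, 2_000_001)
y = np.sinc(x * T / (2 * np.pi)) ** 2

# Area under the curve; the text states it equals 2*pi/T (Fig. 9-7).
area = np.sum(y) * (x[1] - x[0])   # simple Riemann sum over a wide window
assert np.isclose(area, 2 * np.pi / T, rtol=1e-3)
```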
Our study of a simple two-state system has thus led us to an understanding of the general problem of the absorption and emission of light.
10 Other Two-State Systems

10–1 The hydrogen molecular ion

In the last chapter we discussed some aspects of the ammonia molecule under the approximation that it can be considered as a two-state system. It is, of course, not really a two-state system—there are many states of rotation, vibration, translation, and so on—but each of these states of motion must be analyzed in terms of two internal states because of the flip-flop of the nitrogen atom. Here we are going to consider other examples of systems which, to some approximation or other, can be considered as two-state systems. Lots of things will be approximate because there are always many other states, and in a more accurate analysis they would have to be taken into account. But in each of our examples we will be able to understand a great deal by just thinking about two states. Since we will only be dealing with two-state systems, the Hamiltonian we need will look just like the one we used in the last chapter. When the Hamiltonian is independent of time, we know that there are two stationary states with definite—and usually different—energies. Generally, however, we start our analysis with a set of base states which are not these stationary states, but states which may, perhaps, have some other simple physical meaning. Then, the stationary states of the system will be represented by a linear combination of these base states. For convenience, we will summarize the important equations from Chapter 9. Let the original choice of base states be $\ketsl{\slOne}$ and $\ketsl{\slTwo}$. Then any state $\ket{\psi}$ is represented by the linear combination \begin{equation} \label{Eq:III:10:1} \ket{\psi}=\ketsl{\slOne}\braket{\slOne}{\psi}+ \ketsl{\slTwo}\braket{\slTwo}{\psi}= \ketsl{\slOne}C_1+\ketsl{\slTwo}C_2. \end{equation}
The amplitudes $C_i$ (by which we mean either $C_1$ or $C_2$) satisfy the two linear differential equations \begin{equation} \label{Eq:III:10:2} i\hbar\,\ddt{C_i}{t}=\sum_jH_{ij}C_j, \end{equation} where both $i$ and $j$ take on the values $1$ and $2$. When the terms of the Hamiltonian $H_{ij}$ do not depend on $t$, the two states of definite energy (the stationary states), which we call \begin{equation*} \ket{\psi_{\slI}}=\ketsl{\slI}e^{-(i/\hbar)E_{\slI}t}\quad \text{and}\quad \ket{\psi_{\slII}}=\ketsl{\slII}e^{-(i/\hbar)E_{\slII}t}, \end{equation*} have the energies \begin{equation} \begin{aligned} E_{\slI}&=\!\frac{H_{11}\!+H_{22}}{2}\!+\! \sqrt{\biggl(\!\frac{H_{11}\!-H_{22}}{2}\!\biggr)^2\!\!\!+ H_{12}H_{21}},\\[1.5ex] E_{\slII}&=\!\frac{H_{11}\!+H_{22}}{2}\!-\! \sqrt{\biggl(\!\frac{H_{11}\!-H_{22}}{2}\!\biggr)^2\!\!\!+ H_{12}H_{21}}. \end{aligned} \label{Eq:III:10:3} \end{equation} The two $C$’s for each of these states have the same time dependence. The state vectors $\ketsl{\slI}$ and $\ketsl{\slII}$ which go with the stationary states are related to our original base states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ by \begin{equation} \begin{aligned} \ketsl{\slI}&=\ketsl{\slOne}a_1+\ketsl{\slTwo}a_2,\\[1ex] \ketsl{\slII}&=\ketsl{\slOne}a_1'+\ketsl{\slTwo}a_2'. \end{aligned} \label{Eq:III:10:4} \end{equation} The $a$’s are complex constants, which satisfy \begin{gather} \abs{a_1}^2+\abs{a_2}^2=1,\notag\\[2ex] \label{Eq:III:10:5} \frac{a_1}{a_2}= \frac{H_{12}}{E_{\slI}-H_{11}},\\[2ex] \abs{a_1'}^2+\abs{a_2'}^2=1,\notag\\[2ex] \label{Eq:III:10:6} \frac{a_1'}{a_2'}= \frac{H_{12}}{E_{\slII}-H_{11}}.
\end{gather} If $H_{11}$ and $H_{22}$ are equal—say both are equal to $E_0$—and $H_{12}=H_{21}=-A$, then $E_{\slI}=E_0+A$, $E_{\slII}=E_0-A$, and the states $\ketsl{\slI}$ and $\ketsl{\slII}$ are particularly simple: \begin{equation} \label{Eq:III:10:7} \ketsl{\slI}=\frac{1}{\sqrt{2}}\,\biggl[ \ketsl{\slOne}-\ketsl{\slTwo}\biggr],\quad \ketsl{\slII}=\frac{1}{\sqrt{2}}\,\biggl[ \ketsl{\slOne}+\ketsl{\slTwo}\biggr]. \end{equation}
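These formulas are easy to check numerically. Here is a minimal sketch (the values of $E_0$ and $A$ are arbitrary illustrative numbers) that diagonalizes the symmetric two-state Hamiltonian and recovers the energies $E_0\pm A$ together with the states of Eq. (10.7):

```python
import numpy as np

# Illustrative numbers only; the structure of H, not the values, is the point.
E0, A = -13.6, 1.0

# Symmetric two-state Hamiltonian: H11 = H22 = E0, H12 = H21 = -A.
H = np.array([[E0, -A],
              [-A, E0]])

# numpy returns the eigenvalues in ascending order:
energies, vectors = np.linalg.eigh(H)

print(energies[0])   # E0 - A  (state II, the lower level)
print(energies[1])   # E0 + A  (state I, the upper level)

# Each stationary state has amplitudes of magnitude 1/sqrt(2) on |1> and |2>,
# exactly as in Eq. (10.7).
print(np.abs(vectors))
```

The eigenvector columns come out as $(\ketsl{\slOne}\mp\ketsl{\slTwo})/\sqrt{2}$ up to an overall sign, which is physically irrelevant.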
Now we will use these results to discuss a number of interesting examples taken from the fields of chemistry and physics. The first example is the hydrogen molecular ion. A positively ionized hydrogen molecule consists of two protons with one electron worming its way around them. If the two protons are very far apart, what states would we expect for this system? The answer is pretty clear: The electron will stay close to one proton and form a hydrogen atom in its lowest state, and the other proton will remain alone as a positive ion. So, if the two protons are far apart, we can visualize one physical state in which the electron is “attached” to one of the protons. There is, clearly, another state symmetric to that one in which the electron is near the other proton, and the first proton is the one that is an ion. We will take these two as our base states, and we’ll call them $\ketsl{\slOne}$ and $\ketsl{\slTwo}$. They are sketched in Fig. 10–1. Of course, there are really many states of an electron near a proton, because the combination can exist as any one of the excited states of the hydrogen atom. We are not interested in that variety of states now; we will consider only the situation in which the hydrogen atom is in the lowest state—its ground state—and we will, for the moment, disregard spin of the electron. We can just suppose that for all our states the electron has its spin “up” along the $z$-axis.1 Now to remove an electron from a hydrogen atom requires $13.6$ electron volts of energy. So long as the two protons of the hydrogen molecular ion are far apart, it still requires about this much energy—which is for our present considerations a great deal of energy—to get the electron somewhere near the midpoint between the protons. So it is impossible, classically, for the electron to jump from one proton to the other. However, in quantum mechanics it is possible—though not very likely. 
There is some small amplitude for the electron to move from one proton to the other. As a first approximation, then, each of our base states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ will have the energy $E_0$, which is just the energy of one hydrogen atom plus one proton. We can take that the Hamiltonian matrix elements $H_{11}$ and $H_{22}$ are both approximately equal to $E_0$. The other matrix elements $H_{12}$ and $H_{21}$, which are the amplitudes for the electron to go back and forth, we will again write as $-A$. You see that this is the same game we played in the last two chapters. If we disregard the fact that the electron can flip back and forth, we have two states of exactly the same energy. This energy will, however, be split into two energy levels by the possibility of the electron going back and forth—the greater the probability of the transition, the greater the split. So the two energy levels of the system are $E_0+A$ and $E_0-A$; and the states which have these definite energies are given by Eqs. (10.7). From our solution we see that if a proton and a hydrogen atom are put anywhere near together, the electron will not stay on one of the protons but will flip back and forth between the two protons. If it starts on one of the protons, it will oscillate back and forth between the states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$—giving a time-varying solution. In order to have the lowest energy solution (which does not vary with time), it is necessary to start the system with equal amplitudes for the electron to be around each proton. Remember, there are not two electrons—we are not saying that there is an electron around each proton. There is only one electron, and it has the same amplitude—$1/\sqrt{2}$ in magnitude—to be in either position. Now the amplitude $A$ for an electron which is near one proton to get to the other one depends on the separation between the protons. The closer the protons are together, the larger the amplitude. 
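The back-and-forth oscillation can be seen directly by integrating Eq. (10.2) with this Hamiltonian. A small sketch, in units with $\hbar=1$ and with arbitrary illustrative values of $E_0$ and $A$, starting the electron on one proton:

```python
import numpy as np

hbar = 1.0                     # work in units with hbar = 1
E0, A = 0.0, 0.5               # illustrative values; A sets the flip rate
H = np.array([[E0, -A],
              [-A, E0]], dtype=complex)

# Propagator U(t) = exp(-i H t / hbar), built from the eigenvectors of H.
w, v = np.linalg.eigh(H)
def U(t):
    return v @ np.diag(np.exp(-1j * w * t / hbar)) @ v.conj().T

C0 = np.array([1.0, 0.0], dtype=complex)   # electron starts in state |1>

# The probability of still being in |1> flips back and forth:
#   P1(t) = cos^2(A t / hbar).
for t in (0.0, np.pi / (4 * A), np.pi / (2 * A)):
    C = U(t) @ C0
    print(t, abs(C[0])**2)     # 1, then 1/2, then 0
```

At $t=\pi\hbar/2A$ the electron has moved entirely to the other proton; only the equal-amplitude combinations of Eq. (10.7) give probabilities that do not vary with time.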
You remember that we talked in Chapter 7 about the amplitude for an electron to “penetrate a barrier,” which it could not do classically. We have the same situation here. The amplitude for an electron to get across decreases roughly exponentially with the distance—for large distances. Since the transition probability, and therefore $A$, gets larger when the protons are closer together, the separation of the energy levels will also get larger. If the system is in the state $\ketsl{\slI}$, the energy $E_0+A$ increases with decreasing distance, so these quantum mechanical effects make a repulsive force tending to keep the protons apart. On the other hand, if the system is in the state $\ketsl{\slII}$, the total energy decreases if the protons are brought closer together; there is an attractive force pulling the protons together. The variation of the two energies with the distance between the two protons should be roughly as shown in Fig. 10–2. We have, then, a quantum-mechanical explanation of the binding force that holds the $\text{H}_2^+$ ion together. We have, however, forgotten one thing. In addition to the force we have just described, there is also an electrostatic repulsive force between the two protons. When the two protons are far apart—as in Fig. 10–1—the “bare” proton sees only a neutral atom, so there is a negligible electrostatic force. At very close distances, however, the “bare” proton begins to get “inside” the electron distribution—that is, it is closer to the proton on the average than to the electron. So there begins to be some extra electrostatic energy which is, of course, positive. This energy—which also varies with the separation—should be included in $E_0$. So for $E_0$ we should take something like the broken-line curve in Fig. 10–2 which rises rapidly for distances less than the radius of a hydrogen atom. We should add and subtract the flip-flop energy $A$ from this $E_0$. 
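The shapes of Figs. 10–2 and 10–3 can be mimicked with a toy model. Everything below is an assumption chosen only to reproduce the qualitative shapes: a screened proton-proton repulsion for $E_0(D)$ and an exponentially falling exchange amplitude $A(D)$; none of the constants are physical.

```python
import numpy as np

D  = np.linspace(0.5, 6.0, 1101)       # proton separation (arbitrary units)
E0 = 2.0 * np.exp(-2.0 * D) / D        # assumed screened Coulomb repulsion
A  = 1.5 * np.exp(-D)                  # assumed flip-flop amplitude

E_I  = E0 + A                          # upper curve: repulsive at all D
E_II = E0 - A                          # lower curve: attractive, then repulsive

i = np.argmin(E_II)
print(D[i])      # an interior minimum: the equilibrium separation
print(E_II[i])   # negative: bound relative to a distant proton + atom (E -> 0)
```

The lower curve has a minimum below zero (a bound state), while the upper curve decreases monotonically toward zero, so a system in state $\ketsl{\slI}$ is pushed apart at every separation.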
When we do that, the energies $E_{\slI}$ and $E_{\slII}$ will vary with the interproton distance $D$ as shown in Fig. 10–3. [In this figure, we have plotted the results of a more detailed calculation. The interproton distance is given in units of $1$ Å ($10^{-8}$ cm), and the excess energy over a proton plus a hydrogen atom is given in units of the binding energy of the hydrogen atom—the so-called “Rydberg” energy, $13.6$ eV.] We see that the state $\ketsl{\slII}$ has a minimum-energy point. This will be the equilibrium configuration—the lowest energy condition—for the $\text{H}_2^+$ ion. The energy at this point is lower than the energy of a separated proton and hydrogen atom, so the system is bound. A single electron acts to hold the two protons together. A chemist would call it a “one-electron bond.” This kind of chemical binding is also often called “quantum mechanical resonance” (by analogy with the two coupled pendulums we have described before). But that really sounds more mysterious than it is; it is only a “resonance” if you start out by making a poor choice for your base states—as we did also! If you picked the state $\ketsl{\slII}$, you would have the lowest energy state—that’s all. We can see in another way why such a state should have a lower energy than a proton and a hydrogen atom. Let’s think about an electron near two protons with some fixed, but not too large, separation. You remember that with a single proton the electron is “spread out” because of the uncertainty principle. It seeks a balance between having a low Coulomb potential energy and not getting confined into too small a space, which would make a high kinetic energy (because of the uncertainty relation $\Delta p\,\Delta x\approx\hbar$). Now if there are two protons, there is more space where the electron can have a low potential energy. It can spread out—lowering its kinetic energy—without increasing its potential energy. The net result is a lower energy than a proton and a hydrogen atom.
Then why does the other state $\ketsl{\slI}$ have a higher energy? Notice that this state is the difference of the states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$. Because of the symmetry of $\ketsl{\slOne}$ and $\ketsl{\slTwo}$, the difference must have zero amplitude to find the electron half-way between the two protons. This means that the electron is somewhat more confined, which leads to a larger energy. We should say that our approximate treatment of the $\text{H}_2^+$ ion as a two-state system breaks down pretty badly once the protons get as close together as they are at the minimum in the curve of Fig. 10–3, and so, will not give a good value for the actual binding energy. For small separations, the energies of the two “states” we imagined in Fig. 10–1 are not really equal to $E_0$; a more refined quantum mechanical treatment is needed. Suppose we ask now what would happen if instead of two protons, we had two different objects—as, for example, one proton and one lithium positive ion (both particles still with a single positive charge). In such a case, the two terms $H_{11}$ and $H_{22}$ of the Hamiltonian would no longer be equal; they would, in fact, be quite different. If it should happen that the difference $(H_{11}-H_{22})$ is, in absolute value, much greater than $A=-H_{12}$, the attractive force gets very weak, as we can see in the following way. If we put $H_{12}H_{21}=A^2$ into Eqs. (10.3) we get \begin{equation*} E=\frac{H_{11}+H_{22}}{2}\pm \frac{H_{11}-H_{22}}{2} \sqrt{1+\frac{4A^2}{(H_{11}-H_{22})^2}}. \end{equation*} When $(H_{11}-H_{22})^2$ is much greater than $A^2$, the square root is very nearly equal to \begin{equation*} 1+\frac{2A^2}{(H_{11}-H_{22})^2}. \end{equation*} The two energies are then \begin{equation} \begin{aligned} E_{\slI}&=H_{11}+ \frac{A^2}{(H_{11}-H_{22})},\\[1ex] E_{\slII}&=H_{22}- \frac{A^2}{(H_{11}-H_{22})}.
\end{aligned} \label{Eq:III:10:8} \end{equation} They are now very nearly just the energies $H_{11}$ and $H_{22}$ of the isolated atoms, pushed apart only slightly by the flip-flop amplitude $A$. The energy difference $E_{\slI}-E_{\slII}$ is \begin{equation*} (H_{11}-H_{22})+\frac{2A^2}{H_{11}-H_{22}}. \end{equation*} The additional separation from the flip-flop of the electron is no longer equal to $2A$; it is smaller by the factor $A/(H_{11}-H_{22})$, which we are now taking to be much less than one. Also, the dependence of $E_{\slI}-E_{\slII}$ on the separation of the two nuclei is much smaller than for the $\text{H}_2^+$ ion—it is also reduced by the factor $A/(H_{11}-H_{22})$. We can now see why the binding of unsymmetric diatomic molecules is generally very weak. In our theory of the $\text{H}_2^+$ ion we have discovered an explanation for the mechanism by which an electron shared by two protons provides, in effect, an attractive force between the two protons which can be present even when the protons are at large distances. The attractive force comes from the reduced energy of the system due to the possibility of the electron jumping from one proton to the other. In such a jump the system changes from the configuration (hydrogen atom, proton) to the configuration (proton, hydrogen atom), or switches back. We can write the process symbolically as \begin{equation*} (H,p)\rightleftharpoons (p,H). \end{equation*} The energy shift due to this process is proportional to the amplitude $A$ that an electron whose energy is $-W_H$ (its binding energy in the hydrogen atom) can get from one proton to the other. For large distances $R$ between the two protons, the electrostatic potential energy of the electron is nearly zero over most of the space it must go when it makes its jump. In this space, then, the electron moves nearly like a free particle in empty space—but with a negative energy! We have seen in Chapter 3 [Eq. 
(3.7)] that the amplitude for a particle of definite energy to get from one place to another a distance $r$ away is proportional to \begin{equation*} \frac{e^{(i/\hbar)pr}}{r}, \end{equation*} where $p$ is the momentum corresponding to the definite energy. In the present case (using the nonrelativistic formula), $p$ is given by \begin{equation} \label{Eq:III:10:9} \frac{p^2}{2m}=-W_H. \end{equation} This means that $p$ is an imaginary number, \begin{equation*} p=i\sqrt{2mW_H} \end{equation*} (the other sign for the radical gives nonsense here). We should expect, then, that the amplitude $A$ for the $\text{H}_2^+$ ion will vary as \begin{equation} \label{Eq:III:10:10} A\propto\frac{e^{-(\sqrt{2mW_H}/\hbar)R}}{R} \end{equation} for large separations $R$ between the two protons. The energy shift due to the electron binding is proportional to $A$, so there is a force pulling the two protons together which is proportional—for large $R$—to the derivative of (10.10) with respect to $R$. Finally, to be complete, we should remark that in the two-proton, one-electron system there is still one other effect which gives a dependence of the energy on $R$. We have neglected it until now because it is usually rather unimportant—the exception is just for those very large distances where the energy of the exchange term $A$ has decreased exponentially to very small values. The new effect we are thinking of is the electrostatic attraction of the proton for the hydrogen atom, which comes about in the same way any charged object attracts a neutral object. The bare proton makes an electric field $\Efield$ (varying as $1/R^2$) at the neutral hydrogen atom. The atom becomes polarized, taking on an induced dipole moment $\mu$ proportional to $\Efield$. The energy of the dipole is $\mu\Efield$, which is proportional to $\Efield^2$—or to $1/R^4$. So there is a term in the energy of the system which decreases with the fourth power of the distance. (It is a correction to $E_0$.) 
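The decay constant in (10.10), $\alpha=\sqrt{2mW_H}/\hbar$, is easy to evaluate; a quick sketch with rounded SI constants. Since $W_H=\hbar^2/2ma_0^2$, the result is just the inverse Bohr radius, so $A$ drops by a factor $e$ for every additional $0.53$ Å of separation.

```python
import math

m_e  = 9.109e-31          # electron mass, kg
W_H  = 13.6 * 1.602e-19   # hydrogen binding energy, J
hbar = 1.055e-34          # J s

alpha = math.sqrt(2 * m_e * W_H) / hbar   # decay constant, 1/m
print(alpha * 1e-10)      # ~1.9 per angstrom

# Since W_H = hbar^2 / (2 m a0^2), alpha is just 1/a0, the inverse Bohr radius:
a0 = 0.529e-10            # Bohr radius, m
print(alpha * a0)         # ~1.0
```

This rapid exponential falloff is why, at large enough $R$, the slowly decaying $1/R^4$ polarization energy is the only force left.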
This energy falls off with distance more slowly than the shift $A$ given by (10.10); at some large separation $R$ it becomes the only remaining important term giving a variation of energy with $R$—and, therefore, the only remaining force. Note that the electrostatic term has the same sign for both of the base states (the force is attractive, so the energy is negative) and so also for the two stationary states, whereas the electron exchange term $A$ gives opposite signs for the two stationary states. |
|
3 | 10 | Other Two-State Systems | 2 | Nuclear forces | We have seen that the system of a hydrogen atom and a proton has an energy of interaction due to the exchange of the single electron which varies at large separations $R$ as \begin{equation} \label{Eq:III:10:11} \frac{e^{-\alpha R}}{R}, \end{equation} with $\alpha=\sqrt{2mW_H}/\hbar$. (One usually says that there is an exchange of a “virtual” electron when—as here—the electron has to jump across a space where it would have a negative energy. More specifically, a “virtual exchange” means that the phenomenon involves a quantum mechanical interference between an exchanged state and a nonexchanged state.) Now we might ask the following question: Could it be that forces between other kinds of particles have an analogous origin? What about, for example, the nuclear force between a neutron and a proton, or between two protons? In an attempt to explain the nature of nuclear forces, Yukawa proposed that the force between two nucleons is due to a similar exchange effect—only, in this case, due to the virtual exchange, not of an electron, but of a new particle, which he called a “meson.” Today, we would identify Yukawa’s meson with the $\pi$-meson (or “pion”) produced in high-energy collisions of protons or other particles. Let’s see, as an example, what kind of a force we would expect from the exchange of a positive pion ($\pi^+$) of mass $m_\pi$ between a proton and a neutron. Just as a hydrogen atom H$^0$ can go into a proton p$^+$ by giving up an electron e$^-$, \begin{equation} \label{Eq:III:10:12} \text{H}^0\to\text{p}^++\text{e}^-, \end{equation} a proton p$^+$ can go into a neutron n$^0$ by giving up a $\pi^+$ meson: \begin{equation} \label{Eq:III:10:13} \text{p}^+\to\text{n}^0+\pi^+. \end{equation} So if we have a proton at $a$ and a neutron at $b$ separated by the distance $R$, the proton can become a neutron by emitting a $\pi^+$, which is then absorbed by the neutron at $b$, turning it into a proton. 
There is an energy of interaction of the two-nucleon (plus pion) system which depends on the amplitude $A$ for the pion exchange—just as we found for the electron exchange in the $\text{H}_2^+$ ion. In the process (10.12), the energy of the H$^0$ atom is less than that of the proton by $W_H$ (calculating nonrelativistically, and omitting the rest energy $mc^2$ of the electron), so the electron has a negative kinetic energy—or imaginary momentum—as in Eq. (10.9). In the nuclear process (10.13), the proton and neutron have almost equal masses, so the $\pi^+$ will have zero total energy. The relation between the total energy $E$ and the momentum $p$ for a pion of mass $m_\pi$ is \begin{equation*} E^2=p^2c^2+m_\pi^2c^4. \end{equation*} Since $E$ is zero (or at least negligible in comparison with $m_\pi c^2$), the momentum is again imaginary: \begin{equation*} p=im_\pi c. \end{equation*} Using the same arguments we gave for the amplitude that a bound electron would penetrate the barrier in the space between two protons, we get for the nuclear case an exchange amplitude $A$ which should—for large $R$—go as \begin{equation} \label{Eq:III:10:14} \frac{e^{-(m_\pi c/\hbar)R}}{R}. \end{equation} The interaction energy is proportional to $A$, and so varies in the same way. We get an energy variation in the form of the so-called Yukawa potential between two nucleons. Incidentally, we obtained this same formula earlier directly from the differential equation for the motion of a pion in free space [see Chapter 28, Vol. II, Eq. (28.18)]. We can, following the same line of argument, discuss the interaction between two protons (or between two neutrons) which results from the exchange of a neutral pion ($\pi^0$). The basic process is now \begin{equation} \label{Eq:III:10:15} \text{p}^+\to\text{p}^++\pi^0. \end{equation} A proton can emit a virtual $\pi^0$, but then it remains still a proton. If we have two protons, proton No.
$2$. At the end, we still have two protons. This is somewhat different from the $\text{H}_2^+$ ion. There the H$^0$ went into a different condition—the proton—after emitting the electron. Now we are assuming that a proton can emit a $\pi^0$ without changing its character. Such processes are, in fact, observed in high-energy collisions. The process is analogous to the way that an electron emits a photon and ends up still an electron: \begin{equation} \label{Eq:III:10:16} \text{e}\to\text{e}+\text{photon}. \end{equation} We do not “see” the photons inside the electrons before they are emitted or after they are absorbed, and their emission does not change the “nature” of the electron. Going back to the two protons, there is an interaction energy which arises from the amplitude $A$ that one proton emits a neutral pion which travels across (with imaginary momentum) to the other proton and is absorbed there. This amplitude is again proportional to (10.14), with $m_\pi$ the mass of the neutral pion. All the same arguments give an equal interaction energy for two neutrons. Since the nuclear forces (disregarding electrical effects) between neutron and proton, between proton and proton, between neutron and neutron are the same, we conclude that the masses of the charged and neutral pions should be the same. Experimentally, the masses are indeed very nearly equal, and the small difference is about what one would expect from electric self-energy corrections (see Chapter 28, Vol. II). There are other kinds of particles—like K-mesons—which can be exchanged between two nucleons. It is also possible for two pions to be exchanged at the same time. But all of these other exchanged “objects” have a rest mass $m_x$ higher than the pion mass $m_\pi$, and lead to terms in the exchange amplitude which vary as \begin{equation*} \frac{e^{-(m_xc/\hbar)R}}{R}. \end{equation*} These terms die out faster with increasing $R$ than the one-meson term. 
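The range set by (10.14) is the reduced Compton wavelength $\hbar/m_\pi c$ of the exchanged particle. A quick numerical sketch, using the rounded values $\hbar c\approx197.3$ MeV·fm, $m_\pi c^2\approx139.6$ MeV for the charged pion, and $m_Kc^2\approx493.7$ MeV for a K meson:

```python
hbar_c  = 197.3           # MeV fm
m_pi_c2 = 139.6           # charged pion rest energy, MeV
m_K_c2  = 493.7           # K meson rest energy, MeV

# The exchange amplitude falls as exp(-R / range), with range = hbar / (m c).
pion_range = hbar_c / m_pi_c2
kaon_range = hbar_c / m_K_c2

print(pion_range)   # ~1.4 fm: the observed range of nuclear forces
print(kaon_range)   # ~0.4 fm: heavier exchanged objects die out faster
```

The heavier the exchanged object, the shorter the distance over which its term in the exchange amplitude survives, which is why only the one-pion term matters at large $R$.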
No one knows, today, how to calculate these higher-mass terms, but for large enough values of $R$ only the one-pion term survives. And, indeed, those experiments which involve nuclear interactions only at large distances do show that the interaction energy is as predicted from the one-pion exchange theory. In the classical theory of electricity and magnetism, the Coulomb electrostatic interaction and the radiation of light by an accelerating charge are closely related—both come out of the Maxwell equations. We have seen in the quantum theory that light can be represented as the quantum excitations of the harmonic oscillations of the classical electromagnetic fields in a box. Alternatively, the quantum theory can be set up by describing light in terms of particles—photons—which obey Bose statistics. We emphasized in Section 4–5 that the two alternative points of view always give identical predictions. Can the second point of view be carried through completely to include all electromagnetic effects? In particular, if we want to describe the electromagnetic field purely in terms of Bose particles—that is, in terms of photons—what is the Coulomb force due to? From the “particle” point of view the Coulomb interaction between two electrons comes from the exchange of a virtual photon. One electron emits a photon—as in reaction (10.16)—which goes over to the second electron, where it is absorbed in the reverse of the same reaction. The interaction energy is again given by a formula like (10.14), but now with $m_\pi$ replaced by the rest mass of the photon—which is zero. So the virtual exchange of a photon between two electrons gives an interaction energy that varies simply inversely as $R$, the distance between the two electrons—just the normal Coulomb potential energy! In the “particle” theory of electromagnetism, the process of a virtual photon exchange gives rise to all the phenomena of electrostatics. |
|
3 | 10 | Other Two-State Systems | 3 | The hydrogen molecule | As our next two-state system we will look at the neutral hydrogen molecule H$_2$. It is, naturally, more complicated to understand because it has two electrons. Again, we start by thinking of what happens when the two protons are well separated. Only now we have two electrons to add. To keep track of them, we’ll call one of them “electron $a$” and the other “electron $b$.” We can again imagine two possible states. One possibility is that “electron $a$” is around the first proton and “electron $b$” is around the second, as shown in the top half of Fig. 10–4. We have simply two hydrogen atoms. We will call this state $\ketsl{\slOne}$. There is also another possibility: that “electron $b$” is around the first proton and that “electron $a$” is around the second. We call this state $\ketsl{\slTwo}$. From the symmetry of the situation, those two possibilities should be energetically equivalent, but, as we will see, the energy of the system is not just the energy of two hydrogen atoms. We should mention that there are many other possibilities. For instance, “electron $a$” might be near the first proton and “electron $b$” might be in another state around the same proton. We’ll disregard such a case, since it will certainly have higher energy (because of the large Coulomb repulsion between the two electrons). For greater accuracy, we would have to include such states, but we can get the essentials of the molecular binding by considering just the two states of Fig. 10–4. To this approximation we can describe any state by giving the amplitude $\braket{\slOne}{\phi}$ to be in the state $\ketsl{\slOne}$ and an amplitude $\braket{\slTwo}{\phi}$ to be in state $\ketsl{\slTwo}$. In other words, the state vector $\ket{\phi}$ can be written as the linear combination \begin{equation*} \ket{\phi}=\sum_i\ket{i}\braket{i}{\phi}. 
\end{equation*} To proceed, we assume—as usual—that there is some amplitude $A$ that the electrons can move through the intervening space and exchange places. This possibility of exchange means that the energy of the system is split, as we have seen for other two-state systems. As for the hydrogen molecular ion, the splitting is very small when the distance between the protons is large. As the protons approach each other, the amplitude for the electrons to go back and forth increases, so the splitting increases. The decrease of the lower energy state means that there is an attractive force which pulls the atoms together. Again the energy levels rise when the protons get very close together because of the Coulomb repulsion. The net final result is that the two stationary states have energies which vary with the separation as shown in Fig. 10–5. At a separation of about $0.74$ Å, the lower energy level reaches a minimum; this is the proton-proton distance of the true hydrogen molecule. Now you have probably been thinking of an objection. What about the fact that the two electrons are identical particles? We have been calling them “electron $a$” and “electron $b$,” but there really is no way to tell which is which. And we have said in Chapter 4 that for electrons—which are Fermi particles—if there are two ways something can happen by exchanging the electrons, the two amplitudes will interfere with a negative sign. This means that if we switch which electron is which, the sign of the amplitude must reverse. We have just concluded, however, that the bound state of the hydrogen molecule would be (at $t=0$) \begin{equation*} \ketsl{\slII}=\frac{1}{\sqrt{2}}\,(\ketsl{\slOne}+\ketsl{\slTwo}). \end{equation*} However, according to our rules of Chapter 4, this state is not allowed. If we reverse which electron is which, we get the state \begin{equation*} \frac{1}{\sqrt{2}}\,(\ketsl{\slTwo}+\ketsl{\slOne}), \end{equation*} which has the same sign instead of the opposite one.
These arguments are correct if both electrons have the same spin. It is true that if both electrons have spin up (or both have spin down), the only state that is permitted is \begin{equation*} \ketsl{\slI}=\frac{1}{\sqrt{2}}\,(\ketsl{\slOne}-\ketsl{\slTwo}). \end{equation*} For this state, an interchange of the two electrons gives \begin{equation*} \frac{1}{\sqrt{2}}\,(\ketsl{\slTwo}-\ketsl{\slOne}), \end{equation*} which is $-\ketsl{\slI}$, as required. So if we bring two hydrogen atoms near to each other with their electrons spinning in the same direction, they can go into the state $\ketsl{\slI}$ and not state $\ketsl{\slII}$. But notice that state $\ketsl{\slI}$ is the upper energy state. Its curve of energy versus separation has no minimum. The two hydrogens will always repel and will not form a molecule. So we conclude that the hydrogen molecule cannot exist with parallel electron spins. And that is right. On the other hand, our state $\ketsl{\slII}$ is perfectly symmetric for the two electrons. In fact, if we interchange which electron we call $a$ and which we call $b$ we get back exactly the same state. We saw in Section 4–7 that if two Fermi particles are in the same state, they must have opposite spins. So, the bound hydrogen molecule must have one electron with spin up and one with spin down. The whole story of the hydrogen molecule is really somewhat more complicated if we want to include the proton spins. It is then no longer right to think of the molecule as a two-state system. It should really be looked at as an eight-state system—there are four possible spin arrangements for each of our states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$—so we were cutting things a little short by neglecting the spins. Our final conclusions are, however, correct. We find that the lowest energy state—the only bound state—of the H$_2$ molecule has the two electrons with spins opposite. The total spin angular momentum of the electrons is zero.
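The sign rule can be checked mechanically. In the sketch below the two base states are written as basis vectors, and relabeling the electrons ($a\leftrightarrow b$) is represented by the operator that swaps $\ketsl{\slOne}$ and $\ketsl{\slTwo}$:

```python
import numpy as np

ket1 = np.array([1.0, 0.0])          # electron a on proton 1, b on proton 2
ket2 = np.array([0.0, 1.0])          # electron b on proton 1, a on proton 2

# Relabeling a <-> b exchanges the two base states:
swap = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

state_I  = (ket1 - ket2) / np.sqrt(2)    # antisymmetric: parallel spins
state_II = (ket1 + ket2) / np.sqrt(2)    # symmetric: requires opposite spins

print(swap @ state_I)     # = -state_I  (sign reverses, as Fermi statistics demand)
print(swap @ state_II)    # = +state_II (sign does not reverse)
```

Only the antisymmetric combination picks up the minus sign required for two parallel-spin electrons; the symmetric, bound combination is allowed only when the spatial symmetry is compensated by opposite spins.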
On the other hand, two nearby hydrogen atoms with spins parallel—and so with a total angular momentum $\hbar$—must be in a higher (unbound) energy state; the atoms repel each other. There is an interesting correlation between the spins and the energies. It gives another illustration of something we mentioned before, which is that there appears to be an “interaction” energy between two spins because the case of parallel spins has a higher energy than the opposite case. In a certain sense you could say that the spins try to reach an antiparallel condition and, in doing so, have the potential to liberate energy—not because there is a large magnetic force, but because of the exclusion principle. We saw in Section 10–1 that the binding of two different ions by a single electron is likely to be quite weak. This is not true for binding by two electrons. Suppose the two protons in Fig. 10–4 were replaced by any two ions (with closed inner electron shells and a single ionic charge), and that the binding energies of an electron at the two ions are different. The energies of states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ would still be equal because in each of these states we have one electron bound to each ion. Therefore, we always have the splitting proportional to $A$. Two-electron binding is ubiquitous—it is the most common valence bond. Chemical binding usually involves this flip-flop game played by two electrons. Although two atoms can be bound together by only one electron, it is relatively rare—because it requires just the right conditions. Finally, we want to mention that if the energy of attraction for an electron to one nucleus is much greater than to the other, then what we have said earlier about ignoring other possible states is no longer right. Suppose nucleus $a$ (or it may be a positive ion) has a much stronger attraction for an electron than does nucleus $b$. 
It may then happen that the total energy is still fairly low even when both electrons are at nucleus $a$, and no electron is at nucleus $b$. The strong attraction may more than compensate for the mutual repulsion of the two electrons. If it does, the lowest energy state may have a large amplitude to find both electrons at $a$ (making a negative ion) and a small amplitude to find any electron at $b$. The state looks like a negative ion with a positive ion. This is, in fact, what happens in an “ionic” molecule like NaCl. You can see that all the gradations between covalent binding and ionic binding are possible. You can now begin to see how it is that many of the facts of chemistry can be most clearly understood in terms of a quantum mechanical description. |
|
3 | 10 | Other Two-State Systems | 4 | The benzene molecule | Chemists have invented nice diagrams to represent complicated organic molecules. Now we are going to discuss one of the most interesting of them—the benzene molecule shown in Fig. 10–6. It contains six carbon and six hydrogen atoms in a symmetrical array. Each bar of the diagram represents a pair of electrons, with spins opposite, doing the covalent bond dance. Each hydrogen atom contributes one electron and each carbon atom contributes four electrons to make up the total of $30$ electrons involved. (There are two more electrons close to the nucleus of the carbon which form the first, or K, shell. These are not shown since they are so tightly bound that they are not appreciably involved in the covalent binding.) So each bar in the figure represents a bond, or pair of electrons, and the double bonds mean that there are two pairs of electrons between alternate pairs of carbon atoms. There is a mystery about this benzene molecule. We can calculate what energy should be required to form this chemical compound, because the chemists have measured the energies of various compounds which involve pieces of the ring—for instance, they know the energy of a double bond by studying ethylene, and so on. We can, therefore, calculate the total energy we should expect for the benzene molecule. The actual energy of the benzene ring, however, is much lower than we get by such a calculation; it is more tightly bound than we would expect from what is called an “unsaturated double bond system.” Usually a double bond system which is not in such a ring is easily attacked chemically because it has a relatively high energy—the double bonds can be easily broken by the addition of other hydrogens. But in benzene the ring is quite permanent and hard to break up. In other words, benzene has a much lower energy than you would calculate from the bond picture. Then there is another mystery. 
Suppose we replace two adjacent hydrogens by bromine atoms to make ortho-dibromobenzene. There are two ways to do this, as shown in Fig. 10–7. The bromines could be on the opposite ends of a double bond as shown in part (a) of the figure, or could be on the opposite ends of a single bond as in (b). One would think that ortho-dibromobenzene should have two different forms, but it doesn’t. There is only one such chemical.2 Now we want to resolve these mysteries—and perhaps you have already guessed how: by noticing, of course, that the “ground state” of the benzene ring is really a two-state system. We could imagine that the bonds in benzene could be in either of the two arrangements shown in Fig. 10–8. You say, “But they are really the same; they should have the same energy.” Indeed, they should. And for that reason they must be analyzed as a two-state system. Each state represents a different configuration of the whole set of electrons, and there is some amplitude $A$ that the whole bunch can switch from one arrangement to the other—there is a chance that the electrons can flip from one dance to the other. As we have seen, this chance of flipping makes a mixed state whose energy is lower than you would calculate by looking separately at either of the two pictures in Fig. 10–8. Instead, there are two stationary states—one with an energy above and one with an energy below the expected value. So actually, the true normal state (lowest energy) of benzene is neither of the possibilities shown in Fig. 10–8, but it has the amplitude $1/\sqrt{2}$ to be in each of the states shown. It is the only state that is involved in the chemistry of benzene at normal temperatures. Incidentally, the upper state also exists; we can tell it is there because benzene has a strong absorption for ultraviolet light at the frequency $\omega=(E_{\slI}-E_{\slII})/\hbar$. 
You will remember that in ammonia, where the object flipping back and forth was three protons, the energy separation was in the microwave region. In benzene, the objects are electrons, and because they are much lighter, they find it easier to flip back and forth, which makes the coefficient $A$ very much larger. The result is that the energy difference is much larger—about $3$ eV, which is the energy of an ultraviolet photon.3 What happens if we substitute bromine? Again the two “possibilities” (a) and (b) in Fig. 10–7 represent the two different electron configurations. The only difference is that the two base states we start with would have slightly different energies. The lowest energy stationary state will still involve a linear combination of the two states, but with unequal amplitudes. The amplitude for state $\ketsl{\slOne}$ might have a value something like $\sqrt{2/3}$, say, whereas state $\ketsl{\slTwo}$ would have the magnitude $\sqrt{1/3}$. We can’t say for sure without more information, but once the two energies $H_{11}$ and $H_{22}$ are no longer equal, then the amplitudes $C_1$ and $C_2$ no longer have equal magnitudes. This means, of course, that one of the two possibilities in the figure is more likely than the other, but the electrons are mobile enough so that there is some amplitude for both. The other state has different amplitudes (like $\sqrt{1/3}$ and $-\sqrt{2/3}$) but lies at a higher energy. There is only one lowest state, not two as the naive theory of fixed chemical bonds would suggest. |
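The two-state arithmetic described here is easy to carry out explicitly. The sketch below uses the general energy formula quoted in the text as Eq. (10.3) and the amplitude-ratio formula Eq. (10.5); the numerical values of $E_0$, $A$, and the $H_{11}$–$H_{22}$ offset are illustrative stand-ins, not measured molecular energies:

```python
import math

def two_state_levels(H11, H22, H12):
    """Stationary energies of a two-state system with real matrix
    elements (H21 = H12), from the general formula quoted as Eq. (10.3),
    plus the amplitude ratio a1/a2 of the lower state from Eq. (10.5)."""
    avg = 0.5 * (H11 + H22)
    split = math.sqrt((0.5 * (H11 - H22)) ** 2 + H12 ** 2)
    E_lower, E_upper = avg - split, avg + split
    ratio = H12 / (E_lower - H11)      # a1/a2 for the lower state
    return E_lower, E_upper, ratio

# Symmetric (benzene-like) case: both bond pictures have the same
# energy E0; the flip-flop amplitude A couples them.
E0, A = -1.0, 0.5                      # illustrative numbers, not data
lo, hi, r = two_state_levels(E0, E0, -A)
# lo = E0 - A: the true ground state lies BELOW either bond picture,
# which is just the extra binding of the benzene ring.  And r = 1:
# equal amplitudes for the two pictures.

# Asymmetric (dibromobenzene-like) case: H11 != H22, so the amplitudes
# are unequal -- one bond arrangement is more probable -- but there is
# still only one lowest state, not two.
lo2, hi2, r2 = two_state_levels(E0 - 0.2, E0, -A)
```

Note that in the asymmetric case the lower level drops below both $H_{11}$ and $H_{22}$: the mixing always lowers the ground state.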
|
3 | 10 | Other Two-State Systems | 5 | Dyes | We will give you one more chemical example of the two-state phenomenon—this time on a larger molecular scale. It has to do with the theory of dyes. Many dyes—in fact, most artificial dyes—have an interesting characteristic; they have a kind of symmetry. Figure 10–9 shows an ion of a particular dye called magenta, which has a purplish red color. The molecule has three ring structures—two of which are benzene rings. The third is not exactly the same as a benzene ring because it has only two double bonds inside the ring. The figure shows two equally satisfactory pictures, and we would guess that they should have equal energies. But there is a certain amplitude that all the electrons can flip from one condition to the other, shifting the position of the “unfilled” position to the opposite end. With so many electrons involved, the flipping amplitude is somewhat lower than it is in the case of benzene, and the difference in energy between the two stationary states is smaller. There are, nevertheless, the usual two stationary states $\ketsl{\slI}$ and $\ketsl{\slII}$ which are the sum and difference combinations of the two base states shown in the figure. The energy separation of $\ketsl{\slI}$ and $\ketsl{\slII}$ comes out to be equal to the energy of a photon in the optical region. If one shines light on the molecule, there is a very strong absorption at one frequency, and it appears to be brightly colored. That’s why it’s a dye! Another interesting feature of such a dye molecule is that in the two base states shown, the center of electric charge is located at different places. As a result, the molecule should be strongly affected by an external electric field. We had a similar effect in the ammonia molecule. Evidently we can analyze it by using exactly the same mathematics, provided we know the numbers $E_0$ and $A$. Generally, these are obtained by gathering experimental data. 
If one makes measurements with many dyes, it is often possible to guess what will happen with some related dye molecule. Because of the large shift in the position of the center of electric charge the value of $\mu$ in formula (9.55) is large and the material has a high probability for absorbing light of the characteristic frequency $2A/\hbar$. Therefore, it is not only colored but very strongly so—a small amount of substance absorbs a lot of light. The rate of flipping—and, therefore, $A$—is very sensitive to the complete structure of the molecule. By changing $A$, the energy splitting, and with it the color of the dye, can be changed. Also, the molecules do not have to be perfectly symmetrical. We have seen that the same basic phenomenon exists with slight modifications, even if there is some small asymmetry present. So, one can get some modification of the colors by introducing slight asymmetries in the molecules. For example, another important dye, malachite green, is very similar to magenta, but has two of the hydrogens replaced by CH$_3$. It’s a different color because the $A$ is shifted and the flip-flop rate is changed. |
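The connection between the splitting and the color can be put in numbers. Assuming a splitting $2A$ of roughly an electron volt or two (the particular $A$ values below are illustrative guesses, not data for magenta or malachite green), the absorbed wavelength $\lambda = hc/2A$ lands in the visible:

```python
# hc in convenient units: about 1239.84 eV*nm.
HC_EV_NM = 1239.84

def absorption_wavelength_nm(A_eV):
    """Wavelength of the photon absorbed at hbar*omega = 2A."""
    return HC_EV_NM / (2.0 * A_eV)

# A flip-flop amplitude of about an eV puts the absorption in the
# visible (these A values are illustrative, not measured dye numbers):
lam = absorption_wavelength_nm(1.1)    # roughly 560 nm: green light is
                                       # absorbed, which is what makes a
                                       # dye look magenta (white minus green)
lam2 = absorption_wavelength_nm(1.0)   # a smaller A shifts the
                                       # absorption toward the red
```

This is the sense in which changing $A$ changes the color: shift the flip-flop rate and the absorbed wavelength moves.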
|
3 | 10 | Other Two-State Systems | 6 | The Hamiltonian of a spin one-half particle in a magnetic field | Now we would like to discuss a two-state system involving an object of spin one-half. Some of what we will say has been covered in earlier chapters, but doing it again may help to make some of the puzzling points a little clearer. We can think of an electron at rest as a two-state system. Although we will be talking in this section about “an electron,” what we find out will be true for any spin one-half particle. Suppose we choose for our base states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ the states in which the $z$-component of the electron spin is $+\hbar/2$ and $-\hbar/2$. These states are, of course, the same ones we have called $(+)$ and $(-)$ in earlier chapters. To keep the notation of this chapter consistent, though, we call the “plus” spin state $\ketsl{\slOne}$ and the “minus” spin state $\ketsl{\slTwo}$—where “plus” and “minus” refer to the angular momentum in the $z$-direction. Any possible state $\psi$ for the electron can be described as in Eq. (10.1) by giving the amplitude $C_1$ that the electron is in state $\ketsl{\slOne}$, and the amplitude $C_2$ that it is in state $\ketsl{\slTwo}$. To treat this problem, we will need to know the Hamiltonian for this two-state system—that is, for an electron in a magnetic field. We begin with the special case of a magnetic field in the $z$-direction. Suppose that the vector $\FLPB$ has only a $z$-component $B_z$. From the definition of the two base states (that is, spins parallel and antiparallel to $\FLPB$) we know that they are already stationary states with a definite energy in the magnetic field. State $\ketsl{\slOne}$ corresponds to an energy4 equal to $-\mu B_z$ and state $\ketsl{\slTwo}$ to $+\mu B_z$. 
The Hamiltonian must be very simple in this case since $C_1$, the amplitude to be in state $\ketsl{\slOne}$, is not affected by $C_2$, and vice versa: \begin{equation} \begin{aligned} i\hbar\,\ddt{C_1}{t}&=E_1C_1=-\mu B_zC_1,\\[2ex] i\hbar\,\ddt{C_2}{t}&=E_2C_2=+\mu B_zC_2. \end{aligned} \label{Eq:III:10:17} \end{equation} For this special case, the Hamiltonian is \begin{alignat}{2} H_{11}&=-\mu B_z,&\quad H_{12}&=0,\notag\\[2ex] \label{Eq:III:10:18} H_{21}&=0,&\quad H_{22}&=+\mu B_z. \end{alignat} So we know what the Hamiltonian is for the magnetic field in the $z$-direction, and we know the energies of the stationary states. Now suppose the field is not in the $z$-direction. What is the Hamiltonian? How are the matrix elements changed if the field is not in the $z$-direction? We are going to make an assumption that there is a kind of superposition principle for the terms of the Hamiltonian. More specifically, we want to assume that if two magnetic fields are superposed, the terms in the Hamiltonian simply add—if we know the $H_{ij}$ for a pure $B_z$ and we know the $H_{ij}$ for a pure $B_x$, then the $H_{ij}$ for both $B_z$ and $B_x$ together is simply the sum. This is certainly true if we consider only fields in the $z$-direction—if we double $B_z$, then all the $H_{ij}$ are doubled. So let’s assume that $H$ is linear in the field $\FLPB$. That’s all we need to be able to find the $H_{ij}$ for any magnetic field. Suppose we have a constant field $\FLPB$. We could have chosen our $z$-axis in its direction, and we would have found two stationary states with the energies $\mp\mu B$. Just choosing our axes in a different direction won’t change the physics. Our description of the stationary states will be different, but their energies will still be $\mp\mu B$—that is, \begin{equation} \begin{aligned} E_{\slI}&=-\mu\sqrt{B_x^2+B_y^2+B_z^2},\\[1ex] E_{\slII}&=+\mu\sqrt{B_x^2+B_y^2+B_z^2}. \end{aligned} \label{Eq:III:10:19} \end{equation} The rest of the game is easy. 
We have here the formulas for the energies. We want a Hamiltonian which is linear in $B_x$, $B_y$, and $B_z$, and which will give these energies when used in our general formula of Eq. (10.3). The problem: find the Hamiltonian. First, notice that the energy splitting is symmetric, with an average value of zero. Looking at Eq. (10.3), we can see directly that that requires \begin{equation*} H_{22}=-H_{11}. \end{equation*} (Note that this checks with what we already know when $B_x$ and $B_y$ are both zero; in that case $H_{11}=-\mu B_z$, and $H_{22}=\mu B_z$.) Now if we equate the energies of Eq. (10.3) with what we know from Eq. (10.19), we have \begin{equation} \label{Eq:III:10:20} \biggl(\!\frac{H_{11}\!-H_{22}}{2}\!\biggr)^2\!\!\!+\abs{H_{12}}^2= \mu^2(B_x^2\!+\!B_y^2\!+\!B_z^2). \end{equation} (We have also made use of the fact that $H_{21}=H_{12}\cconj$, so that $H_{12}H_{21}$ can also be written as $\abs{H_{12}}^2$.) Again for the special case of a field in the $z$-direction, this gives \begin{equation*} \mu^2B_z^2+\abs{H_{12}}^2=\mu^2B_z^2. \end{equation*} Clearly, $\abs{H_{12}}$ must be zero in this special case, which means that $H_{12}$ cannot have any terms in $B_z$. (Remember, we have said that all terms must be linear in $B_x$, $B_y$, and $B_z$.) So far, then, we have discovered that $H_{11}$ and $H_{22}$ have terms in $B_z$, while $H_{12}$ and $H_{21}$ do not. We can make a simple guess that will satisfy Eq. (10.20) if we say that \begin{align*} H_{11} &=-\mu B_z,\notag\\[2ex] H_{22} &=\mu B_z, \end{align*} and \begin{equation} \label{Eq:III:10:21} \quad\quad\abs{H_{12}}^2 =\mu^2(B_x^2+B_y^2). \end{equation} And it turns out that that’s the only way it can be done! “Wait”—you say—“$H_{12}$ is not linear in $B$; Eq. (10.21) gives $H_{12}=$$\mu\sqrt{B_x^2+B_y^2}$.” Not necessarily. There is another possibility which is linear, namely, \begin{equation*} H_{12}=\mu(B_x+iB_y). 
\end{equation*} There are, in fact, several such possibilities—most generally, we could write \begin{equation*} H_{12}=\mu(B_x\pm iB_y)e^{i\delta}, \end{equation*} where $\delta$ is some arbitrary phase. Which sign and phase should we use? It turns out that you can choose either sign, and any phase you want, and the physical results will always be the same. So the choice is a matter of convention. People ahead of us have chosen to use the minus sign and to take $e^{i\delta}=-1$. We might as well follow suit and write \begin{equation*} H_{12}=-\mu(B_x-iB_y),\quad H_{21}=-\mu(B_x+iB_y). \end{equation*} (Incidentally, these conventions are related to, and consistent with, some of the arbitrary choices we made in Chapter 6.) The complete Hamiltonian for an electron in an arbitrary magnetic field is, then \begin{equation} \begin{alignedat}{2} H_{11}&=-\mu B_z,&\quad H_{12}&=-\mu(B_x-iB_y),\\[1ex] H_{21}&=-\mu(B_x+iB_y),&\quad H_{22}&=+\mu B_z. \end{alignedat} \label{Eq:III:10:22} \end{equation} And the equations for the amplitudes $C_1$ and $C_2$ are \begin{equation} \begin{aligned} i\hbar\,\ddt{C_1}{t}&=-\mu[B_zC_1+(B_x-iB_y)C_2],\\[1ex] i\hbar\,\ddt{C_2}{t}&=-\mu[(B_x+iB_y)C_1-B_zC_2]. \end{aligned} \label{Eq:III:10:23} \end{equation} So we have discovered the “equations of motion for the spin states” of an electron in a magnetic field. We guessed at them by making some physical argument, but the real test of any Hamiltonian is that it should give predictions in agreement with experiment. According to any tests that have been made, these equations are right. In fact, although we made our arguments only for constant fields, the Hamiltonian we have written is also right for magnetic fields which vary with time. So we can now use Eq. (10.23) to look at all kinds of interesting problems. |
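One quick consistency check on Eq. (10.22): whatever the direction of the field, the two stationary energies must come out $\mp\mu B$ as in Eq. (10.19). A short numerical sketch (the field components are arbitrary illustrative values in arbitrary units):

```python
import cmath, math

def spin_hamiltonian(mu, Bx, By, Bz):
    """The 2x2 Hamiltonian of Eq. (10.22) as a nested list."""
    return [[-mu * Bz,             -mu * (Bx - 1j * By)],
            [-mu * (Bx + 1j * By), +mu * Bz]]

def eigenvalues_2x2(M):
    """Roots of the characteristic polynomial of a 2x2 matrix."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

# Pick an arbitrary field direction and check that the stationary
# energies are -mu*B and +mu*B with B = |B|, as Eq. (10.19) requires.
mu, Bx, By, Bz = 1.0, 0.3, -0.4, 1.2
E1, E2 = eigenvalues_2x2(spin_hamiltonian(mu, Bx, By, Bz))
B = math.sqrt(Bx**2 + By**2 + Bz**2)      # here B = 1.3
assert abs(E1 - (-mu * B)) < 1e-12
assert abs(E2 - (+mu * B)) < 1e-12
```

Since the matrix has zero trace, the two eigenvalues are forced to be equal and opposite; the off-diagonal choice $-\mu(B_x \mp iB_y)$ then fixes their magnitude at $\mu B$.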
|
3 | 10 | Other Two-State Systems | 7 | The spinning electron in a magnetic field | Example number one: We start with a constant field in the $z$-direction. There are just the two stationary states with energies $\mp\mu B_z$. Suppose we add a small field in the $x$-direction. Then the equations look like our old two-state problem. We get the flip-flop business once more, and the energy levels are split a little farther apart. Now let’s let the $x$-component of the field vary with time—say, as $\cos\omega t$. The equations are then the same as we had when we put an oscillating electric field on the ammonia molecule in Chapter 9. You can work out the details in the same way. You will get the result that the oscillating field causes transitions from the $+z$-state to the $-z$-state—and vice versa—when the horizontal field oscillates near the resonant frequency $\omega_0=2\mu B_z/\hbar$. This gives the quantum mechanical theory of the magnetic resonance phenomena we described in Chapter 35 of Volume II. It is also possible to make a maser which uses a spin one-half system. A Stern-Gerlach apparatus is used to produce a beam of particles polarized in, say, the $+z$-direction, which are sent into a cavity in a constant magnetic field. The oscillating fields in the cavity can couple with the magnetic moment and induce transitions which give energy to the cavity. Now let’s look at the following question. Suppose we have a magnetic field $\FLPB$ which points in the direction whose polar angle is $\theta$ and azimuthal angle is $\phi$, as in Fig. 10–10. Suppose, additionally, that there is an electron which has been prepared with its spin pointing along this field. What are the amplitudes $C_1$ and $C_2$ for such an electron? 
In other words, calling the state of the electron $\ket{\psi}$, we want to write \begin{equation*} \ket{\psi}=\ketsl{\slOne}C_1+\ketsl{\slTwo}C_2, \end{equation*} where $C_1$ and $C_2$ are \begin{equation*} C_1=\braket{\slOne}{\psi},\quad C_2=\braket{\slTwo}{\psi}, \end{equation*} where by $\ketsl{\slOne}$ and $\ketsl{\slTwo}$ we mean the same thing we used to call $\ket{+}$ and $\ket{-}$ (referred to our chosen $z$-axis). The answer to this question is also in our general equations for two-state systems. First, we know that since the electron’s spin is parallel to $\FLPB$ it is in a stationary state with energy $E_{\slI}=-\mu B$. Therefore, both $C_1$ and $C_2$ must vary as $e^{-iE_{\slI}t/\hbar}$, as in (9.18); and their coefficients $a_1$ and $a_2$ are given by (10.5), namely, \begin{equation} \label{Eq:III:10:24} \frac{a_1}{a_2}=\frac{H_{12}}{E_{\slI}-H_{11}}. \end{equation} An additional condition is that $a_1$ and $a_2$ should be normalized so that $\abs{a_1}^2+\abs{a_2}^2=1$. We can take $H_{11}$ and $H_{12}$ from (10.22) using
\begin{align*} B_z&=B\cos\theta,\\[.6ex] B_x&=B\sin\theta\cos\phi,\\[.6ex] B_y&=B\sin\theta\sin\phi. \end{align*} So we have \begin{equation} \begin{aligned} H_{11}&=-\mu B\cos\theta,\\[1ex] H_{12}&=-\mu B\sin\theta\,(\cos\phi-i\sin\phi). \end{aligned} \label{Eq:III:10:25} \end{equation} The last factor in the second equation is, incidentally, $e^{-i\phi}$, so it is simpler to write \begin{equation} \label{Eq:III:10:26} H_{12}=-\mu B\sin\theta\,e^{-i\phi}. \end{equation} Using these matrix elements in Eq. (10.24)—and canceling $-\mu B$ from numerator and denominator—we find \begin{equation} \label{Eq:III:10:27} \frac{a_1}{a_2}=\frac{\sin\theta\,e^{-i\phi}}{1-\cos\theta}. \end{equation} With this ratio and the normalization condition, we can find both $a_1$ and $a_2$. That’s not hard, but we can make a short cut with a little trick. Notice that $1-\cos\theta=2\sin^2\,(\theta/2)$, and that $\sin\theta=2\sin\,(\theta/2)\cos\,(\theta/2)$. Then Eq. (10.27) is equivalent to \begin{equation} \label{Eq:III:10:28} \frac{a_1}{a_2}=\frac{\cos\dfrac{\theta}{2}\,e^{-i\phi}} {\sin\dfrac{\theta}{2}}. \end{equation} So one possible answer is \begin{equation} \label{Eq:III:10:29} a_1=\cos\frac{\theta}{2}\,e^{-i\phi},\quad a_2=\sin\frac{\theta}{2}, \end{equation} since it fits with (10.28) and also makes \begin{equation*} \abs{a_1}^2+\abs{a_2}^2=1. \end{equation*} As you know, multiplying both $a_1$ and $a_2$ by an arbitrary phase factor doesn’t change anything. People generally prefer to make Eqs. (10.29) more symmetric by multiplying both by $e^{i\phi/2}$. So the form usually used is \begin{equation} \label{Eq:III:10:30} a_1=\cos\frac{\theta}{2}\,e^{-i\phi/2},\quad a_2=\sin\frac{\theta}{2}\,e^{+i\phi/2}, \end{equation} and this is the answer to our question. The numbers $a_1$ and $a_2$ are the amplitudes to find an electron with its spin up or down along the $z$-axis when we know that its spin is along the axis at $\theta$ and $\phi$. 
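The amplitudes of Eq. (10.30) are easy to check numerically: they should be normalized, and they should form an eigenvector of the Hamiltonian (10.22) with the lower energy $-\mu B$. Here is a short sketch in Python with arbitrarily chosen angles (the particular values of $\mu$, $B$, $\theta$, and $\phi$ are illustrative, not taken from the text):

```python
import cmath, math

def spin_up_along(theta, phi):
    """Amplitudes of Eq. (10.30) for spin 'up' along the (theta, phi)
    direction, referred to the z-axis base states."""
    a1 = math.cos(theta / 2) * cmath.exp(-1j * phi / 2)
    a2 = math.sin(theta / 2) * cmath.exp(+1j * phi / 2)
    return a1, a2

# Check: (a1, a2) is a normalized eigenvector of the Hamiltonian
# (10.22) with the lower energy -mu*B.
mu, B, theta, phi = 1.0, 2.0, 0.7, 1.9      # arbitrary illustrative values
Bx = B * math.sin(theta) * math.cos(phi)
By = B * math.sin(theta) * math.sin(phi)
Bz = B * math.cos(theta)
H11, H12 = -mu * Bz, -mu * (Bx - 1j * By)
H21, H22 = -mu * (Bx + 1j * By), +mu * Bz

a1, a2 = spin_up_along(theta, phi)
assert abs(abs(a1)**2 + abs(a2)**2 - 1) < 1e-12           # normalized
assert abs(H11 * a1 + H12 * a2 - (-mu * B) * a1) < 1e-12  # H a = -mu*B a
assert abs(H21 * a1 + H22 * a2 - (-mu * B) * a2) < 1e-12
```

The same half-angle identities used in the text turn the ratio $a_1/a_2$ from this code into the form of Eq. (10.27).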
(The amplitudes $C_1$ and $C_2$ are just $a_1$ and $a_2$ times $e^{-iE_{\slI}t/\hbar}$.) Now we notice an interesting thing. The strength $B$ of the magnetic field does not appear anywhere in (10.30). The result is clearly the same in the limit that $B$ goes to zero. This means that we have answered in general the question of how to represent a particle whose spin is along an arbitrary axis. The amplitudes of (10.30) are the projection amplitudes for spin one-half particles corresponding to the projection amplitudes we gave in Chapter 5 [Eqs. (5.38)] for spin-one particles. We can now find the amplitudes for filtered beams of spin one-half particles to go through any particular Stern-Gerlach filter. Let $\ket{+z}$ represent a state with spin up along the $z$-axis, and $\ket{-z}$ represent the spin down state. If $\ket{+z'}$ represents a state with spin up along a $z'$-axis which makes the polar angles $\theta$ and $\phi$ with the $z$-axis, then in the notation of Chapter 5, we have
\begin{equation} \begin{aligned} \braket{+z}{+z'}&=\cos\frac{\theta}{2}\,e^{-i\phi/2},\\[1.5ex] \braket{-z}{+z'}&=\sin\frac{\theta}{2}\,e^{+i\phi/2}. \end{aligned} \label{Eq:III:10:31} \end{equation} These results are equivalent to what we found in Chapter 6, Eq. (6.36), by purely geometrical arguments. (So if you decided to skip Chapter 6, you now have the essential results anyway.) As our final example let’s look again at one which we’ve already mentioned a number of times. Suppose that we consider the following problem. We start with an electron whose spin is in some given direction, then turn on a magnetic field in the $z$-direction for $25$ minutes, and then turn it off. What is the final state? Again let’s represent the state by the linear combination $\ket{\psi}=\ketsl{\slOne}C_1+\ketsl{\slTwo}C_2$. For this problem, however, the states of definite energy are also our base states $\ketsl{\slOne}$ and $\ketsl{\slTwo}$. So $C_1$ and $C_2$ only vary in phase. We know that \begin{equation*} C_1(t)=C_1(0)e^{-iE_{\slI}t/\hbar}=C_1(0)e^{+i\mu Bt/\hbar}, \end{equation*} and \begin{equation*} C_2(t)=C_2(0)e^{-iE_{\slII}t/\hbar}=C_2(0)e^{-i\mu Bt/\hbar}. \end{equation*} Now initially we said the electron spin was set in a given direction. That means that initially $C_1$ and $C_2$ are two numbers given by Eqs. (10.30). After we wait for a period of time $T$, the new $C_1$ and $C_2$ are the same two numbers multiplied respectively by $e^{i\mu B_zT/\hbar}$ and $e^{-i\mu B_zT/\hbar}$. What state is that? That’s easy. It’s exactly the same as if the angle $\phi$ had been changed by the subtraction of $2\mu B_zT/\hbar$ and the angle $\theta$ had been left unchanged. That means that at the end of the time $T$, the state $\ket{\psi}$ represents an electron lined up in a direction which differs from the original direction only by a rotation about the $z$-axis through the angle $\Delta\phi=2\mu B_zT/\hbar$.
Since this angle is proportional to $T$, we can also say the direction of the spin precesses at the angular velocity $2\mu B_z/\hbar$ around the $z$-axis. This result we discussed several times previously in a less complete and rigorous manner. Now we have obtained a complete and accurate quantum mechanical description of the precession of atomic magnets. It is interesting that the mathematical ideas we have just gone over for the spinning electron in a magnetic field can be applied to any two-state system. That means that by making a mathematical analogy to the spinning electron, any problem about two-state systems can be solved by pure geometry. It works like this. First you shift the zero of energy so that $(H_{11}+H_{22})$ is equal to zero so that $H_{11}=-H_{22}$. Then any two-state problem is formally the same as the electron in a magnetic field. All you have to do is identify $-\mu B_z$ with $H_{11}$ and $-\mu(B_x-iB_y)$ with $H_{12}$. No matter what the physics is originally—an ammonia molecule, or whatever—you can translate it into a corresponding electron problem. So if we can solve the electron problem in general, we have solved all two-state problems. And we have the general solution for the electron! Suppose you have some state to start with that has spin “up” in some direction, and you have a magnetic field $\FLPB$ that points in some other direction. You just rotate the spin direction around the axis of $\FLPB$ with the vector angular velocity $\FLPomega(t)$ equal to a constant times the vector $\FLPB$ (namely $\FLPomega=2\mu\FLPB/\hbar$). As $\FLPB$ varies with time, you keep moving the axis of the rotation to keep it parallel with $\FLPB$, and keep changing the speed of rotation so that it is always proportional to the strength of $\FLPB$. See Fig. 10–11. 
If you keep doing this, you will end up with a certain final orientation of the spin axis, and the amplitudes $C_1$ and $C_2$ are just given by the projections—using (10.30)—into your coordinate frame. You see, it’s just a geometric problem to keep track of where you end up after all the rotating. Although it’s easy to see what’s involved, this geometric problem (of finding the net result of a rotation with a varying angular velocity vector) is not easy to solve explicitly in the general case. Anyway, we see, in principle, the general solution to any two-state problem. In the next chapter we will look some more into the mathematical techniques for handling the important case of a spin one-half particle—and, therefore, for handling two-state systems in general. |
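For the special case of a constant field along $z$, the precession statement can be checked by integrating Eq. (10.23) directly. Below is a sketch in units where $\hbar = 1$, using a standard fourth-order Runge–Kutta step; the field strength, angles, time, and step count are arbitrary choices:

```python
import cmath, math

HBAR = 1.0   # work in units where hbar = 1

def deriv(C, mu, Bx, By, Bz):
    """Right-hand side of Eq. (10.23): dC/dt = -(i/hbar) H C."""
    C1, C2 = C
    d1 = (1j * mu / HBAR) * (Bz * C1 + (Bx - 1j * By) * C2)
    d2 = (1j * mu / HBAR) * ((Bx + 1j * By) * C1 - Bz * C2)
    return d1, d2

def rk4_evolve(C, T, steps, mu, Bx, By, Bz):
    """Fourth-order Runge-Kutta integration of the spin equations."""
    dt = T / steps
    for _ in range(steps):
        k1 = deriv(C, mu, Bx, By, Bz)
        k2 = deriv((C[0] + dt/2*k1[0], C[1] + dt/2*k1[1]), mu, Bx, By, Bz)
        k3 = deriv((C[0] + dt/2*k2[0], C[1] + dt/2*k2[1]), mu, Bx, By, Bz)
        k4 = deriv((C[0] + dt*k3[0], C[1] + dt*k3[1]), mu, Bx, By, Bz)
        C = (C[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             C[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return C

# Amplitudes of Eq. (10.30) for spin up along (theta, phi):
a = lambda th, ph: (math.cos(th/2) * cmath.exp(-1j*ph/2),
                    math.sin(th/2) * cmath.exp(+1j*ph/2))

# Start with spin up along (theta, phi); evolve in a field along z.
mu, Bz, theta, phi, T = 1.0, 0.8, 0.6, 0.4, 2.0
C_final = rk4_evolve(a(theta, phi), T, 4000, mu, 0.0, 0.0, Bz)

# The result is the same spin direction rotated about z by
# delta_phi = 2*mu*Bz*T/hbar: precession at angular velocity 2*mu*Bz/hbar.
C_expected = a(theta, phi - 2 * mu * Bz * T / HBAR)
assert abs(C_final[0] - C_expected[0]) < 1e-6
assert abs(C_final[1] - C_expected[1]) < 1e-6
```

The same integrator can be fed a time-varying $B_x(t)$ to explore the resonance transitions mentioned above, though for a varying field the simple rotation picture must be tracked step by step.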
|
3 | 11 | More Two-State Systems | 1 | The Pauli spin matrices | We continue our discussion of two-state systems. At the end of the last chapter we were talking about a spin one-half particle in a magnetic field. We described the spin state by giving the amplitude $C_1$ that the $z$-component of spin angular momentum is $+\hbar/2$ and the amplitude $C_2$ that it is $-\hbar/2$. In earlier chapters we have called these base states $\ket{+}$ and $\ket{-}$. We will now go back to that notation, although we may occasionally find it convenient to use $\ket{+}$ or $\ketsl{\slOne}$, and $\ket{-}$ or $\ketsl{\slTwo}$, interchangeably. We saw in the last chapter that when a spin one-half particle with a magnetic moment $\mu$ is in a magnetic field $\FLPB=(B_x,B_y,B_z)$, the amplitudes $C_+$ ($=C_1$) and $C_-$ ($=C_2$) are connected by the following differential equations: \begin{equation} \begin{aligned} i\hbar\,\ddt{C_+}{t}&=-\mu[B_zC_+\!+(B_x\!-iB_y)C_-],\\[2ex] i\hbar\,\ddt{C_-}{t}&=-\mu[(B_x\!+iB_y)C_+\!-\!B_zC_-]. \end{aligned} \label{Eq:III:11:1} \end{equation} In other words, the Hamiltonian matrix $H_{ij}$ is \begin{equation} \begin{alignedat}{2} H_{11}&=\!-\mu B_z,&\quad H_{12}&=\!-\mu(B_x\!-iB_y),\\[1ex] H_{21}&=\!-\mu(B_x\!+iB_y),&\quad H_{22}&=\!+\mu B_z. \end{alignedat} \label{Eq:III:11:2} \end{equation} And Eqs. (11.1) are, of course, the same as \begin{equation} \label{Eq:III:11:3} i\hbar\,\ddt{C_i}{t}=\sum_jH_{ij}C_{j}, \end{equation} where $i$ and $j$ take on the values $+$ and $-$ (or $1$ and $2$). The two-state system of the electron spin is so important that it is very useful to have a neater way of writing things. We will now make a little mathematical digression to show you how people usually write the equations of a two-state system. 
It is done this way: First, note that each term in the Hamiltonian is proportional to $\mu$ and to some component of $\FLPB$; we can then—purely formally—write that \begin{equation} \label{Eq:III:11:4} H_{ij}=-\mu[\sigma_{ij}^xB_x+\sigma_{ij}^yB_y+\sigma_{ij}^zB_z]. \end{equation} There is no new physics here; this equation just means that the coefficients $\sigma_{ij}^x$, $\sigma_{ij}^y$, and $\sigma_{ij}^z$—there are $4\times3=12$ of them—can be figured out so that (11.4) is identical with (11.2). Let’s see what they have to be. We start with $B_z$. Since $B_z$ appears only in $H_{11}$ and $H_{22}$, everything will be O.K. if \begin{alignat*}{2} \sigma_{11}^z&=1,&\quad \sigma_{12}^z&=0,\\[2ex] \sigma_{21}^z&=0,&\quad \sigma_{22}^z&=-1. \end{alignat*} We often write the matrix $H_{ij}$ as a little table like this (the first index $i$ labels the rows, the second index $j$ labels the columns): \begin{equation*} H_{ij}= \begin{pmatrix} H_{11} & H_{12}\\[1ex] H_{21} & H_{22} \end{pmatrix}. \end{equation*} For the Hamiltonian of a spin one-half particle in the magnetic field $\FLPB$, this is the same as \begin{equation*} H_{ij}= \begin{pmatrix} -\mu B_z & -\mu(B_x-iB_y)\\[1ex] -\mu(B_x+iB_y) & +\mu B_z \end{pmatrix}. \end{equation*} In the same way, we can write the coefficients $\sigma_{ij}^z$ as the matrix \begin{equation} \label{Eq:III:11:5} \sigma_{ij}^z= \begin{pmatrix} 1 & \phantom{-}0\\ 0 & -1 \end{pmatrix}. \end{equation} Working with the coefficients of $B_x$, we get that the terms of $\sigma_x$ have to be \begin{equation*} \begin{alignedat}{2} \sigma_{11}^x&=0,&\quad \sigma_{12}^x&=1,\\[2ex] \sigma_{21}^x&=1,&\quad \sigma_{22}^x&=0. \end{alignedat} \end{equation*} Or, in shorthand, \begin{equation} \label{Eq:III:11:6} \sigma_{ij}^x= \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}. \end{equation} Finally, looking at $B_y$, we get \begin{equation*} \begin{alignedat}{2} \sigma_{11}^y&=0,&\quad \sigma_{12}^y&=-i,\\[2ex] \sigma_{21}^y&=i,&\quad \sigma_{22}^y&=0; \end{alignedat} \end{equation*} or \begin{equation} \label{Eq:III:11:7} \sigma_{ij}^y= \begin{pmatrix} 0 & -i\\ i & \phantom{-}0 \end{pmatrix}. \end{equation} With these three sigma matrices, Eqs. (11.2) and (11.4) are identical. To leave room for the subscripts $i$ and $j$, we have shown which $\sigma$ goes with which component of $\FLPB$ by putting $x$, $y$, and $z$ as superscripts. Usually, however, the $i$ and $j$ are omitted—it’s easy to imagine they are there—and the $x$, $y$, $z$ are written as subscripts. Then Eq. (11.4) is written \begin{equation} \label{Eq:III:11:8} H=-\mu[\sigma_xB_x+\sigma_yB_y+\sigma_zB_z]. \end{equation} Because the sigma matrices are so important—they are used all the time by the professionals—we have gathered them together in Table 11–1. (Anyone who is going to work in quantum physics really has to memorize them.) They are also called the Pauli spin matrices after the physicist who invented them.
In the table we have included one more two-by-two matrix which is needed if we want to be able to take care of a system which has two spin states of the same energy, or if we want to choose a different zero energy. For such situations we must add $E_0C_+$ to the first equation in (11.1) and $E_0C_-$ to the second equation. We can include this in the new notation if we define the unit matrix “$1$” as $\delta_{ij}$, \begin{equation} \label{Eq:III:11:9} 1=\delta_{ij}= \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}, \end{equation} and rewrite Eq. (11.8) as \begin{equation} \label{Eq:III:11:10} H=E_0\delta_{ij}-\mu(\sigma_xB_x+\sigma_yB_y+\sigma_zB_z). \end{equation} Usually, it is understood that any constant like $E_0$ is automatically to be multiplied by the unit matrix; then one writes simply \begin{equation} \label{Eq:III:11:11} H=E_0-\mu(\sigma_xB_x+\sigma_yB_y+\sigma_zB_z). \end{equation} One reason the spin matrices are useful is that any two-by-two matrix at all can be written in terms of them. Any matrix you can write has four numbers in it, say, \begin{equation*} M= \begin{pmatrix} a & b\\ c & d \end{pmatrix}. \end{equation*} It can always be written as a linear combination of four matrices. For example, \begin{equation*} M=a\! \begin{pmatrix} 1&0\\ 0&0 \end{pmatrix} \!+b\! \begin{pmatrix} 0&1\\ 0&0 \end{pmatrix} \!+c\! \begin{pmatrix} 0&0\\ 1&0 \end{pmatrix} \!+d\! \begin{pmatrix} 0&0\\ 0&1 \end{pmatrix}\!. \end{equation*} There are many ways of doing it, but one special way is to say that $M$ is a certain amount of $\sigma_x$, plus a certain amount of $\sigma_y$, and so on, like this: \begin{equation*} M=\alpha1+\beta\sigma_x+\gamma\sigma_y+\delta\sigma_z, \end{equation*} where the “amounts” $\alpha$, $\beta$, $\gamma$, and $\delta$ may, in general, be complex numbers. Since any two-by-two matrix can be represented in terms of the unit matrix and the sigma matrices, we have all that we ever need for any two-state system. 
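The decomposition just described can be made concrete. Matching the entries of $M = \begin{pmatrix} a & b\\ c & d \end{pmatrix}$ against $\alpha1+\beta\sigma_x+\gamma\sigma_y+\delta\sigma_z$ gives $\alpha=(a+d)/2$, $\beta=(b+c)/2$, $\gamma=i(b-c)/2$, and $\delta=(a-d)/2$. A small Python sketch (the sample matrix is chosen arbitrarily):

```python
# Matching the entries of M = [[a, b], [c, d]] against
# alpha*1 + beta*sigma_x + gamma*sigma_y + delta*sigma_z gives
#   alpha = (a + d)/2,  beta = (b + c)/2,
#   gamma = i*(b - c)/2,  delta = (a - d)/2.

def pauli_coefficients(M):
    """The 'amounts' alpha, beta, gamma, delta for a 2x2 matrix M."""
    (a, b), (c, d) = M
    return ((a + d) / 2, (b + c) / 2, 1j * (b - c) / 2, (a - d) / 2)

def from_pauli(alpha, beta, gamma, delta):
    """Rebuild the matrix alpha*1 + beta*sx + gamma*sy + delta*sz."""
    I  = [[1, 0], [0, 1]]
    sx = [[0, 1], [1, 0]]
    sy = [[0, -1j], [1j, 0]]
    sz = [[1, 0], [0, -1]]
    return [[alpha * I[i][j] + beta * sx[i][j] + gamma * sy[i][j]
             + delta * sz[i][j] for j in (0, 1)] for i in (0, 1)]

M = [[2 + 1j, 3], [1j, -4]]        # an arbitrary complex matrix
coeffs = pauli_coefficients(M)
M2 = from_pauli(*coeffs)
assert all(abs(M[i][j] - M2[i][j]) < 1e-12
           for i in (0, 1) for j in (0, 1))
```

Notice that the "amounts" come out complex in general, just as the text says.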
No matter what the two-state system—the ammonia molecule, the magenta dye, anything—the Hamiltonian equation can be written in terms of the sigmas. Although the sigmas seem to have a geometrical significance in the physical situation of an electron in a magnetic field, they can also be thought of as just useful matrices, which can be used for any two-state problem. For instance, in one way of looking at things a proton and a neutron can be thought of as the same particle in either of two states. We say the nucleon (proton or neutron) is a two-state system—in this case, two states with respect to its charge. When looked at that way, the $\ketsl{\slOne}$ state can represent the proton and the $\ketsl{\slTwo}$ state can represent the neutron. People say that the nucleon has two “isotopic-spin” states. Since we will be using the sigma matrices as the “arithmetic” of the quantum mechanics of two-state systems, let’s review quickly the conventions of matrix algebra. By the “sum” of any two or more matrices we mean just what was obvious in Eq. (11.4). In general, if we “add” two matrices $A$ and $B$, the “sum” $C$ means that each term $C_{ij}$ is given by \begin{equation*} C_{ij}=A_{ij}+B_{ij}. \end{equation*} Each term of $C$ is the sum of the terms in the same slots of $A$ and $B$. In Section 5–6 we have already encountered the idea of a matrix “product.” The same idea will be useful in dealing with the sigma matrices. In general, the “product” of two matrices $A$ and $B$ (in that order) is defined to be a matrix $C$ whose elements are \begin{equation} \label{Eq:III:11:12} C_{ij}=\sum_kA_{ik}B_{kj}. \end{equation} It is the sum of products of terms taken in pairs from the $i$th row of $A$ and the $j$th column of $B$. If the matrices are written out in tabular form as in Fig. 11-1, there is a good “system” for getting the terms of the product matrix. Suppose you are calculating $C_{23}$. 
You run your left index finger along the second row of $A$ and your right index finger down the third column of $B$, multiplying each pair and adding as you go. We have tried to indicate how to do it in the figure. It is, of course, particularly simple for two-by-two matrices. For instance, if we multiply $\sigma_x$ times $\sigma_x$, we get \begin{equation*} \sigma_x^2=\sigma_x\cdot\sigma_x= \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \cdot \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} = \begin{pmatrix} 1&0\\ 0&1 \end{pmatrix}, \end{equation*} which is just the unit matrix $1$. Or, for another example, let’s work out $\sigma_x\sigma_y$: \begin{equation*} \sigma_x\sigma_y= \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \cdot \begin{pmatrix} 0&-i\\ i&\phantom{-}0 \end{pmatrix} = \begin{pmatrix} i&\phantom{-}0\\ 0&-i \end{pmatrix}. \end{equation*} Referring to Table 11–1, you see that the product is just $i$ times the matrix $\sigma_z$. (Remember that a number times a matrix just multiplies each term of the matrix.) Since the products of the sigmas taken two at a time are important—as well as rather amusing—we have listed them all in Table 11–2. You can work them out as we have done for $\sigma_x^2$ and $\sigma_x\sigma_y$. There’s another very important and interesting point about these $\sigma$ matrices. We can imagine, if we wish, that the three matrices $\sigma_x$, $\sigma_y$, and $\sigma_z$ are analogous to the three components of a vector—it is sometimes called the “sigma vector” and is written $\FLPsigma$. It is really a “matrix vector” or a “vector matrix.” It is three different matrices—one matrix associated with each axis, $x$, $y$, and $z$. With it, we can write the Hamiltonian of the system in a nice form which works in any coordinate system: \begin{equation} \label{Eq:III:11:13} H=-\mu\FLPsigma\cdot\FLPB. 
\end{equation} Although we have written our three matrices in the representation in which “up” and “down” are in the $z$-direction—so that $\sigma_z$ has a particular simplicity—we could figure out what the matrices would look like in some other representation. Although it takes a lot of algebra, you can show that they change among themselves like the components of a vector. (We won’t, however, worry about proving it right now. You can check it if you want.) You can use $\FLPsigma$ in different coordinate systems as though it is a vector. You remember that the $H$ is related to energy in quantum mechanics. It is, in fact, just equal to the energy in the simple situation where there is only one state. Even for two-state systems of the electron spin, when we write the Hamiltonian as in Eq. (11.13), it looks very much like the classical formula for the energy of a little magnet with magnetic moment $\FLPmu$ in a magnetic field $\FLPB$. Classically, we would say \begin{equation} \label{Eq:III:11:14} U=-\FLPmu\cdot\FLPB, \end{equation} where $\FLPmu$ is the property of the object and $\FLPB$ is an external field. We can imagine that Eq. (11.14) can be converted to (11.13) if we replace the classical energy by the Hamiltonian and the classical $\FLPmu$ by the matrix $\mu\FLPsigma$. Then, after this purely formal substitution, we interpret the result as a matrix equation. It is sometimes said that to each quantity in classical physics there corresponds a matrix in quantum mechanics. It is really more correct to say that the Hamiltonian matrix corresponds to the energy, and any quantity that can be defined via energy has a corresponding matrix. For example, the magnetic moment can be defined via energy by saying that the energy in an external field $\FLPB$ is $-\FLPmu\cdot\FLPB$. This defines the magnetic moment vector $\FLPmu$. 
Then we look at the formula for the Hamiltonian of a real (quantum) object in a magnetic field and try to identify whatever the matrices are that correspond to the various quantities in the classical formula. That’s the trick by which sometimes classical quantities have their quantum counterparts. You may try, if you want, to understand how a classical vector is equal to a matrix $\mu\FLPsigma$, and maybe you will discover something—but don’t break your head on it. That’s not the idea—they are not equal. Quantum mechanics is a different kind of a theory to represent the world. It just happens that there are certain correspondences which are hardly more than mnemonic devices—things to remember with. That is, you remember Eq. (11.14) when you learn classical physics; then if you remember the correspondence $\FLPmu\to\mu\FLPsigma$, you have a handle for remembering Eq. (11.13). Of course, nature knows the quantum mechanics, and the classical mechanics is only an approximation; so there is no mystery in the fact that in classical mechanics there is some shadow of quantum mechanical laws—which are truly the ones underneath. To reconstruct the original object from the shadow is not possible in any direct way, but the shadow does help you to remember what the object looks like. Equation (11.13) is the truth, and Eq. (11.14) is the shadow. Because we learn classical mechanics first, we would like to be able to get the quantum formula from it, but there is no sure-fire scheme for doing that. We must always go back to the real world and discover the correct quantum mechanical equations. When they come out looking like something in classical physics, we are in luck. 
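As a quick check on Table 11–2, the pairwise products of the sigmas can be verified with a few lines of matrix arithmetic (a sketch; the matrix entries are those of Table 11–1):

```python
import numpy as np

I  = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Each sigma matrix squares to the unit matrix
for s in (sx, sy, sz):
    assert np.allclose(s @ s, I)

# The cyclic products, as listed in Table 11-2
assert np.allclose(sx @ sy, 1j * sz)
assert np.allclose(sy @ sz, 1j * sx)
assert np.allclose(sz @ sx, 1j * sy)

# Reversing the order of a product changes the sign: the sigmas anticommute
assert np.allclose(sy @ sx, -1j * sz)
```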
If the warnings above seem repetitious and appear to you to be belaboring self-evident truths about the relation of classical physics to quantum physics, please excuse the conditioned reflexes of a professor who has usually taught quantum mechanics to students who hadn’t heard about Pauli spin matrices until they were in graduate school. Then they always seemed to be hoping that, somehow, quantum mechanics could be seen to follow as a logical consequence of classical mechanics which they had learned thoroughly years before. (Perhaps they wanted to avoid having to learn something new.) You have learned the classical formula, Eq. (11.14), only a few months ago—and then with warnings that it was inadequate—so maybe you will not be so unwilling to take the quantum formula, Eq. (11.13), as the basic truth.
11–2 The spin matrices as operators

While we are on the subject of mathematical notation, we would like to describe still another way of writing things—a way which is used very often because it is so compact. It follows directly from the notation introduced in Chapter 8. If we have a system in a state $\ket{\psi(t)}$, which varies with time, we can—as we did in Eq. (8.34)—write the amplitude that the system would be in the state $\ket{i}$ at $t+\Delta t$ as \begin{equation*} \braket{i}{\psi(t+\Delta t)}=\sum_j \bracket{i}{U(t+\Delta t,t)}{j} \braket{j}{\psi(t)}. \end{equation*} The matrix element $\bracket{i}{U(t+\Delta t,t)}{j}$ is the amplitude that the base state $\ket{j}$ will be converted into the base state $\ket{i}$ in the time interval $\Delta t$. We then defined $H_{ij}$ by writing \begin{equation*} \bracket{i}{U(t+\Delta t,t)}{j}=\delta_{ij}-\frac{i}{\hbar}\, H_{ij}(t)\,\Delta t, \end{equation*} and we showed that the amplitudes $C_i(t)=\braket{i}{\psi(t)}$ were related by the differential equations \begin{equation} \label{Eq:III:11:15} i\hbar\,\ddt{C_i}{t}=\sum_jH_{ij}C_j. \end{equation} If we write out the amplitudes $C_i$ explicitly, the same equation appears as \begin{equation} \label{Eq:III:11:16} i\hbar\,\ddt{}{t}\,\braket{i}{\psi}=\sum_jH_{ij}\braket{j}{\psi}. \end{equation} Now the matrix elements $H_{ij}$ are also amplitudes which we can write as $\bracket{i}{H}{j}$; our differential equation looks like this: \begin{equation} \label{Eq:III:11:17} i\hbar\,\ddt{}{t}\,\braket{i}{\psi}= \sum_j\bracket{i}{H}{j}\braket{j}{\psi}. \end{equation} We see that $-i/\hbar\,\bracket{i}{H}{j}\,dt$ is the amplitude that—under the physical conditions described by $H$—a state $\ket{j}$ will, during the time $dt$, “generate” the state $\ket{i}$. (All of this is implicit in the discussion of Section 8–4.) Now following the ideas of Section 8–2, we can drop out the common term $\bra{i}$ in Eq.
(11.17)—since it is true for any state $\ket{i}$—and write that equation simply as \begin{equation} \label{Eq:III:11:18} i\hbar\,\ddt{}{t}\,\ket{\psi}= \sum_jH\,\ket{j}\braket{j}{\psi}. \end{equation} Or, going one step further, we can also remove the $j$ and write \begin{equation} \label{Eq:III:11:19} i\hbar\,\ddt{}{t}\,\ket{\psi}=H\,\ket{\psi}. \end{equation} In Chapter 8 we pointed out that when things are written this way, the $H$ in $H\,\ket{j}$ or $H\,\ket{\psi}$ is called an operator. From now on we will put the little hat ($\op{\enspace}$) over an operator to remind you that it is an operator and not just a number. We will write $\Hop\,\ket{\psi}$. Although the two equations (11.18) and (11.19) mean exactly the same thing as Eq. (11.17) or Eq. (11.15), we can think about them in a different way. For instance, we would describe Eq. (11.18) in this way: “The time derivative of the state vector $\ket{\psi}$ times $i\hbar$ is equal to what you get by operating with the Hamiltonian operator $\Hop$ on each base state, multiplying by the amplitude $\braket{j}{\psi}$ that $\psi$ is in the state $j$, and summing over all $j$.” Or we would describe Eq. (11.19) this way: “The time derivative (times $i\hbar$) of a state $\ket{\psi}$ is equal to what you get if you operate with the Hamiltonian $\Hop$ on the state vector $\ket{\psi}$.” It’s just a shorthand way of saying what is in Eq. (11.17), but, as you will see, it can be a great convenience. If we wish, we can carry the “abstraction” idea one more step. Equation (11.19) is true for any state $\ket{\psi}$. The left-hand side, $i\hbar\,d/dt$, is also an operator—it’s the operation “differentiate by $t$ and multiply by $i\hbar$.” Therefore, Eq. (11.19) can also be thought of as an equation between operators—the operator equation \begin{equation*} i\hbar\,\ddt{}{t}=\Hop. \end{equation*} The Hamiltonian operator (within a constant) produces the same result as does $d/dt$ when acting on any state.
Remember that this equation—as well as Eq. (11.19)—is not a statement that the $\Hop$ operator is just the identical operation as $i\hbar\,d/dt$. The equations are the dynamical law of nature—the law of motion—for a quantum system. Just to get some practice with these ideas, we will show you another way we could get to Eq. (11.18). You know that we can write any state $\ket{\psi}$ in terms of its projections into some base set [see Eq. (8.8)], \begin{equation} \label{Eq:III:11:20} \ket{\psi}=\sum_i\ket{i}\braket{i}{\psi}. \end{equation} How does $\ket{\psi}$ change with time? Well, just take its derivative: \begin{equation} \label{Eq:III:11:21} \ddt{}{t}\,\ket{\psi}=\ddt{}{t}\sum_i\ket{i}\braket{i}{\psi}. \end{equation} Now the base states $\ket{i}$ do not change with time (at least we are always taking them as definite fixed states), but the amplitudes $\braket{i}{\psi}$ are numbers which may vary. So Eq. (11.21) becomes \begin{equation} \label{Eq:III:11:22} \ddt{}{t}\,\ket{\psi}=\sum_i\ket{i}\,\ddt{}{t}\,\braket{i}{\psi}. \end{equation} Since we know $d\braket{i}{\psi}/dt$ from Eq. (11.16), we get \begin{align*} \ddt{}{t}\,\ket{\psi}&=-\frac{i}{\hbar} \sum_i\ket{i}\sum_jH_{ij}\braket{j}{\psi}\\[1ex] &=-\frac{i}{\hbar}\sum_{ij}\ket{i}\bracket{i}{H}{j}\braket{j}{\psi}= -\frac{i}{\hbar}\sum_jH\,\ket{j}\braket{j}{\psi}. \end{align*}
This is Eq. (11.18) all over again. So we have many ways of looking at the Hamiltonian. We can think of the set of coefficients $H_{ij}$ as just a bunch of numbers, or we can think of the “amplitudes” $\bracket{i}{H}{j}$, or we can think of the “matrix” $H_{ij}$, or we can think of the “operator” $\Hop$. It all means the same thing. Now let’s go back to our two-state systems. If we write the Hamiltonian in terms of the sigma matrices (with suitable numerical coefficients like $B_x$, etc.), we can clearly also think of $\sigma_{ij}^x$ as an amplitude $\bracket{i}{\sigma_x}{j}$ or, for short, as the operator $\sigmaop_x$. If we use the operator idea, we can write the equation of motion of a state $\ket{\psi}$ in a magnetic field as \begin{equation} \label{Eq:III:11:23} i\hbar\,\ddt{}{t}\,\ket{\psi}= -\mu(B_x\sigmaop_x+B_y\sigmaop_y+B_z\sigmaop_z)\,\ket{\psi}. \end{equation} When we want to “use” such an equation we will normally have to express $\ket{\psi}$ in terms of base vectors (just as we have to find the components of space vectors when we want specific numbers). So we will usually want to put Eq. (11.23) in the somewhat expanded form: \begin{equation} \label{Eq:III:11:24} i\hbar\ddt{}{t}\ket{\psi}\!=\!-\mu\! \sum_i(B_x\sigmaop_x\!+\!B_y\sigmaop_y\!+\!B_z\sigmaop_z)\ket{i} \braket{i}{\psi}. \end{equation} Now you will see why the operator idea is so neat. To use Eq. (11.24) we need to know what happens when the $\sigmaop$ operators work on each of the base states. Let’s find out. Suppose we have $\sigmaop_z\,\ket{+}$; it is some vector $\ket{?}$, but what? Well, let’s multiply it on the left by $\bra{+}$; we have \begin{equation*} \bracket{+}{\sigmaop_z}{+}=\sigma_{11}^z=1 \end{equation*} (using Table 11–1).
So we know that \begin{equation} \label{Eq:III:11:25} \braket{+}{?}=1. \end{equation} Now let’s multiply $\sigmaop_z\,\ket{+}$ on the left by $\bra{-}$. We get \begin{equation*} \bracket{-}{\sigmaop_z}{+}=\sigma_{21}^z=0; \end{equation*} so \begin{equation} \label{Eq:III:11:26} \braket{-}{?}=0. \end{equation} There is only one state vector that satisfies both (11.25) and (11.26); it is $\ket{+}$. We discover then that \begin{equation} \label{Eq:III:11:27} \sigmaop_z\,\ket{+}=\ket{+}. \end{equation} By this kind of argument you can easily show that all of the properties of the sigma matrices can be described in the operator notation by the set of rules given in Table 11–3. If we have products of sigma matrices, they go over into products of operators. When two operators appear together as a product, you carry out first the operation with the operator which is farthest to the right. For instance, by $\sigmaop_x\sigmaop_y\,\ket{+}$ we are to understand $\sigmaop_x(\sigmaop_y\,\ket{+})$. From Table 11–3, we get $\sigmaop_y\,\ket{+}=i\,\ket{-}$, so \begin{equation} \label{Eq:III:11:28} \sigmaop_x\sigmaop_y\,\ket{+}=\sigmaop_x(i\,\ket{-}). \end{equation} Now any number—like $i$—just moves through an operator (operators work only on state vectors); so Eq. (11.28) is the same as \begin{equation*} \sigmaop_x\sigmaop_y\,\ket{+}=i\sigmaop_x\,\ket{-}=i\,\ket{+}. \end{equation*} If you do the same thing for $\sigmaop_x\sigmaop_y\,\ket{-}$, you will find that \begin{equation*} \sigmaop_x\sigmaop_y\,\ket{-}=-i\,\ket{-}. \end{equation*} Looking at Table 11–3, you see that $\sigmaop_x\sigmaop_y$ operating on $\ket{+}$ or $\ket{-}$ gives just what you get if you operate with $\sigmaop_z$ and multiply by $i$. We can, therefore, say that the operation $\sigmaop_x\sigmaop_y$ is identical with the operation $i\sigmaop_z$ and write this statement as an operator equation: \begin{equation} \label{Eq:III:11:29} \sigmaop_x\sigmaop_y=i\sigmaop_z. 
\end{equation} Notice that this equation is identical with one of our matrix equations of Table 11–2. So again we see the correspondence between the matrix and operator points of view. Each of the equations in Table 11–2 can, therefore, also be considered as equations about the sigma operators. You can check that they do indeed follow from Table 11–3. It is best, when working with these things, not to keep track of whether a quantity like $\sigma$ or $H$ is an operator or a matrix. All the equations are the same either way, so Table 11–2 is for sigma operators, or for sigma matrices, as you wish.
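The operator rules of Table 11–3 become ordinary matrix-vector products once the base states are written as column vectors; a short sketch (the vector names are ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

plus  = np.array([1, 0], dtype=complex)   # the base state |+>
minus = np.array([0, 1], dtype=complex)   # the base state |->

# The rules of Table 11-3
assert np.allclose(sz @ plus,  plus)          # sigma_z |+> = |+>
assert np.allclose(sz @ minus, -minus)        # sigma_z |-> = -|->
assert np.allclose(sx @ plus,  minus)         # sigma_x |+> = |->
assert np.allclose(sx @ minus, plus)          # sigma_x |-> = |+>
assert np.allclose(sy @ plus,  1j * minus)    # sigma_y |+> = i|->
assert np.allclose(sy @ minus, -1j * plus)    # sigma_y |-> = -i|+>

# Eq. (11.29): sigma_x sigma_y acts like i sigma_z on both base states
for ket in (plus, minus):
    assert np.allclose(sx @ (sy @ ket), 1j * (sz @ ket))
```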
11–3 The solution of the two-state equations

We can now write our two-state equation in various forms, for example, either as \begin{equation*} i\hbar\,\ddt{C_i}{t}=\sum_jH_{ij}C_j \end{equation*} or \begin{equation} \label{Eq:III:11:30} i\hbar\,\ddt{\,\ket{\psi}}{t}=\Hop\,\ket{\psi}. \end{equation} They both mean the same thing. For a spin one-half particle in a magnetic field, the Hamiltonian $H$ is given by Eq. (11.8) or by Eq. (11.13). If the field is in the $z$-direction, then—as we have seen several times by now—the solution is that the state $\ket{\psi}$, whatever it is, precesses around the $z$-axis (just as if you were to take the physical object and rotate it bodily around the $z$-axis) at an angular velocity equal to twice the magnetic field times $\mu/\hbar$. The same is true, of course, for a magnetic field along any other direction, because the physics is independent of the coordinate system. If we have a situation where the magnetic field varies from time to time in a complicated way, then we can analyze the situation in the following way. Suppose you start with the spin in the $+z$-direction and you have an $x$-magnetic field. The spin starts to turn. Then if the $x$-field is turned off, the spin stops turning. Now if a $z$-field is turned on, the spin precesses about $z$, and so on. So depending on how the fields vary in time, you can figure out what the final state is—along what axis it will point. Then you can refer that state back to the original $\ket{+}$ and $\ket{-}$ with respect to $z$ by using the projection formulas we had in Chapter 10 (or Chapter 6). If the state ends up with its spin in the direction $(\theta,\phi)$, it will have an up-amplitude $\cos\,(\theta/2)e^{-i\phi/2}$ and a down-amplitude $\sin\,(\theta/2)e^{+i\phi/2}$. That solves any problem. It is a word description of the solution of the differential equations.
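That word description can be tested against Eq. (11.30) directly. Here is a minimal numerical sketch (the field strength, time, and helper names are our illustrative choices), using the $(\theta,\phi)$ amplitude convention just quoted; the azimuthal angle comes out as $\phi(t)=\phi(0)-2\mu Bt/\hbar$, i.e., precession at the stated angular velocity $2\mu B/\hbar$.

```python
import numpy as np

hbar, mu, B = 1.0, 0.7, 1.3           # illustrative values (our choice)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = -mu * B * sz                      # field along z, as in Eq. (11.13)

# Start with the spin tipped into the x-direction: theta = 90 deg, phi = 0
theta, phi0 = np.pi / 2, 0.0
C0 = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def evolve(C, t):
    # H is diagonal here, so exp(-iHt/hbar) is just two phase factors
    phases = np.exp(-1j * np.diag(H) * t / hbar)
    return phases * C

def azimuth(C):
    # phi is the phase of the down-amplitude relative to the up-amplitude
    return np.angle(C[1] / C[0])

t = 0.3
C = evolve(C0, t)
expected_phi = phi0 - 2 * mu * B * t / hbar   # precession at 2*mu*B/hbar
assert np.isclose(azimuth(C), np.angle(np.exp(1j * expected_phi)))
```

The sense of the precession depends on the sign of $\mu$; the magnitude of the angular velocity is what the text asserts.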
The solution just described is sufficiently general to take care of any two-state system. Let’s take our example of the ammonia molecule—including the effects of an electric field. If we describe the system in terms of the states $\ketsl{\slI}$ and $\ketsl{\slII}$, the equations (9.38) and (9.39) look like this: \begin{equation} \begin{aligned} i\hbar\,\ddt{C_{\slI}}{t}&= +AC_{\slI}+\mu\Efield C_{\slII},\\[2ex] i\hbar\,\ddt{C_{\slII}}{t}&= -AC_{\slII}+\mu\Efield C_{\slI}. \end{aligned} \label{Eq:III:11:31} \end{equation} You say, “No, I remember there was an $E_0$ in there.” Well, we have shifted the origin of energy to make the $E_0$ zero. (You can always do that by changing both amplitudes by the same factor—$e^{iE_0t/\hbar}$—and get rid of any constant energy.) Now if corresponding equations always have the same solutions, then we really don’t have to do it twice. If we look at these equations and look at Eq. (11.1), then we can make the following identification. Let’s call $\ketsl{\slI}$ the state $\ket{+}$ and $\ketsl{\slII}$ the state $\ket{-}$. That does not mean that we are lining-up the ammonia in space, or that $\ket{+}$ and $\ket{-}$ have anything to do with the $z$-axis. It is purely artificial. We have an artificial space that we might call the “ammonia molecule representative space,” or something—a three-dimensional “diagram” in which being “up” corresponds to having the molecule in the state $\ketsl{\slI}$ and being “down” along this false $z$-axis represents having a molecule in the state $\ketsl{\slII}$. Then, the equations will be identified as follows. First of all, you see that the Hamiltonian can be written in terms of the sigma matrices as \begin{equation} \label{Eq:III:11:32} H=+A\sigma_z+\mu\Efield\sigma_x. \end{equation} Or, putting it another way, $\mu B_z$ in Eq. (11.1) corresponds to $-A$ in Eq. (11.32), and $\mu B_x$ corresponds to $-\mu\Efield$. In our “model” space, then, we have a constant $B$ field along the $z$-direction.
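A quick check on the identification: building Eq. (11.32) out of the sigma matrices reproduces the ammonia energy levels $\pm\sqrt{A^2+\mu^2\Efield^2}$ found in Chapter 9 (the numerical values below are our illustrative choices):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A, muE = 1.0, 0.4            # illustrative values for A and mu*Efield
H = A * sz + muE * sx        # Eq. (11.32), written in the |I>, |II> basis

# The two energy levels of the ammonia molecule in a static electric field
E = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(E, [-np.hypot(A, muE), np.hypot(A, muE)])
```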
If we have an electric field $\Efield$ which is changing with time, then we have a $B$ field along the $x$-direction which varies in proportion. So the behavior of an electron in a magnetic field with a constant component in the $z$-direction and an oscillating component in the $x$-direction is mathematically analogous and corresponds exactly to the behavior of an ammonia molecule in an oscillating electric field. Unfortunately, we do not have the time to go any further into the details of this correspondence, or to work out any of the technical details. We only wished to make the point that all systems of two states can be made analogous to a spin one-half object precessing in a magnetic field.
11–4 The polarization states of the photon

There are a number of other two-state systems which are interesting to study, and the first new one we would like to talk about is the photon. To describe a photon we must first give its vector momentum. For a free photon, the frequency is determined by the momentum, so we don’t have to say also what the frequency is. After that, though, we still have a property called the polarization. Imagine that there is a photon coming at you with a definite monochromatic frequency (which will be kept the same throughout all this discussion so that we don’t have a variety of momentum states). Then there are two directions of polarization. In the classical theory, light can be described as having an electric field which oscillates horizontally or an electric field which oscillates vertically (for instance); these two kinds of light are called $x$-polarized and $y$-polarized light. The light can also be polarized in some other direction, which can be made up from the superposition of a field in the $x$-direction and one in the $y$-direction. Or if you take the $x$- and the $y$-components out of phase by $90^\circ$, you get an electric field that rotates—the light is elliptically polarized. (This is just a quick reminder of the classical theory of polarized light that we studied in Chapter 33, Vol. I.) Now, however, suppose we have a single photon—just one. There is no electric field that we can discuss in the same way. All we have is one photon. But a photon has to have the analog of the classical phenomena of polarization. There must be at least two different kinds of photons. At first, you might think there should be an infinite variety—after all, the electric vector can point in all sorts of directions. We can, however, describe the polarization of a photon as a two-state system. A photon can be in the state $\ket{x}$ or in the state $\ket{y}$.
By $\ket{x}$ we mean the polarization state of each one of the photons in a beam of light which classically is $x$-polarized light. On the other hand, by $\ket{y}$ we mean the polarization state of each of the photons in a $y$-polarized beam. And we can take $\ket{x}$ and $\ket{y}$ as our base states of a photon of given momentum pointing at you—in what we will call the $z$-direction. So there are two base states $\ket{x}$ and $\ket{y}$, and they are all that are needed to describe any photon at all. For example, if we have a piece of polaroid set with its axis to pass light polarized in what we call the $x$-direction, and we send in a photon which we know is in the state $\ket{y}$, it will be absorbed by the polaroid. If we send in a photon which we know is in the state $\ket{x}$, it will come right through as $\ket{x}$. If we take a piece of calcite which takes a beam of polarized light and splits it into an $\ket{x}$ beam and a $\ket{y}$ beam, that piece of calcite is the complete analog of a Stern-Gerlach apparatus which splits a beam of silver atoms into the two states $\ket{+}$ and $\ket{-}$. So everything we did before with particles and Stern-Gerlach apparatuses, we can do again with light and pieces of calcite. And what about light filtered through a piece of polaroid set at an angle $\theta$? Is that another state? Yes, indeed, it is another state. Let’s call the axis of the polaroid $x'$ to distinguish it from the axes of our base states. See Fig. 11-2. A photon that comes out will be in the state $\ket{x'}$. However, any state can be represented as a linear combination of base states, and the formula for the combination is, here, \begin{equation} \label{Eq:III:11:33} \ket{x'}=\cos\theta\,\ket{x}+\sin\theta\,\ket{y}. \end{equation} That is, if a photon comes through a piece of polaroid set at the angle $\theta$ (with respect to $x$), it can still be resolved into $\ket{x}$ and $\ket{y}$ beams—by a piece of calcite, for example. 
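Resolving a photon in the state $\ket{x'}$ into the base states is just the arithmetic of Eq. (11.33); a sketch with column vectors standing for $\ket{x}$ and $\ket{y}$ ($\theta=30^\circ$ as an example):

```python
import numpy as np

theta = np.deg2rad(30)                       # polaroid axis at 30 degrees
x = np.array([1, 0], dtype=complex)          # |x>
y = np.array([0, 1], dtype=complex)          # |y>

xp = np.cos(theta) * x + np.sin(theta) * y   # |x'>, Eq. (11.33)

# Amplitudes to find the photon x-polarized or y-polarized
assert np.isclose(np.vdot(x, xp), np.cos(theta))   # <x|x'> = cos(theta)
assert np.isclose(np.vdot(y, xp), np.sin(theta))   # <y|x'> = sin(theta)

# Probability of getting through a second polaroid set along x
assert np.isclose(abs(np.vdot(x, xp)) ** 2, 0.75)  # cos^2(30 deg) = 3/4
```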
Or you can, if you wish, just analyze it into $x$- and $y$-components in your imagination. Either way, you will find the amplitude $\cos\theta$ to be in the $\ket{x}$ state and the amplitude $\sin\theta$ to be in the $\ket{y}$ state. Now we ask this question: Suppose a photon is polarized in the $x'$-direction by a piece of polaroid set at the angle $\theta$ and arrives at a polaroid at the angle zero—as in Fig. 11-3; what will happen? With what probability will it get through? The answer is the following. After it gets through the first polaroid, it is definitely in the state $\ket{x'}$. The second polaroid will let the photon through if it is in the state $\ket{x}$ (but absorb it if it is the state $\ket{y}$). So we are asking with what probability does the photon appear to be in the state $\ket{x}$? We get that probability from the absolute square of amplitude $\braket{x}{x'}$ that a photon in the state $\ket{x'}$ is also in the state $\ket{x}$. What is $\braket{x}{x'}$? Just multiply Eq. (11.33) by $\bra{x}$ to get \begin{equation*} \braket{x}{x'}=\cos\theta\,\braket{x}{x}+\sin\theta\,\braket{x}{y}. \end{equation*} Now $\braket{x}{y}=0$, from the physics—as they must be if $\ket{x}$ and $\ket{y}$ are base states—and $\braket{x}{x}=1$. So we get \begin{equation*} \braket{x}{x'}=\cos\theta, \end{equation*} and the probability is $\cos^2\theta$. For example, if the first polaroid is set at $30^\circ$, a photon will get through $3/4$ of the time, and $1/4$ of the time it will heat the polaroid by being absorbed therein. Now let us see what happens classically in the same situation. We would have a beam of light with an electric field which is varying in some way or another—say “unpolarized.” After it gets through the first polaroid, the electric field is oscillating in the $x'$-direction with a size $\Efield$; we would draw the field as an oscillating vector with a peak value $\Efield_0$ in a diagram like Fig. 11-4. 
Now when the light arrives at the second polaroid, only the $x$-component, $\Efield_0\cos\theta$, of the electric field gets through. The intensity is proportional to the square of the field and, therefore, to $\Efield_0^2\cos^2\theta$. So the energy coming through is $\cos^2\theta$ weaker than the energy which was entering the last polaroid. The classical picture and the quantum picture give similar results. If you were to throw $10$ billion photons at the second polaroid, and the average probability of each one going through is, say, $3/4$, you would expect $3/4$ of $10$ billion would get through. Likewise, the energy that they would carry would be $3/4$ of the energy that you attempted to put through. The classical theory says nothing about the statistics of the thing—it simply says that the energy that comes through will be precisely $3/4$ of the energy which you were sending in. That is, of course, impossible if there is only one photon. There is no such thing as $3/4$ of a photon. It is either all there, or it isn’t there at all. Quantum mechanics tells us it is all there $3/4$ of the time. The relation of the two theories is clear. What about the other kinds of polarization? For example, right-hand circular polarization? In the classical theory, right-hand circular polarization has equal components in $x$ and $y$ which are $90^\circ$ out of phase. In the quantum theory, a right-hand circularly polarized (RHC) photon has equal amplitudes to be polarized $\ket{x}$ or $\ket{y}$, and the amplitudes are $90^\circ$ out of phase. Calling a RHC photon a state $\ket{R}$ and a LHC photon a state $\ket{L}$, we can write (see Vol. I, Section 33-1) \begin{equation} \begin{aligned} \ket{R}&=\frac{1}{\sqrt{2}}\,(\ket{x}+i\,\ket{y}),\\[4ex] \ket{L}&=\frac{1}{\sqrt{2}}\,(\ket{x}-i\,\ket{y}). \end{aligned} \label{Eq:III:11:34} \end{equation} —the $1/\sqrt{2}$ is put in to get normalized states. 
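These definitions can be checked with straightforward vector arithmetic; the same few lines also confirm the orthogonality $\braket{R}{L}=0$ and the inverse relations of Eq. (11.35) (a sketch, with column vectors for the base states):

```python
import numpy as np

x = np.array([1, 0], dtype=complex)   # |x>
y = np.array([0, 1], dtype=complex)   # |y>

# Eq. (11.34): right and left circularly polarized photon states
R = (x + 1j * y) / np.sqrt(2)
L = (x - 1j * y) / np.sqrt(2)

assert np.isclose(np.vdot(R, R), 1)   # normalized, thanks to the 1/sqrt(2)
assert np.isclose(np.vdot(R, L), 0)   # <R|L> = 0, so they can serve as a base

# Turning the formulas around gives Eq. (11.35)
assert np.allclose((R + L) / np.sqrt(2), x)
assert np.allclose(-1j / np.sqrt(2) * (R - L), y)
```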
With these states you can calculate any filtering or interference effects you want, using the laws of quantum theory. If you want, you can also choose $\ket{R}$ and $\ket{L}$ as base states and represent everything in terms of them. You only need to show first that $\braket{R}{L}=0$—which you can do by taking the conjugate form of the first equation above [see Eq. (8.13)] and multiplying it by the other. You can resolve light into $x$- and $y$-polarizations, or into $x'$- and $y'$-polarizations, or into right and left polarizations as a basis. Just as an example, let’s try to turn our formulas around. Can we represent the state $\ket{x}$ as a linear combination of right and left? Yes, here it is: \begin{equation} \begin{aligned} \ket{x}&=\frac{1}{\sqrt{2}}\, (\ket{R}+\ket{L}),\\[4ex] \ket{y}&=-\frac{i}{\sqrt{2}}\, (\ket{R}-\ket{L}). \end{aligned} \label{Eq:III:11:35} \end{equation} Proof: Add and subtract the two equations in (11.34). It is easy to go from one base to the other. One curious point has to be made, though. If a photon is right circularly polarized, it shouldn’t have anything to do with the $x$- and $y$-axes. If we were to look at the same thing from a coordinate system turned at some angle about the direction of flight, the light would still be right circularly polarized—and similarly for left. The right and left circularly polarized light are the same for any such rotation; the definition is independent of any choice of the $x$-direction (except that the photon direction is given). Isn’t that nice—it doesn’t take any axes to define it. Much better than $x$ and $y$. On the other hand, isn’t it rather a miracle that when you add the right and left together you can find out which direction $x$ was? If “right” and “left” do not depend on $x$ in any way, how is it that we can put them back together again and get $x$? We can answer that question in part by writing out the state $\ket{R'}$, which represents a photon RHC polarized in the frame $x',y'$. 
In that frame, you would write \begin{equation*} \ket{R'}=\frac{1}{\sqrt{2}}\,(\ket{x'}+i\,\ket{y'}). \end{equation*} How does such a state look in the frame $x,y$? Just substitute $\ket{x'}$ from Eq. (11.33) and the corresponding $\ket{y'}$—we didn’t write it down, but it is $(-\sin\theta)\,\ket{x}+(\cos\theta)\,\ket{y}$. Then \begin{align*} \ket{R'}\!&=\!\frac{1}{\sqrt{2}}[ \cos\theta\,\ket{x}\!+\sin\theta\,\ket{y}\!- i\sin\theta\,\ket{x}\!+i\cos\theta\,\ket{y}]\\[2ex] &=\frac{1}{\sqrt{2}}[ (\cos\theta-i\sin\theta)\ket{x}\!+ i(\cos\theta-i\sin\theta)\ket{y}]\\[2ex] &=\frac{1}{\sqrt{2}}(\ket{x}\!+i\ket{y}) (\cos\theta-i\sin\theta). \end{align*} The first factor is just $\ket{R}$, and the second is $e^{-i\theta}$; our result is that \begin{equation} \label{Eq:III:11:36} \ket{R'}=e^{-i\theta}\,\ket{R}. \end{equation} The states $\ket{R'}$ and $\ket{R}$ are the same except for the phase factor $e^{-i\theta}$. If you work out the same thing for $\ket{L'}$, you get that \begin{equation} \label{Eq:III:11:37} \ket{L'}=e^{+i\theta}\,\ket{L}. \end{equation} Now you see what happens. If we add $\ket{R}$ and $\ket{L}$, we get something different from what we get when we add $\ket{R'}$ and $\ket{L'}$. For instance, an $x$-polarized photon is [Eq. (11.35)] the sum of $\ket{R}$ and $\ket{L}$, but a $y$-polarized photon is the sum with the phase of one shifted $90^\circ$ backward and the other $90^\circ$ forward. That is just what we would get from the sum of $\ket{R'}$ and $\ket{L'}$ for the special angle $\theta=90^\circ$, and that’s right. An $x$-polarization in the prime frame is the same as a $y$-polarization in the original frame. So it is not exactly true that a circularly polarized photon looks the same for any set of axes. Its phase (the phase relation of the right and left circularly polarized states) keeps track of the $x$-direction.
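The phase factors of Eqs. (11.36) and (11.37) can be verified by carrying out the same substitution numerically (a sketch; the rotation angle is an arbitrary choice):

```python
import numpy as np

theta = 0.6                                  # rotation angle (arbitrary)
x = np.array([1, 0], dtype=complex)
y = np.array([0, 1], dtype=complex)

xp = np.cos(theta) * x + np.sin(theta) * y   # |x'>, Eq. (11.33)
yp = -np.sin(theta) * x + np.cos(theta) * y  # |y'>

R  = (x  + 1j * y ) / np.sqrt(2)             # Eq. (11.34)
L  = (x  - 1j * y ) / np.sqrt(2)
Rp = (xp + 1j * yp) / np.sqrt(2)             # RHC photon in the primed frame
Lp = (xp - 1j * yp) / np.sqrt(2)

# The rotated states differ only by phase factors, Eqs. (11.36)-(11.37)
assert np.allclose(Rp, np.exp(-1j * theta) * R)
assert np.allclose(Lp, np.exp(+1j * theta) * L)
```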
11–5 The neutral K-meson

We will now describe a two-state system in the world of the strange particles—a system for which quantum mechanics gives a most remarkable prediction. To describe it completely would involve us in a lot of stuff about strange particles, so we will, unfortunately, have to cut some corners. We can only give an outline of how a certain discovery was made—to show you the kind of reasoning that was involved. It begins with the discovery by Gell-Mann and Nishijima of the concept of strangeness and of a new law of conservation of strangeness. It was when Gell-Mann and Pais were analyzing the consequences of these new ideas that they came across the prediction of a most remarkable phenomenon we are going to describe. First, though, we have to tell you a little about “strangeness.” We must begin with what are called the strong interactions of nuclear particles. These are the interactions which are responsible for the strong nuclear forces—as distinct, for instance, from the relatively weaker electromagnetic interactions. The interactions are “strong” in the sense that if two particles get close enough to interact at all, they interact in a big way and produce other particles very easily. The nuclear particles have also what is called a “weak interaction” by which certain things can happen, such as beta decay, but always very slowly on a nuclear time scale—the weak interactions are many, many orders of magnitude weaker than the strong interactions and even much weaker than electromagnetic interactions. When the strong interactions were being studied with the big accelerators, people were surprised to find that certain things that “should” happen—that were expected to happen—did not occur. For instance, in some interactions a particle of a certain type did not appear when it was expected.
Gell-Mann and Nishijima noticed that many of these peculiar happenings could be explained at once by inventing a new conservation law: the conservation of strangeness. They proposed that there was a new kind of attribute associated with each particle—which they called its “strangeness” number—and that in any strong interaction the “quantity of strangeness” is conserved. Suppose, for instance, that a high-energy negative K-meson—with, say, an energy of many GeV—collides with a proton. Out of the interaction may come many other particles: $\pi$-mesons, K-mesons, lambda particles, sigma particles—any of the mesons or baryons listed in Table 2–2 of Vol. I. It is observed, however, that only certain combinations appear, and never others. Now certain conservation laws were already known to apply. First, energy and momentum are always conserved. The total energy and momentum after an event must be the same as before the event. Second, there is the conservation of electric charge which says that the total charge of the outgoing particles must be equal to the total charge carried by the original particles. In our example of a K-meson and a proton coming together, the following reactions do occur: \begin{align} &\Kminus+\text{p}\to\text{p}+\Kminus+\pi^++\pi^-+\pi^0\notag \\ \label{Eq:III:11:38} \kern{-3em}\text{or}\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-5em}\text{or}}\\ &\Kminus+\text{p}\to\Sigma^-+\pi^+.\notag \end{align} We would never get:
\begin{align} &\Kminus+\text{p}\to\text{p}+\Kminus+\pi^+\notag\\ \label{Eq:III:11:39} \kern{-3em}\text{or}\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-5em}\text{or}}\\ &\Kminus+\text{p}\to\Lambda^0+\pi^+,\notag \end{align} because of the conservation of charge. It was also known that the number of baryons is conserved. The number of baryons out must be equal to the number of baryons in. For this law, an antiparticle of a baryon is counted as minus one baryon. This means that we can—and do—see \begin{align} &\Kminus+\text{p}\to\Lambda^0+\pi^0\notag \\ \label{Eq:III:11:40} \kern{-3em}\text{or}\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-5em}\text{or}}\\ &\Kminus+\text{p}\to\text{p}+\Kminus+\text{p}+\overline{\text{p}}\notag \end{align} (where $\overline{\text{p}}$ is the antiproton, which carries a negative charge). But we never see \begin{align} &\Kminus+\text{p}\to\Kminus+\pi^++\pi^0\notag \\ \label{Eq:III:11:41} \kern{-3em}\text{or}\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-5em}\text{or}}\\ &\Kminus+\text{p}\to\text{p}+\Kminus+\text{n}\notag \end{align} (even when there is plenty of energy), because baryons would not be conserved. These laws, however, do not explain the strange fact that the following reactions—which do not immediately appear to be especially different from some of those in (11.38) or (11.40)—are also never observed: \begin{align} &\Kminus+\text{p}\to\text{p}+\Kminus+\Kzero\notag \\ \kern{-3em}\text{or}\notag\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-5em}\text{or}}\notag\\ \label{Eq:III:11:42} &\Kminus+\text{p}\to\text{p}+\pi^-\\ \kern{-3em}\text{or}\notag\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-5em}\text{or}}\notag\\ &\Kminus+\text{p}\to\Lambda^0+\Kzero.\notag \end{align} The explanation is the conservation of strangeness. 
With each particle goes a number—its strangeness $S$—and there is a law that in any strong interaction, the total strangeness out must equal the total strangeness that went in. The proton and antiproton ($\text{p}$, $\overline{\text{p}}$), the neutron and antineutron ($\text{n}$, $\overline{\text{n}}$), and the $\pi$-mesons ($\pi^+$, $\pi^0$, $\pi^-$) all have the strangeness number zero; the $\Kplus$ and $\Kzero$ mesons have strangeness $+1$; the $\Kminus$ and $\Kzerobar$ (the anti-$\Kzero$), the $\Lambda^0$ and the $\Sigma$-particles ($+$, $0$, $-$) have strangeness $-1$. There is also a particle with strangeness $-2$—the $\Xi$-particle (capital “ksi”)—and perhaps others as yet unknown. We have made a list of these strangenesses in Table 11–4. Let’s see how the strangeness conservation works in some of the reactions we have written down. If we start with a $\Kminus$ and a proton, we have a total strangeness of $(-1+0)=-1$. The conservation of strangeness says that the strangeness of products after the reaction must also add up to $-1$. You see that that is so for the reactions of (11.38) and (11.40). But in the reactions of (11.42) the strangeness of the right-hand side is zero in each case. Such reactions do not conserve strangeness, and do not occur. Why? Nobody knows. Nobody knows any more than what we have just told you about this. Nature just works that way. Now let’s look at the following reaction: a $\pi^-$ hits a proton. You might, for instance, get a $\Lambda^0$ particle plus a neutral K-particle—two neutral particles. Now which neutral K do you get? Since the $\Lambda$-particle has a strangeness $-1$ and the $\pi$ and $\text{p}^+$ have a strangeness zero, and since this is a fast production reaction, the strangeness must not change. The K-particle must have strangeness $+1$—it must therefore be the $\Kzero$. The reaction is \begin{equation*} \pi^-+\text{p}\to\Lambda^0+\Kzero, \end{equation*} with \begin{equation*} S=0+0=-1++1\quad(\text{conserved}).
\end{equation*} If the $\Kzerobar$ were there instead of the $\Kzero$, the strangeness on the right would be $-2$—which nature does not permit, since the strangeness on the left side is zero. On the other hand, a $\Kzerobar$ can be produced in other reactions, such as \begin{gather*} \text{n}+\text{n}\to\text{n}+\overline{\text{p}}+\Kzerobar+\Kplus,\\ \\ S=0+0=0+0+-1++1 \end{gather*} or \begin{gather*} \Kminus+\text{p}\to\text{n}+\Kzerobar,\\ \\ S=-1+0=0+-1. \end{gather*} You may be thinking, “That’s all a lot of stuff, because how do you know whether it is a $\Kzerobar$ or a $\Kzero$? They look exactly the same. They are antiparticles of each other, so they have exactly the same mass, and both have zero electric charge. How do you distinguish them?” By the reactions they produce. For example, a $\Kzerobar$ can interact with matter to produce a $\Lambda$-particle, like this: \begin{equation*} \Kzerobar+\text{p}\to\Lambda^0+\pi^+, \end{equation*} but a $\Kzero$ cannot. There is no way a $\Kzero$ can produce a $\Lambda$-particle when it interacts with ordinary matter (protons and neutrons). So the experimental distinction between the $\Kzero$ and the $\Kzerobar$ would be that one of them will and one of them will not produce $\Lambda$'s. One of the predictions of the strangeness theory is then this—if, in an experiment with high-energy pions, a $\Lambda$-particle is produced with a neutral K-meson, then that neutral K-meson going into other pieces of matter will never produce a $\Lambda$. The experiment might run something like this. You send a beam of $\pi^-$-mesons into a large hydrogen bubble chamber. A $\pi^-$ track disappears, but somewhere else a pair of tracks appear (a proton and a $\pi^-$) indicating that a $\Lambda$-particle has disintegrated—see Fig. 11-5. Then you know that there is a $\Kzero$ somewhere which you cannot see. You can, however, figure out where it is going by using the conservation of momentum and energy.
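The bookkeeping above can be mimicked with a small hypothetical helper (our own construction, not from the text): assign each particle its charge, baryon number, and strangeness as in Table 11–4, and accept a strong reaction only if all three totals balance. The particle names and data layout below are illustrative choices.

```python
# (charge Q, baryon number B, strangeness S) for each particle, per Table 11-4
QBS = {
    'K+':  (+1, 0, +1), 'K0':    (0, 0, +1),
    'K-':  (-1, 0, -1), 'K0bar': (0, 0, -1),
    'p':   (+1, +1, 0), 'pbar':  (-1, -1, 0), 'n': (0, +1, 0),
    'pi+': (+1, 0, 0),  'pi0':   (0, 0, 0),   'pi-': (-1, 0, 0),
    'Lambda0': (0, +1, -1), 'Sigma-': (-1, +1, -1),
}

def totals(particles):
    """Sum (Q, B, S) over a list of particle names."""
    return tuple(sum(QBS[p][i] for p in particles) for i in range(3))

def allowed(lhs, rhs):
    """A strong reaction must conserve charge, baryon number, and strangeness."""
    return totals(lhs) == totals(rhs)

# The observed reactions of (11.38) pass all three checks:
assert allowed(['K-', 'p'], ['p', 'K-', 'pi+', 'pi-', 'pi0'])
assert allowed(['K-', 'p'], ['Sigma-', 'pi+'])

# The reactions of (11.42) balance charge and baryons, but not strangeness:
for rhs in (['p', 'K-', 'K0'], ['p', 'pi-'], ['Lambda0', 'K0']):
    assert not allowed(['K-', 'p'], rhs)

# And the production reaction that fixes which neutral K appears:
assert allowed(['pi-', 'p'], ['Lambda0', 'K0'])
assert not allowed(['pi-', 'p'], ['Lambda0', 'K0bar'])
```

The last two lines restate the argument in the text: a $\Lambda^0$ produced by a $\pi^-$ on a proton must be accompanied by a $\Kzero$, never a $\Kzerobar$.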
[It could reveal itself later by disintegrating into two charged particles, as shown in Fig. 11-5(a).] As the $\Kzero$ goes flying along, it may interact with one of the hydrogen nuclei (protons), producing perhaps some other particles. The prediction of the strangeness theory is that it will never produce a $\Lambda$-particle in a simple reaction like, say, \begin{equation*} \Kzero+\text{p}\to\Lambda^0+\pi^+, \end{equation*} although a $\Kzerobar$ can do just that. That is, in a bubble chamber a $\Kzerobar$ might produce the event sketched in Fig. 11-5(b)—in which the $\Lambda^0$ is seen because it decays—but a $\Kzero$ will not. That’s the first part of our story. That’s the conservation of strangeness. The conservation of strangeness is, however, not perfect. There are very slow disintegrations of the strange particles—decays taking a long time like $10^{-10}$ second in which the strangeness is not conserved. These are called the “weak” decays. For example, the $\Kzero$ disintegrates into a pair of $\pi$-mesons ($+$ and $-$) with a lifetime of $10^{-10}$ second. That was, in fact, the way K-particles were first seen. Notice that the decay reaction \begin{equation*} \Kzero\to\pi^++\pi^- \end{equation*} does not conserve strangeness, so it cannot go “fast” by the strong interaction; it can only go through the weak decay process. Now the $\Kzerobar$ also disintegrates in the same way—into a $\pi^+$ and a $\pi^-$—and also with the same lifetime \begin{equation*} \Kzerobar\to\pi^-+\pi^+. \end{equation*} Again we have a weak decay because it does not conserve strangeness. There is a principle that for any reaction there is the corresponding reaction with “matter” replaced by “antimatter” and vice versa. Since the $\Kzerobar$ is the antiparticle of the $\Kzero$, it should decay into the antiparticles of the $\pi^+$ and $\pi^-$, but the antiparticle of a $\pi^+$ is the $\pi^-$. (Or, if you prefer, vice versa.
It turns out that for the $\pi$-mesons it doesn’t matter which one you call “matter.”) So as a consequence of the weak decays, the $\Kzero$ and $\Kzerobar$ can go into the same final products. When “seen” through their decays—as in a bubble chamber—they look like the same particle. Only their strong interactions are different. At last we are ready to describe the work of Gell-Mann and Pais. They first noticed that since the $\Kzero$ and the $\Kzerobar$ can both turn into states of two $\pi$-mesons there must be some amplitude that a $\Kzero$ can turn into a $\Kzerobar$, and also that a $\Kzerobar$ can turn into a $\Kzero$. Writing the reactions as one does in chemistry, we would have \begin{equation} \label{Eq:III:11:43} \Kzero\rightleftharpoons\pi^-+\pi^+\rightleftharpoons\Kzerobar. \end{equation} These reactions imply that there is some amplitude per unit time, say $-i/\hbar$ times $\bracket{\Kzerobar}{\text{W}}{\Kzero}$, that a $\Kzero$ will turn into a $\Kzerobar$ through the weak interaction responsible for the decay into two $\pi$-mesons. And there is the corresponding amplitude $\bracket{\Kzero}{\text{W}}{\Kzerobar}$ for the reverse process. Because matter and antimatter behave in exactly the same way, these two amplitudes are numerically equal; we’ll call them both $A$: \begin{equation} \label{Eq:III:11:44} \bracket{\Kzerobar}{\text{W}}{\Kzero}= \bracket{\Kzero}{\text{W}}{\Kzerobar}=A. \end{equation} Now—said Gell-Mann and Pais—here is an interesting situation. What people have been calling two distinct states of the world—the $\Kzero$ and the $\Kzerobar$—should really be considered as one two-state system, because there is an amplitude to go from one state to the other. 
For a complete treatment, one would, of course, have to deal with more than two states, because there are also the states of $2\pi$'s, and so on; but since they were mainly interested in the relation of $\Kzero$ and $\Kzerobar$, they did not have to complicate things and could make the approximation of a two-state system. The other states were taken into account to the extent that their effects appeared implicitly in the amplitudes of Eq. (11.44). Accordingly, Gell-Mann and Pais analyzed the neutral particle as a two-state system. They began by choosing as their two base states the states $\ket{\Kzero}$ and $\ket{\Kzerobar}$. (From here on, the story goes very much as it did for the ammonia molecule.) Any state $\ket{\psi}$ of the neutral K-particle could then be described by giving the amplitudes that it was in either base state. We’ll call these amplitudes \begin{equation} \label{Eq:III:11:45} C_+=\braket{\Kzero}{\psi},\quad C_-=\braket{\Kzerobar}{\psi}. \end{equation} The next step was to write the Hamiltonian equations for this two-state system. If there were no coupling between the $\Kzero$ and the $\Kzerobar$, the equations would be simply \begin{equation} \begin{aligned} i\hbar\,\ddt{C_+}{t}&=E_0C_+,\\[2ex] i\hbar\,\ddt{C_-}{t}&=E_0C_-. \end{aligned} \label{Eq:III:11:46} \end{equation} But since there is the amplitude $\bracket{\Kzero}{\text{W}}{\Kzerobar}$ for the $\Kzerobar$ to turn into a $\Kzero$ there should be the additional term \begin{equation*} \bracket{\Kzero}{\text{W}}{\Kzerobar}C_-=AC_- \end{equation*} added to the right-hand side of the first equation. And similarly, the term $AC_+$ should be inserted in the equation for the rate of change of $C_-$. But that’s not all. When the two-pion effect is taken into account there is an additional amplitude for the $\Kzero$ to turn into itself through the process \begin{equation*} \Kzero\to\pi^-+\pi^+\to\Kzero. 
\end{equation*} The additional amplitude, which we would write $\bracket{\Kzero}{\text{W}}{\Kzero}$, is just equal to the amplitude $\bracket{\Kzerobar}{\text{W}}{\Kzero}$, since the amplitudes to go to and from a pair of $\pi$-mesons are identical for the $\Kzero$ and the $\Kzerobar$. If you wish, the argument can be written out in detail like this. First write \begin{equation*} \bracket{\Kzerobar}{\text{W}}{\Kzero}= \bracket{\Kzerobar}{\text{W}}{2\pi} \bracket{2\pi}{\text{W}}{\Kzero} \end{equation*} and \begin{equation*} \bracket{\Kzero}{\text{W}}{\Kzero}= \bracket{\Kzero}{\text{W}}{2\pi} \bracket{2\pi}{\text{W}}{\Kzero}. \end{equation*} Because of the symmetry of matter and antimatter \begin{equation*} \bracket{2\pi}{\text{W}}{\Kzero}= \bracket{2\pi}{\text{W}}{\Kzerobar}, \end{equation*} and also \begin{equation*} \bracket{\Kzero}{\text{W}}{2\pi}= \bracket{\Kzerobar}{\text{W}}{2\pi}. \end{equation*} It then follows that $\bracket{\Kzero}{\text{W}}{\Kzero}= \bracket{\Kzerobar}{\text{W}}{\Kzero}$, and also that $\bracket{\Kzerobar}{\text{W}}{\Kzero}= \bracket{\Kzero}{\text{W}}{\Kzerobar}$, as we said earlier. Anyway, there are the two additional amplitudes $\bracket{\Kzero}{\text{W}}{\Kzero}$ and $\bracket{\Kzerobar}{\text{W}}{\Kzerobar}$, both equal to $A$, which should be included in the Hamiltonian equations. The first gives a term $AC_+$ on the right-hand side of the equation for $dC_+/dt$, and the second gives a new term $AC_-$ in the equation for $dC_-/dt$. Reasoning this way, Gell-Mann and Pais concluded that the Hamiltonian equations for the $\Kzero\,\Kzerobar$ system should be \begin{equation} \begin{aligned} i\hbar\,\ddt{C_+}{t}&=E_0C_++AC_-+AC_+,\\[2ex] i\hbar\,\ddt{C_-}{t}&=E_0C_-+AC_++AC_-.
\end{aligned} \label{Eq:III:11:47} \end{equation} We must now correct something we have said in earlier chapters: that two amplitudes like $\bracket{\Kzero}{\text{W}}{\Kzerobar}$ and $\bracket{\Kzerobar}{\text{W}}{\Kzero}$, which are the reverse of each other, are always complex conjugates. That was true when we were talking about particles that did not decay. But if particles can decay—and can, therefore, become “lost”—the two amplitudes are not necessarily complex conjugates. So the equality of (11.44) does not mean that the amplitudes are real numbers; they are in fact complex numbers. The coefficient $A$ is, therefore, complex; and we can’t just incorporate it into the energy $E_0$. Having played often with electron spins and such, our heroes knew that the Hamiltonian equations of (11.47) meant that there was another pair of base states which could also be used to represent the K-particle system and which would have especially simple behaviors. They said, “Let’s take the sum and difference of these two equations. Also, let’s measure all our energies from $E_0$, and use units for energy and time that make $\hbar=1$.” (That’s what modern theoretical physicists always do. It doesn’t change the physics but makes the equations take on a simple form.) Their result:
\begin{equation} \begin{aligned} i\,\ddt{}{t}\,(C_++C_-)&=2A(C_++C_-),\\[2ex] i\,\ddt{}{t}\,(C_+-C_-)&=0. \end{aligned} \label{Eq:III:11:48} \end{equation}
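A quick numerical sketch (with illustrative numbers, not from the text) confirms the decoupling: the Hamiltonian matrix read off from Eq. (11.47) has the sum and difference combinations as its stationary states, with "energies" $E_0+2A$ and $E_0$, for any complex $A$.

```python
import numpy as np

# Illustrative values; A is complex because the K-particles can decay,
# so this Hamiltonian need not be Hermitian.
E0 = 0.0
A = 0.3 - 0.1j

# The Hamiltonian matrix of Eq. (11.47), in the (K0, K0bar) basis
H = np.array([[E0 + A, A],
              [A, E0 + A]])

# The sum and difference states |K1> and |K2> of Eq. (11.49)
K1 = np.array([1.0, 1.0]) / np.sqrt(2)
K2 = np.array([1.0, -1.0]) / np.sqrt(2)

# They are stationary: H|K1> = (E0 + 2A)|K1> and H|K2> = E0|K2>,
# which is exactly the decoupling of Eq. (11.48)
assert np.allclose(H @ K1, (E0 + 2 * A) * K1)
assert np.allclose(H @ K2, E0 * K2)
```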
It is apparent that the combinations of amplitudes $(C_++C_-)$ and $(C_+-C_-)$ act independently of each other (corresponding, of course, to the stationary states we have been studying earlier). So they concluded that it would be more convenient to use a different representation for the K-particle. They defined the two states
\begin{equation} \begin{aligned} \ket{\text{K}_1}&=\frac{1}{\sqrt{2}}\, (\ket{\Kzero}+\ket{\Kzerobar}),\\[2ex] \ket{\text{K}_2}&=\frac{1}{\sqrt{2}}\, (\ket{\Kzero}-\ket{\Kzerobar}). \end{aligned} \label{Eq:III:11:49} \end{equation} They said that instead of thinking of the $\Kzero$ and $\Kzerobar$ mesons, we can equally well think in terms of the two “particles” (that is, “states”) K$_1$ and K$_2$. (These correspond, of course, to the states we have usually called $\ketsl{\slI}$ and $\ketsl{\slII}$. We are not using our old notation because we want now to follow the notation of the original authors—and the one you will see in physics seminars.) Now Gell-Mann and Pais didn’t do all this just to get different names for the particles—there is also some strange new physics in it. Suppose that $C_1$ and $C_2$ are the amplitudes that some state $\ket{\psi}$ will be either a K$_1$ or a K$_2$ meson: \begin{equation*} C_1=\braket{\text{K}_1}{\psi},\quad C_2=\braket{\text{K}_2}{\psi}. \end{equation*} From the equations of (11.49),
\begin{equation} \begin{aligned} C_1&=\frac{1}{\sqrt{2}}\,(C_++C_-),\\[2ex] C_2&=\frac{1}{\sqrt{2}}\,(C_+-C_-). \end{aligned} \label{Eq:III:11:50} \end{equation} Then the Eqs. (11.48) become \begin{equation} \label{Eq:III:11:51} i\,\ddt{C_1}{t}=2AC_1,\quad i\,\ddt{C_2}{t}=0. \end{equation} The solutions are \begin{equation} \label{Eq:III:11:52} C_1(t)=C_1(0)e^{-i2At},\quad C_2(t)=C_2(0), \end{equation} where, of course, $C_1(0)$ and $C_2(0)$ are the amplitudes at $t=0$. These equations say that if a neutral K-particle starts out in the state $\ket{\text{K}_1}$ at $t=0$ [then $C_1(0)=1$ and $C_2(0)=0$], the amplitudes at the time $t$ are \begin{equation*} C_1(t)=e^{-i2At},\quad C_2(t)=0. \end{equation*} Remembering that $A$ is a complex number, it is convenient to take $2A=\alpha-i\beta$. (Since the imaginary part of $2A$ turns out to be negative, we write it as minus $i\beta$.) With this substitution, $C_1(t)$ reads \begin{equation} \label{Eq:III:11:53} C_1(t)=C_1(0)e^{-\beta t}e^{-i\alpha t}. \end{equation} The probability of finding a K$_1$ particle at $t$ is the absolute square of this amplitude, which is $e^{-2\beta t}$. And, from Eqs. (11.52), the probability of finding the K$_2$ state at any time is zero. That means that if you make a K-particle in the state $\ket{\text{K}_1}$, the probability of finding it in the same state decreases exponentially with time—but you will never find it in state $\ket{\text{K}_2}$. Where does it go? It disintegrates into two $\pi$-mesons with the mean life $\tau=1/2\beta$ which is, experimentally, $10^{-10}$ sec. We made provisions for that when we said that $A$ was complex. On the other hand, Eq. (11.52) says that if we make a K-particle completely in the K$_2$ state, it stays that way forever. Well, that’s not really true. It is observed experimentally to disintegrate into three $\pi$-mesons, but $600$ times slower than the two-pion decay we have described. 
So there are some other small terms we have left out in our approximation. But so long as we are considering only the two-pion decay, the K$_2$ lasts “forever.” Now to finish the story of Gell-Mann and Pais. They went on to consider what happens when a K-particle is produced with a $\Lambda^0$ particle in a strong interaction. Since it must then have a strangeness of $+1$, it must be produced in the $\Kzero$ state. So at $t=0$ it is neither a K$_1$ nor a K$_2$ but a mixture. The initial conditions are \begin{equation*} C_+(0)=1,\quad C_-(0)=0. \end{equation*} But that means—from Eq. (11.50)—that \begin{equation*} C_1(0)=\frac{1}{\sqrt{2}},\quad C_2(0)=\frac{1}{\sqrt{2}}, \end{equation*} and—from Eqs. (11.52) and (11.53)—that \begin{equation} \label{Eq:III:11:54} C_1(t)=\frac{1}{\sqrt{2}}\, e^{-\beta t}e^{-i\alpha t},\quad C_2(t)=\frac{1}{\sqrt{2}}. \end{equation} Now remember that $\Kzero$ and $\Kzerobar$ are each linear combinations of K$_1$ and K$_2$. In Eqs. (11.54) the amplitudes have been chosen so that at $t=0$ the $\Kzerobar$ parts cancel each other out by interference, leaving only a $\Kzero$ state. But the $\ket{\text{K}_1}$ state changes with time, and the $\ket{\text{K}_2}$ state does not. After $t=0$ the interference of $C_1$ and $C_2$ will give finite amplitudes for both $\Kzero$ and $\Kzerobar$. What does all this mean? Let’s go back and think of the experiment we sketched in Fig. 11-5. A $\pi^-$ meson has produced a $\Lambda^0$ particle and a $\Kzero$ meson which is tooting along through the hydrogen in the chamber. As it goes along, there is some small but uniform chance that it will collide with a hydrogen nucleus. At first, we thought that strangeness conservation would prevent the K-particle from making a $\Lambda^0$ in such an interaction. Now, however, we see that that is not right. For although our K-particle starts out as a $\Kzero$—which cannot make a $\Lambda^0$—it does not stay this way. 
After a while, there is some amplitude that it will have flipped to the $\Kzerobar$ state. We can, therefore, sometimes expect to see a $\Lambda^0$ produced along the K-particle track. The chance of this happening is given by the amplitude $C_-$, which we can [by using Eq. (11.50) backwards] relate to $C_1$ and $C_2$. The relation is \begin{equation} \label{Eq:III:11:55} C_-=\frac{1}{\sqrt{2}}\,(C_1-C_2)= \tfrac{1}{2}(e^{-\beta t}e^{-i\alpha t}-1). \end{equation} As our K-particle goes along, the probability that it will “act like” a $\Kzerobar$ is equal to $\abs{C_-}^2$, which is \begin{equation} \label{Eq:III:11:56} \abs{C_-}^2=\tfrac{1}{4} (1+e^{-2\beta t}-2e^{-\beta t}\cos\alpha t). \end{equation} A complicated and strange result! This, then, is the remarkable prediction of Gell-Mann and Pais: when a $\Kzero$ is produced, the chance that it will turn into a $\Kzerobar$—as it can demonstrate by being able to produce a $\Lambda^0$—varies with time according to Eq. (11.56). This prediction came from using only sheer logic and the basic principles of the quantum mechanics—with no knowledge at all of the inner workings of the K-particle. Since nobody knows anything about the inner machinery, that is as far as Gell-Mann and Pais could go. They could not give any theoretical values for $\alpha$ and $\beta$. And nobody has been able to do so to this date. They were able to give a value of $\beta$ obtained from the experimentally observed rate of decay into two $\pi$'s ($2\beta=10^{10}$ sec$^{-1}$), but they could say nothing about $\alpha$. We have plotted the function of Eq. (11.56) for two values of $\alpha$ in Fig. 11-6. You can see that the form depends very much on the ratio of $\alpha$ to $\beta$. There is no $\Kzerobar$ probability at first; then it builds up. If $\alpha$ is large, the probability would have large oscillations. If $\alpha$ is small, there will be little or no oscillation—the probability will just rise smoothly to $1/4$. 
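Here is a short sketch (with made-up values of $\alpha$ and $\beta$) checking that the amplitude of Eq. (11.55) has exactly the absolute square given in Eq. (11.56), and that the probability starts at zero and settles near $1/4$, as the curves of Fig. 11-6 show.

```python
import numpy as np

alpha, beta = 4.0, 1.0              # illustrative values, units with hbar = 1
t = np.linspace(0.0, 5.0, 501)

# Amplitudes of Eq. (11.54), for a particle born as a pure K0
C1 = (1 / np.sqrt(2)) * np.exp(-beta * t) * np.exp(-1j * alpha * t)
C2 = (1 / np.sqrt(2)) * np.ones_like(t)

# Eq. (11.55): the K0bar amplitude from interference of C1 and C2
C_minus = (C1 - C2) / np.sqrt(2)
prob = np.abs(C_minus) ** 2

# Eq. (11.56), the closed form of |C_-|^2
closed_form = 0.25 * (1 + np.exp(-2 * beta * t)
                      - 2 * np.exp(-beta * t) * np.cos(alpha * t))
assert np.allclose(prob, closed_form)

# No K0bar probability at first; it rises toward 1/4 at large t
assert abs(prob[0]) < 1e-12
assert abs(prob[-1] - 0.25) < 0.02
```

Changing the ratio of `alpha` to `beta` reproduces the qualitative difference between the two curves described in the text: large oscillations for large $\alpha$, a smooth rise to $1/4$ for small $\alpha$.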
Now, typically, the K-particle will be travelling at a constant speed near the speed of light. The curves of Fig. 11-6 then also represent the probability along the track of observing a $\Kzerobar$—with typical distances of several centimeters. You can see why this prediction is so remarkably peculiar. You produce a single particle and instead of just disintegrating, it does something else. Sometimes it disintegrates, and other times it turns into a different kind of a particle. Its characteristic probability of producing an effect varies in a strange way as it goes along. There is nothing else quite like it in nature. And this most remarkable prediction was made solely by arguments about the interference of amplitudes. If there is any place where we have a chance to test the main principles of quantum mechanics in the purest way—does the superposition of amplitudes work or doesn’t it?—this is it. In spite of the fact that this effect has been predicted now for several years, there is no experimental determination that is very clear. There are some rough results which indicate that the $\alpha$ is not zero, and that the effect really occurs—they indicate that $\alpha$ is between $2\beta$ and $4\beta$. That’s all there is, experimentally. It would be very beautiful to check out the curve exactly to see if the principle of superposition really still works in such a mysterious world as that of the strange particles—with unknown reasons for the decays, and unknown reasons for the strangeness. The analysis we have just described is very characteristic of the way quantum mechanics is being used today in the search for an understanding of the strange particles. All the complicated theories that you may hear about are no more and no less than this kind of elementary hocus-pocus using the principles of superposition and other principles of quantum mechanics of that level. 
Some people claim that they have theories by which it is possible to calculate the $\beta$ and $\alpha$, or at least the $\alpha$ given the $\beta$, but these theories are completely useless. For instance, the theory that predicts the value of $\alpha$, given the $\beta$, tells us that the value of $\alpha$ should be infinite. The set of equations with which they originally start involves two $\pi$-mesons and then goes from the two $\pi$’s back to a $\Kzero$, and so on. When it’s all worked out, it does indeed produce a pair of equations like the ones we have here; but because there are an infinite number of states of two $\pi$'s, depending on their momenta, integrating over all the possibilities gives an $\alpha$ which is infinite. But nature’s $\alpha$ is not infinite. So the dynamical theories are wrong. It is really quite remarkable that the phenomena which can be predicted at all in the world of the strange particles come from the principles of quantum mechanics at the level at which you are learning them now.
11–6 Generalization to $\boldsymbol{N}$-state systems

We have finished with all the two-state systems we wanted to talk about. In the following chapters we will go on to study systems with more states. The extension to $N$-state systems of the ideas we have worked out for two states is pretty straightforward. It goes like this. If a system has $N$ distinct states, we can represent any state $\ket{\psi(t)}$ as a linear combination of any set of base states $\ket{i}$, where $i=1$, $2$, $3$, $\ldots$, $N$; \begin{equation} \label{Eq:III:11:57} \ket{\psi(t)}=\sum_{\text{all $i$}}\ket{i}C_i(t). \end{equation} The coefficients $C_i(t)$ are the amplitudes $\braket{i}{\psi(t)}$. The behavior of the amplitudes $C_i$ with time is governed by the equations \begin{equation} \label{Eq:III:11:58} i\hbar\,\ddt{C_i(t)}{t}=\sum_jH_{ij}C_j, \end{equation} where the energy matrix $H_{ij}$ describes the physics of the problem. It looks the same as for two states. Only now, both $i$ and $j$ must range over all $N$ base states, and the energy matrix $H_{ij}$—or, if you prefer, the Hamiltonian—is an $N$ by $N$ matrix with $N^2$ numbers. As before, $H_{ij}\cconj=H_{ji}$—so long as particles are conserved—and the diagonal elements $H_{ii}$ are real numbers. We have found a general solution for the $C$’s of a two-state system when the energy matrix is constant (doesn’t depend on $t$). It is also not difficult to solve Eq. (11.58) for an $N$-state system when $H$ is not time dependent. Again, we begin by looking for a possible solution in which the amplitudes all have the same time dependence. We try \begin{equation} \label{Eq:III:11:59} C_i=a_ie^{-(i/\hbar)Et}. \end{equation} When these $C_i$’s are substituted into (11.58), the derivatives $dC_i(t)/dt$ become just $(-i/\hbar)EC_i$. Canceling the common exponential factor from all terms, we get \begin{equation} \label{Eq:III:11:60} Ea_i=\sum_jH_{ij}a_j.
\end{equation} This is a set of $N$ linear algebraic equations for the $N$ unknowns $a_1$, $a_2$, $\ldots$, $a_N$, and there is a solution only if you are lucky—only if the determinant of the coefficients of all the $a$'s is zero. But it’s not necessary to be that sophisticated; you can just start to solve the equations any way you want, and you will find that they can be solved only for certain values of $E$. (Remember that $E$ is the only adjustable thing we have in the equations.) If you want to be formal, however, you can write Eq. (11.60) as \begin{equation} \label{Eq:III:11:61} \sum_j(H_{ij}-\delta_{ij}E)a_j=0. \end{equation} Then you can use the rule—if you know it—that these equations will have a solution only for those values of $E$ for which \begin{equation} \label{Eq:III:11:62} \Det\,(H_{ij}-\delta_{ij}E)=0. \end{equation} Each term of the determinant is just $H_{ij}$, except that $E$ is subtracted from every diagonal element. That is, (11.62) means just \begin{equation} \label{Eq:III:11:63} \Det \!\! % ebook remove \begin{pmatrix} H_{11}\!-\!E & H_{12} & H_{13} & \dots\\[1ex] H_{21} & H_{22}\!-\!E & H_{23} & \dots\\[1ex] H_{31} & H_{32} & H_{33}\!-\!E & \dots\\[1ex] \dots & \dots & \dots & \dots \end{pmatrix} \!\!=0.% ebook remove % ebook insert: =0. \end{equation} This is, of course, just a special way of writing an algebraic equation for $E$ which is the sum of a bunch of products of all the terms taken a certain way. These products will give all the powers of $E$ up to $E^N$. So we have an $N$th order polynomial equal to zero, and there are, in general, $N$ roots. (We must remember, however, that some of them may be multiple roots—meaning that two or more roots are equal.) Let’s call the $N$ roots \begin{equation} \label{Eq:III:11:64} E_{\slI},E_{\slII},E_{\slIII},\dotsc,E_{\bldn},\dotsc, E_{\bldN}. \end{equation} (We will use $\bldn$ to represent the $n$th Roman numeral, so that $\bldn$ takes on the values $\slI$, $\slII$, $\ldots$, $\bldN$.) 
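As a sketch of this recipe with a made-up $3\times3$ Hermitian energy matrix: the allowed energies are the roots of Eq. (11.62), which are just the eigenvalues of the matrix, and each root comes with its own set of $a_i$'s satisfying Eq. (11.60).

```python
import numpy as np

# A made-up 3-state Hermitian energy matrix (illustrative only)
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns the N roots E_n in ascending order, together with the
# column vectors a_i(n) belonging to each root
energies, a = np.linalg.eigh(H)

# Each pair satisfies Eq. (11.60): E a_i = sum_j H_ij a_j
for n in range(len(energies)):
    assert np.allclose(H @ a[:, n], energies[n] * a[:, n])

# And the determinant of Eq. (11.62) vanishes at each allowed energy
for E in energies:
    assert abs(np.linalg.det(H - E * np.eye(3))) < 1e-9

# For this particular H the three roots work out to 2-sqrt(2), 2, 2+sqrt(2)
assert np.allclose(energies, [2 - np.sqrt(2), 2.0, 2 + np.sqrt(2)])
```

The characteristic polynomial here is cubic, so there are three roots, in agreement with the counting argument in the text ($N$ roots for an $N$-state system).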
It may be that some of these energies are equal—say $E_{\slII}=E_{\slIII}$—but we will still choose to call them by different names. The equations (11.60)—or (11.61)—have one solution for each value of $E$. If you put any one of the $E$'s—say $E_{\bldn}$—into (11.60) and solve for the $a_i$, you get a set which belongs to the energy $E_{\bldn}$. We will call this set $a_i(\bldn)$. Using these $a_i(\bldn)$ in Eq. (11.59), we have the amplitudes $C_i(\bldn)$ that the definite energy states are in the base state $\ket{i}$. Letting $\ket{\bldn}$ stand for the state vector of the definite energy state at $t=0$, we can write \begin{equation*} C_i(\bldn)=\braket{i}{\bldn}e^{-(i/\hbar)E_{\bldn}t}, \end{equation*} with \begin{equation} \label{Eq:III:11:65} \braket{i}{\bldn}=a_i(\bldn). \end{equation} The complete definite energy state $\ket{\psi_{\bldn}(t)}$ can then be written as \begin{equation*} \ket{\psi_{\bldn}(t)}=\sum_i\ket{i}a_i(\bldn)e^{-(i/\hbar)E_{\bldn}t}, \end{equation*} or \begin{equation} \label{Eq:III:11:66} \ket{\psi_{\bldn}(t)}=\ket{\bldn}e^{-(i/\hbar)E_{\bldn}t}. \end{equation} The state vectors $\ket{\bldn}$ describe the configuration of the definite energy states, but have the time dependence factored out. Then they are constant vectors which can be used as a new base set if we wish. Each of the states $\ket{\bldn}$ has the property—as you can easily show—that when operated on by the Hamiltonian operator $\Hop$ it gives just $E_{\bldn}$ times the same state: \begin{equation} \label{Eq:III:11:67} \Hop\,\ket{\bldn}=E_{\bldn}\,\ket{\bldn}. \end{equation} The energy $E_{\bldn}$ is, then, a number which is a characteristic of the Hamiltonian operator $\Hop$. As we have seen, a Hamiltonian will, in general, have several characteristic energies. In the mathematician’s world these would be called the “characteristic values” of the matrix $H_{ij}$. Physicists usually call them the “eigenvalues” of $\Hop$. 
(“Eigen” is the German word for “characteristic” or “proper.”) With each eigenvalue of $\Hop$—in other words, for each energy—there is the state of definite energy, which we have called the “stationary state.” Physicists usually call the states $\ket{\bldn}$ “the eigenstates of $\Hop$.” Each eigenstate corresponds to a particular eigenvalue $E_{\bldn}$. Now, generally, the states $\ket{\bldn}$—of which there are $N$—can also be used as a base set. For this to be true, all of the states must be orthogonal, meaning that for any two of them, say $\ket{\bldn}$ and $\ket{\bldm}$, \begin{equation} \label{Eq:III:11:68} \braket{\bldn}{\bldm}=0. \end{equation} This will be true automatically if all the energies are different. Also, we can multiply all the $a_i(\bldn)$ by a suitable factor so that all the states are normalized—by which we mean that \begin{equation} \label{Eq:III:11:69} \braket{\bldn}{\bldn}=1 \end{equation} for all $\bldn$. When it happens that Eq. (11.63) accidentally has two (or more) roots with the same energy, there are some minor complications. First, there are still two different sets of $a_i$'s which go with the two equal energies, but the states they give may not be orthogonal. Suppose you go through the normal procedure and find two stationary states with equal energies—let’s call them $\ket{\mu}$ and $\ket{\nu}$. Then it will not necessarily be so that they are orthogonal—if you are unlucky, \begin{equation*} \braket{\mu}{\nu}\neq0. \end{equation*} It is, however, always true that you can cook up two new states, which we will call $\ket{\mu'}$ and $\ket{\nu'}$, that have the same energies and are also orthogonal, so that \begin{equation} \label{Eq:III:11:70} \braket{\mu'}{\nu'}=0. \end{equation} You can do this by making $\ket{\mu'}$ and $\ket{\nu'}$ a suitable linear combination of $\ket{\mu}$ and $\ket{\nu}$, with the coefficients chosen to make it come out so that Eq. (11.70) is true. It is always convenient to do this. 
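The "cooking up" of two orthogonal states from a degenerate pair is just the Gram-Schmidt construction. A small sketch (Python with numpy assumed; $\ket{\mu}$ and $\ket{\nu}$ are invented, non-orthogonal vectors standing in for two states of equal energy):

```python
import numpy as np

# Two hypothetical degenerate states |mu>, |nu>: normalized but not orthogonal.
mu = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
nu = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
print(np.vdot(mu, nu))                    # 0.5 (up to rounding): we are "unlucky"

# Any linear combination of degenerate states has the same energy, so we may
# take |mu'> = |mu> and subtract from |nu> its component along |mu'>:
mu_p = mu
nu_p = nu - np.vdot(mu_p, nu) * mu_p
nu_p = nu_p / np.linalg.norm(nu_p)        # renormalize

print(np.vdot(mu_p, nu_p))                # ~0: Eq. (11.70) is satisfied
```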
We will generally assume that this has been done so that we can always assume that our proper energy states $\ket{\bldn}$ are all orthogonal. We would like, for fun, to prove that when two of the stationary states have different energies they are indeed orthogonal. For the state $\ket{\bldn}$ with the energy $E_{\bldn}$, we have that \begin{equation} \label{Eq:III:11:71} \Hop\,\ket{\bldn}=E_{\bldn}\,\ket{\bldn}. \end{equation} This operator equation really means that there is an equation between numbers. Filling the missing parts, it means the same as \begin{equation} \label{Eq:III:11:72} \sum_j\bracket{i}{\Hop}{j}\braket{j}{\bldn}= E_{\bldn}\braket{i}{\bldn}. \end{equation} If we take the complex conjugate of this equation, we get \begin{equation} \label{Eq:III:11:73} \sum_j\bracket{i}{\Hop}{j}\cconj\braket{j}{\bldn}\cconj= E_{\bldn}\cconj\braket{i}{\bldn}\cconj. \end{equation} Remember now that the complex conjugate of an amplitude is the reverse amplitude, so (11.73) can be rewritten as \begin{equation} \label{Eq:III:11:74} \sum_j\braket{\bldn}{j}\bracket{j}{\Hop}{i}= E_{\bldn}\cconj\braket{\bldn}{i}. \end{equation} Since this equation is valid for any $i$, its “short form” is \begin{equation} \label{Eq:III:11:75} \bra{\bldn}\,\Hop=E_{\bldn}\cconj\bra{\bldn}, \end{equation} which is called the adjoint to Eq. (11.71). Now we can easily prove that $E_{\bldn}$ is a real number. We multiply Eq. (11.71) by $\bra{\bldn}$ to get \begin{equation} \label{Eq:III:11:76} \bracket{\bldn}{\Hop}{\bldn}=E_{\bldn}, \end{equation} since $\braket{\bldn}{\bldn}=1$. Then we multiply Eq. (11.75) on the right by $\ket{\bldn}$ to get \begin{equation} \label{Eq:III:11:77} \bracket{\bldn}{\Hop}{\bldn}=E_{\bldn}\cconj. \end{equation} Comparing (11.76) with (11.77) it is clear that \begin{equation} \label{Eq:III:11:78} E_{\bldn}=E_{\bldn}\cconj, \end{equation} which means that $E_{\bldn}$ is real. We can erase the star on $E_{\bldn}$ in Eq. (11.75). 
Finally we are ready to show that the different energy states are orthogonal. Let $\ket{\bldn}$ and $\ket{\bldm}$ be any two of the definite energy base states. Using Eq. (11.75) for the state $\bldm$, and multiplying it by $\ket{\bldn}$, we get that \begin{equation*} \bracket{\bldm}{\Hop}{\bldn}=E_{\bldm}\braket{\bldm}{\bldn}. \end{equation*} But if we multiply (11.71) by $\bra{\bldm}$, we get \begin{equation*} \bracket{\bldm}{\Hop}{\bldn}=E_{\bldn}\braket{\bldm}{\bldn}. \end{equation*} Since the left sides of these two equations are equal, the right sides are, also: \begin{equation} \label{Eq:III:11:79} E_{\bldm}\braket{\bldm}{\bldn}=E_{\bldn}\braket{\bldm}{\bldn}. \end{equation} If $E_{\bldm}=E_{\bldn}$ the equation does not tell us anything. But if the energies of the two states $\ket{\bldm}$ and $\ket{\bldn}$ are different ($E_{\bldm}\neq E_{\bldn}$), Eq. (11.79) says that $\braket{\bldm}{\bldn}$ must be zero, as we wanted to prove. The two states are necessarily orthogonal so long as $E_{\bldn}$ and $E_{\bldm}$ are numerically different. |
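Both results of this section—real eigenvalues, and orthogonality for different energies—can be spot-checked numerically. A sketch (Python with numpy assumed; the Hermitian matrix is random, and we deliberately use the general eigenvalue routine so that nothing is assumed in advance):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (M + M.conj().T) / 2                  # enforce H_ij = H_ji*

E, a = np.linalg.eig(H)                   # columns a[:, n] are the a_i(n)

# The eigenvalues come out real, as Eq. (11.78) says.
print(np.max(np.abs(E.imag)))             # ~0

# And states with different energies are orthogonal, as Eq. (11.79) says.
for m in range(N):
    for n in range(m + 1, N):
        if abs(E[m] - E[n]) > 1e-9:
            assert abs(np.vdot(a[:, m], a[:, n])) < 1e-8
print("orthogonal")
```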
|
3 | 12 | The Hyperfine Splitting in Hydrogen | 1 | Base states for a system with two spin one-half particles | In this chapter we take up the “hyperfine splitting” of hydrogen, because it is a physically interesting example of what we can already do with quantum mechanics. It’s an example with more than two states, and it will be illustrative of the methods of quantum mechanics as applied to slightly more complicated problems. It is enough more complicated that once you see how this one is handled you can get immediately the generalization to all kinds of problems. As you know, the hydrogen atom consists of an electron sitting in the neighborhood of the proton, where it can exist in any one of a number of discrete energy states in each one of which the pattern of motion of the electron is different. The first excited state, for example, lies $3/4$ of a Rydberg, or about $10$ electron volts, above the ground state. But even the so-called ground state of hydrogen is not really a single, definite-energy state, because of the spins of the electron and the proton. These spins are responsible for the “hyperfine structure” in the energy levels, which splits all the energy levels into several nearly equal levels. The electron can have its spin either “up” or “down,” and the proton can also have its spin either “up” or “down.” There are, therefore, four possible spin states for every dynamical condition of the atom. That is, when people say “the ground state” of hydrogen, they really mean the “four ground states,” and not just the very lowest state. The four spin states do not all have exactly the same energy; there are slight shifts from the energies we would expect with no spins. The shifts are, however, much, much smaller than the $10$ electron volts or so from the ground state to the next state above. As a consequence, each dynamical state has its energy split into a set of very close energy levels—the so-called hyperfine splitting. 
The energy differences among the four spin states are what we want to calculate in this chapter. The hyperfine splitting is due to the interaction of the magnetic moments of the electron and proton, which gives a slightly different magnetic energy for each spin state. These energy shifts are only about ten-millionths of an electron volt—really very small compared with $10$ electron volts! It is because of this large gap that we can think about the ground state of hydrogen as a “four-state” system, without worrying about the fact that there are really many more states at higher energies. We are going to limit ourselves here to a study of the hyperfine structure of the ground state of the hydrogen atom. For our purposes we are not interested in any of the details about the positions of the electron and proton because that has all been worked out by the atom so to speak—it has worked itself out by getting into the ground state. We need know only that we have an electron and proton in the neighborhood of each other with some definite spatial relationship. In addition, they can have various different relative orientations of their spins. It is only the effect of the spins that we want to look into. The first question we have to answer is: What are the base states for the system? Now the question has been put incorrectly. There is no such thing as “the” base states, because, of course, the set of base states you may choose is not unique. New sets can always be made out of linear combinations of the old. There are always many choices for the base states, and among them, any choice is equally legitimate. So the question is not what is the base set, but what could a base set be? We can choose any one we wish for our own convenience. It is usually best to start with a base set which is physically the clearest. It may not be the solution to any problem, or may not have any direct importance, but it will generally make it easier to understand what is going on. 
We choose the following four base states: \begin{align*} &\textit{State 1:}\;\small \text{The electron and proton are both spin “up.”}\\[3pt] &\textit{State 2:}\;\small \text{The electron is “up” and the proton is “down.”}\\[3pt] &\textit{State 3:}\;\small \text{The electron is “down” and the proton is “up.”}\\[3pt] &\textit{State 4:}\;\small \text{The electron and proton are both “down.”} \end{align*} We need a handy notation for these four states, so we’ll represent them this way: \begin{equation} \begin{aligned} &\textit{State 1:}\;\small \ket{++};\;\text{electron}\;\textit{up},\;\text{proton}\;\textit{up}\\ &\textit{State 2:}\;\small \ket{+-};\;\text{electron}\;\textit{up},\;\text{proton}\;\textit{down}\\ &\textit{State 3:}\;\small \ket{-+};\;\text{electron}\;\textit{down},\;\text{proton}\;\textit{up}\\ &\textit{State 4:}\;\small \ket{--};\;\text{electron}\;\textit{down},\;\text{proton}\;\textit{down} \end{aligned} \label{Eq:III:12:1} \end{equation} You will have to remember that the first plus or minus sign refers to the electron and the second, to the proton. For handy reference, we’ve also summarized the notation in Fig. 12-1. Sometimes it will also be convenient to call these states $\ketsl{\slOne}$, $\ketsl{\slTwo}$, $\ketsl{\slThree}$, and $\ketsl{\slFour}$. You may say, “But the particles interact, and maybe these aren’t the right base states. It sounds as though you are considering the two particles independently.” Yes, indeed! The interaction raises the problem: what is the Hamiltonian for the system, but the interaction is not involved in the question of how to describe the system. What we choose for the base states has nothing to do with what happens next. It may be that the atom cannot ever stay in one of these base states, even if it is started that way. That’s another question. That’s the question: How do the amplitudes change with time in a particular (fixed) base? 
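As a bookkeeping aid (Python with numpy assumed; not part of the lecture), the four base states of (12.1) can be written as unit column vectors, with the electron factor first:

```python
import numpy as np

up = np.array([1.0, 0.0])   # spin "up",   |+>
dn = np.array([0.0, 1.0])   # spin "down", |->

# States 1-4 of Eq. (12.1); the first factor is the electron, the second the proton.
basis = [np.kron(up, up),   # |++>
         np.kron(up, dn),   # |+->
         np.kron(dn, up),   # |-+>
         np.kron(dn, dn)]   # |-->

B = np.array(basis)
print(B @ B.T)              # 4x4 identity: the base states are orthonormal
```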
In choosing the base states, we are just choosing the “unit vectors” for our description. While we’re on the subject, let’s look at the general problem of finding a set of base states when there is more than one particle. You know the base states for a single particle. An electron, for example, is completely described in real life—not in our simplified cases, but in real life—by giving the amplitudes to be in each of the following states: \begin{equation*} \ket{\text{electron “up” with momentum $\FLPp$}} \end{equation*} or \begin{equation*} \ket{\text{electron “down” with momentum $\FLPp$}}. \end{equation*} There are really two infinite sets of states, one state for each value of $\FLPp$. That is to say that an electron state $\ket{\psi}$ is completely described if you know all the amplitudes \begin{equation*} \braket{+,\FLPp}{\psi}\quad \text{and}\quad \braket{-,\FLPp}{\psi}, \end{equation*} where the $+$ and $-$ represent the components of angular momentum along some axis—usually the $z$-axis—and $\FLPp$ is the vector momentum. There must, therefore, be two amplitudes for every possible momentum (a multi-infinite set of base states). That is all there is to describing a single particle. When there is more than one particle, the base states can be written in a similar way. For instance, if there were an electron and a proton in a more complicated situation than we are considering, the base states could be of the following kind: \begin{align*} |\,&\text{electron spin “up” with momentum $\FLPp_1$},\\[.5ex] &\text{proton spin “down” with momentum $\FLPp_2$}\rangle. \end{align*} And so on for other spin combinations. If there are more than two particles—same idea. So you see that to write down the possible base states is really very easy. The only problem is, what is the Hamiltonian? For our study of the ground state of hydrogen we don’t need to use the full sets of base states for the various momenta. 
We are specifying particular momentum states for the proton and electron when we say “the ground state.” The details of the configuration—the amplitudes for all the momentum base states—can be calculated, but that is another problem. Now we are concerned only with the effects of the spin, so we can take only the four base states of (12.1). Our next problem is: What is the Hamiltonian for this set of states? |
|
3 | 12 | The Hyperfine Splitting in Hydrogen | 2 | The Hamiltonian for the ground state of hydrogen | We’ll tell you in a moment what it is. But first, we should remind you of one thing: any state can always be written as a linear combination of the base states. For any state $\ket{\psi}$ we can write \begin{align} \ket{\psi}=\phantom{+}&\ket{+\,+}\braket{+\,+}{\psi}\,+\,\ket{+\,-}\braket{+\,-}{\psi}\notag\\[.5ex] \label{Eq:III:12:2} +\,&\ket{-\,+}\braket{-\,+}{\psi}\,+\,\ket{-\,-}\braket{-\,-}{\psi}. \end{align} Remember that the complete brackets are just complex numbers, so we can also write them in the usual fashion as $C_i$, where $i=1$, $2$, $3$, or $4$, and write Eq. (12.2) as
\begin{align} \ket{\psi}=\phantom{+}&\ket{+\,+}C_1+\ket{+\,-}C_2\notag\\[.5ex] \label{Eq:III:12:3} +\,&\ket{-\,+}C_3+\ket{-\,-}C_4. \end{align} By giving the four amplitudes $C_i$ we completely describe the spin state $\ket{\psi}$. If these four amplitudes change with time, as they will, the rate of change in time is given by the operator $\Hop$. The problem is to find the $\Hop$. There is no general rule for writing down the Hamiltonian of an atomic system, and finding the right formula is much more of an art than finding a set of base states. We were able to tell you a general rule for writing a set of base states for any problem of a proton and an electron, but to describe the general Hamiltonian of such a combination is too hard at this level. Instead, we will lead you to a Hamiltonian by some heuristic argument—and you will have to accept it as the correct one because the results will agree with the test of experimental observation. You will remember that in the last chapter we were able to describe the Hamiltonian of a single, spin one-half particle by using the sigma matrices—or the exactly equivalent sigma operators. The properties of the operators are summarized in Table 12-1. These operators—which are just a convenient, shorthand way of keeping track of the matrix elements of the type $\bracket{+}{\sigma_z}{+}$—were useful for describing the behavior of a single particle of spin one-half. The question is: Can we find an analogous device to describe a system with two spins? The answer is yes, very simply, as follows. We invent a thing which we will call “sigma electron,” which we represent by the vector operator $\FLPsigmae$, and which has the $x$-, $y$-, and $z$-components, $\sigmae_x$, $\sigmae_y$, $\sigmae_z$. We now make the convention that when one of these things operates on any one of our four base states of the hydrogen atom, it acts only on the electron spin, and in exactly the same way as if the electron were all by itself. 
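The convention just stated is a tensor product: in matrix language, "sigma electron" is an ordinary Pauli matrix acting on the first (electron) factor and the unit matrix on the second. A sketch (Python with numpy assumed) that reproduces the $\sigmae_y$ and $\sigmap_x$ examples of the text:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

up, dn = np.eye(2, dtype=complex)        # |+> and |-> as column vectors

def ket(e, p):                           # |e p>, electron symbol first
    return np.kron(e, p)

sye = np.kron(sy, I2)                    # "sigma electron": touches factor 1 only
sxp = np.kron(I2, sx)                    # "sigma proton":   touches factor 2 only

print(np.allclose(sye @ ket(dn, up), -1j * ket(up, up)))  # sigma_y^e |-+> = -i|++>
print(np.allclose(sye @ ket(up, up),  1j * ket(dn, up)))  # sigma_y^e |++> =  i|-+>
print(np.allclose(sxp @ ket(up, up),       ket(up, dn)))  # sigma_x^p |++> =   |+->
```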
Example: What is $\sigmae_y\,\ket{-\,+}$? Since $\sigma_y$ on an electron “down” is $-i$ times the corresponding state with the electron “up”, \begin{equation*} \sigmae_y\,\ket{-\,+}=-i\,\ket{+\,+}. \end{equation*} (When $\sigmae_y$ acts on the combined state it flips over the electron, but does nothing to the proton and multiplies the result by $-i$.) Operating on the other states, $\sigmae_y$ would give \begin{align*} \sigmae_y\,\ket{+\,+}&=i\,\ket{-\,+},\\[1ex] \sigmae_y\,\ket{+\,-}&=i\,\ket{-\,-},\\[1ex] \sigmae_y\,\ket{-\,-}&=-i\,\ket{+\,-}. \end{align*} Just remember that the operators $\FLPsigmae$ work only on the first spin symbol—that is, on the electron spin. Next we define the corresponding operator “sigma proton” for the proton spin. Its three components $\sigmap_x$, $\sigmap_y$, $\sigmap_z$ act in the same way as $\FLPsigmae$, only on the proton spin. For example, if we have $\sigmap_x$ acting on each of the four base states, we get—always using Table 12-1— \begin{align*} \sigmap_x\,\ket{+\,+}&=\ket{+\,-},\\[1ex] \sigmap_x\,\ket{+\,-}&=\ket{+\,+},\\[1ex] \sigmap_x\,\ket{-\,+}&=\ket{-\,-},\\[1ex] \sigmap_x\,\ket{-\,-}&=\ket{-\,+}. \end{align*} As you can see, it’s not very hard. Now in the most general case we could have more complex things. For instance, we could have products of the two operators like $\sigmae_y\sigmap_z$. When we have such a product we do first what the operator on the right says, and then do what the other one says.1 For example, we would have that
\begin{gather*} \sigmae_x\sigmap_z\,\ket{+\,-}= \sigmae_x(\sigmap_z\,\ket{+\,-})=\\[1ex] \sigmae_x(-\,\ket{+\,-})= -\sigmae_x\,\ket{+\,-}= -\ket{-\,-}. \end{gather*} Note that these operators don’t do anything on pure numbers—we have used this fact when we wrote $\sigmae_x(-1)=(-1)\sigmae_x$. We say that the operators “commute” with pure numbers, or that a number “can be moved through” the operator. You can practice by showing that the product $\sigmae_x\sigmap_z$ gives the following results for the four states: \begin{align*} \sigmae_x\sigmap_z\,\ket{+\,+}&=+\,\ket{-\,+},\\[1ex] \sigmae_x\sigmap_z\,\ket{+\,-}&=-\,\ket{-\,-},\\[1ex] \sigmae_x\sigmap_z\,\ket{-\,+}&=+\,\ket{+\,+},\\[1ex] \sigmae_x\sigmap_z\,\ket{-\,-}&=-\,\ket{+\,-}. \end{align*} If we take all the possible operators, using each kind of operator only once, there are sixteen possibilities. Yes, sixteen—provided we include also the “unit operator” $\hat{1}$. First, there are the three: $\sigmae_x$, $\sigmae_y$, $\sigmae_z$. Then the three $\sigmap_x$, $\sigmap_y$, $\sigmap_z$—that makes six. In addition, there are the nine possible products of the form $\sigmae_x\sigmap_y$ which makes a total of 15. And there’s the unit operator which just leaves any state unchanged. Sixteen in all. Now note that for a four-state system, the Hamiltonian matrix has to be a four-by-four matrix of coefficients—it will have sixteen entries. It is easily demonstrated that any four-by-four matrix—and, therefore, the Hamiltonian matrix in particular—can be written as a linear combination of the sixteen double-spin matrices corresponding to the set of operators we have just made up. Therefore, for the interaction between a proton and an electron that involves only their spins, we can expect that the Hamiltonian operator can be written as a linear combination of the same $16$ operators. The only question is, how? Well, first, we know that the interaction doesn’t depend on our choice of axes for a coordinate system. 
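The counting claim—that the sixteen operators are enough to build any four-by-four matrix—can be checked directly. A sketch (Python with numpy assumed): the sixteen products $\sigma_a^{\text{e}}\sigma_b^{\text{p}}$, flattened into vectors, have rank sixteen, so every $4\times4$ matrix is one of their linear combinations.

```python
import numpy as np

sig = {"1": np.eye(2, dtype=complex),
       "x": np.array([[0, 1], [1, 0]], dtype=complex),
       "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
       "z": np.array([[1, 0], [0, -1]], dtype=complex)}

# All sixteen double-spin operators (including the unit operator "11"),
# each flattened to a 16-component vector.
ops = np.array([np.kron(sig[a], sig[b]).ravel()
                for a in "1xyz" for b in "1xyz"])

print(np.linalg.matrix_rank(ops))   # 16: they span the 4x4 matrices
```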
If there is no external disturbance—like a magnetic field—to determine a unique direction in space, the Hamiltonian can’t depend on our choice of the direction of the $x$-, $y$-, and $z$-axes. That means that the Hamiltonian can’t have a term like $\sigmae_x$ all by itself. It would be ridiculous, because then somebody with a different coordinate system would get different results. The only possibilities are a term with the unit matrix, say a constant $a$ (times $\hat{1}$), and some combination of the sigmas that doesn’t depend on the coordinates—some “invariant” combination. The only scalar invariant combination of two vectors is the dot product, which for our $\sigma$’s is \begin{equation} \label{Eq:III:12:4} \FLPsigmae\cdot\FLPsigmap= \sigmae_x\sigmap_x+ \sigmae_y\sigmap_y+ \sigmae_z\sigmap_z. \end{equation} This operator is invariant with respect to any rotation of the coordinate system. So the only possibility for a Hamiltonian with the proper symmetry in space is a constant times the unit matrix plus a constant times this dot product, say, \begin{equation} \label{Eq:III:12:5} \Hop=E_0+A\,\FLPsigmae\cdot\FLPsigmap. \end{equation} That’s our Hamiltonian. It’s the only thing that it can be, by the symmetry of space, so long as there is no external field. The constant term doesn’t tell us much; it just depends on the level we choose to measure energies from. We may just as well take $E_0=0$. The second term tells us all we need to know to find the level splitting of the hydrogen. If you want to, you can think of the Hamiltonian in a different way. If there are two magnets near each other with magnetic moments $\FLPmu_{\text{e}}$ and $\FLPmu_{\text{p}}$, the mutual energy will depend on $\FLPmu_{\text{e}}\cdot\FLPmu_{\text{p}}$—among other things. And, you remember, we found that the classical thing we call $\FLPmu_{\text{e}}$ appears in quantum mechanics as $\mu_{\text{e}}\FLPsigmae$. 
Similarly, what appears classically as $\FLPmu_{\text{p}}$ will usually turn out in quantum mechanics to be $\mu_{\text{p}}\FLPsigmap$ (where $\mu_{\text{p}}$ is the magnetic moment of the proton, which is about $1000$ times smaller than $\mu_{\text{e}}$, and has the opposite sign). So Eq. (12.5) says that the interaction energy is like the interaction between two magnets—only not quite, because the interaction of the two magnets depends on the radial distance between them. But Eq. (12.5) could be—and, in fact, is—some kind of an average interaction. The electron is moving all around inside the atom, and our Hamiltonian gives only the average interaction energy. All it says is that for a prescribed arrangement in space for the electron and proton there is an energy proportional to the cosine of the angle between the two magnetic moments, speaking classically. Such a classical qualitative picture may help you to understand where it comes from, but the important thing is that Eq. (12.5) is the correct quantum mechanical formula. The order of magnitude of the classical interaction between two magnets would be the product of the two magnetic moments divided by the cube of the distance between them. The distance between the electron and the proton in the hydrogen atom is, speaking roughly, one half an atomic radius, or $0.5$ angstrom. It is, therefore, possible to make a crude estimate that the constant $A$ should be about equal to the product of the two magnetic moments $\mu_{\text{e}}$ and $\mu_{\text{p}}$ divided by the cube of $1/2$ angstrom. Such an estimate gives a number in the right ballpark. It turns out that $A$ can be calculated accurately once you understand the complete quantum theory of the hydrogen atom—which we so far do not. It has, in fact, been calculated to an accuracy of about $30$ parts in one million. 
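That crude estimate is worth a few lines of arithmetic (a sketch, not from the lecture: SI values assumed, with the $\mu_0/4\pi$ factor that the SI dipole-interaction rule carries; the numbers for $\mu_{\text{e}}$, $\mu_{\text{p}}$, and the distance are rough, as in the text):

```python
# Order-of-magnitude estimate of A ~ (mu0/4pi) mu_e mu_p / r^3, in eV.
mu0_over_4pi = 1.0e-7        # T*m/A
mu_e = 9.274e-24             # electron moment ~ one Bohr magneton, J/T
mu_p = 1.411e-26             # proton magnetic moment, J/T
r = 0.5e-10                  # ~ half an atomic radius, in meters
eV = 1.602e-19               # joules per electron volt

A_est = mu0_over_4pi * mu_e * mu_p / r**3 / eV
print(A_est)                 # ~6.5e-7 eV: the "ten-millionths of an electron
                             # volt" ballpark quoted earlier
```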
So, unlike the flip-flop constant $A$ of the ammonia molecule, which couldn’t be calculated at all well by a theory, our constant $A$ for the hydrogen can be calculated from a more detailed theory. But never mind, we will for our present purposes think of the $A$ as a number which could be determined by experiment, and analyze the physics of the situation. Taking the Hamiltonian of Eq. (12.5), we can use it with the equation \begin{equation} \label{Eq:III:12:6} i\hbar\dot{C}_i=\sum_jH_{ij}C_j \end{equation} to find out what the spin interactions do to the energy levels. To do that, we need to work out the sixteen matrix elements $H_{ij}=\bracket{i}{H}{j}$ corresponding to each pair of the four base states in (12.1). We begin by working out what $\Hop\,\ket{j}$ is for each of the four base states. For example, \begin{equation} \label{Eq:III:12:7} \Hop\,\ket{+\,+}=A\FLPsigmae\cdot\FLPsigmap\,\ket{+\,+}= A\{\sigmae_x\sigmap_x+ \sigmae_y\sigmap_y+ \sigmae_z\sigmap_z\}\,\ket{+\,+}. \end{equation}
Using the method we described a little earlier—it’s easy if you have memorized Table 12-1—we find what each pair of $\sigma$’s does on $\ket{+\,+}$. The answer is \begin{equation} \begin{aligned} \sigmae_x\sigmap_x\,\ket{+\,+}&=+\,\ket{-\,-},\\[1ex] \sigmae_y\sigmap_y\,\ket{+\,+}&=-\,\ket{-\,-},\\[1ex] \sigmae_z\sigmap_z\,\ket{+\,+}&=+\,\ket{+\,+}. \end{aligned} \label{Eq:III:12:8} \end{equation} So (12.7) becomes \begin{equation} \label{Eq:III:12:9} \Hop\,\ket{+\,+}=A\{\ket{-\,-}-\ket{-\,-}+\ket{+\,+}\}=A\,\ket{+\,+}. \end{equation}
Since our four base states are all orthogonal, that gives us immediately that \begin{equation} \begin{aligned} \bracket{+\,+}{H}{+\,+}&=A\braket{+\,+}{+\,+}=A,\\[1ex] \bracket{+\,-}{H}{+\,+}&=A\braket{+\,-}{+\,+}=0,\\[1ex] \bracket{-\,+}{H}{+\,+}&=A\braket{-\,+}{+\,+}=0,\\[1ex] \bracket{-\,-}{H}{+\,+}&=A\braket{-\,-}{+\,+}=0. \end{aligned} \label{Eq:III:12:10} \end{equation} Remembering that $\bracket{j}{H}{i}=\bracket{i}{H}{j}\cconj$, we can already write down the differential equation for the amplitudes $C_1$: \begin{equation*} i\hbar\dot{C}_1=H_{11}C_1+H_{12}C_2+H_{13}C_3+H_{14}C_4 \end{equation*} or \begin{equation} \label{Eq:III:12:11} i\hbar\dot{C}_1=AC_1. \end{equation} That’s all! We get only the one term. Now to get the rest of the Hamiltonian equations we have to crank through the same procedure for $\Hop$ operating on the other states. First, we will let you practice by checking out all of the sigma products we have written down in Table 12-2. Then we can use them to get: \begin{equation} \begin{aligned} \Hop\,\ket{+\,-}&=A\{2\,\ket{-\,+}-\ket{+\,-}\},\\[1ex] \Hop\,\ket{-\,+}&=A\{2\,\ket{+\,-}-\ket{-\,+}\},\\[1ex] \Hop\,\ket{-\,-}&=A\,\ket{-\,-}. \end{aligned} \label{Eq:III:12:12} \end{equation} Then, multiplying each one in turn on the left by all the other state vectors, we get the following Hamiltonian matrix, $H_{ij}$ (rows labeled by $i$, columns by $j$): \begin{equation} \label{Eq:III:12:13} H_{ij}= \begin{pmatrix} A & 0 & 0 & 0\\[1ex] 0 & -A & 2A & 0\\[1ex] 0 & 2A & -A & 0\\[1ex] 0 & 0 & 0 & A \end{pmatrix}. 
\end{equation} It means, of course, nothing more than that our differential equations for the four amplitudes $C_i$ are \begin{equation} \begin{aligned} i\hbar\dot{C}_1&=AC_1,\\[1ex] i\hbar\dot{C}_2&=-AC_2+2AC_3,\\[1ex] i\hbar\dot{C}_3&=2AC_2-AC_3,\\[1ex] i\hbar\dot{C}_4&=AC_4. \end{aligned} \label{Eq:III:12:14} \end{equation} Before solving these equations we can’t resist telling you about a clever rule due to Dirac—it will make you feel that you are really advanced—although we don’t need it for our work. We have—from the equations (12.9) and (12.12)—that \begin{equation} \begin{aligned} \FLPsigmae\cdot\FLPsigmap\,\ket{+\,+}&=\ket{+\,+},\\[1ex] \FLPsigmae\cdot\FLPsigmap\,\ket{+\,-}&=2\,\ket{-\,+}-\ket{+\,-},\\[1ex] \FLPsigmae\cdot\FLPsigmap\,\ket{-\,+}&=2\,\ket{+\,-}-\ket{-\,+},\\[1ex] \FLPsigmae\cdot\FLPsigmap\,\ket{-\,-}&=\ket{-\,-}. \end{aligned} \label{Eq:III:12:15} \end{equation} Look, said Dirac, I can also write the first and last equations as \begin{align*} \FLPsigmae\cdot\FLPsigmap\,\ket{+\,+}&=2\,\ket{+\,+}-\ket{+\,+},\\[1ex] \FLPsigmae\cdot\FLPsigmap\,\ket{-\,-}&=2\,\ket{-\,-}-\ket{-\,-}; \end{align*} then they are all quite similar. Now I invent a new operator, which I will call $P_{\text{spin exch}}$ and which I define to have the following properties:2 \begin{align*} P_{\text{spin exch}}\,\ket{+\,+}&=\ket{+\,+},\\[1ex] P_{\text{spin exch}}\,\ket{+\,-}&=\ket{-\,+},\\[1ex] P_{\text{spin exch}}\,\ket{-\,+}&=\ket{+\,-},\\[1ex] P_{\text{spin exch}}\,\ket{-\,-}&=\ket{-\,-}. \end{align*} All the operator does is interchange the spin directions of the two particles. Then I can write the whole set of equations in (12.15) as a simple operator equation: \begin{equation} \label{Eq:III:12:16} \FLPsigmae\cdot\FLPsigmap=2P_{\text{spin exch}}-1. \end{equation} That’s the formula of Dirac. His “spin exchange operator” gives a handy rule for figuring out $\FLPsigmae\cdot\FLPsigmap$. (You see, you can do everything now. The gates are open.) |
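Everything in this section can be condensed into a few lines of matrix algebra. A sketch (Python with numpy assumed) that builds $\FLPsigmae\cdot\FLPsigmap$ from Kronecker products, recovers the matrix of Eq. (12.13), and checks Dirac's rule (12.16):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# sigma_e . sigma_p in the basis |++>, |+->, |-+>, |--> (electron factor first).
dot = sum(np.kron(s, s) for s in (sx, sy, sz))
print(dot.real.astype(int))
# [[ 1  0  0  0]
#  [ 0 -1  2  0]
#  [ 0  2 -1  0]
#  [ 0  0  0  1]]   -- the matrix of Eq. (12.13) with A = 1

# Dirac's spin-exchange operator and his rule sigma_e . sigma_p = 2P - 1:
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=complex)
print(np.allclose(dot, 2 * P - np.eye(4)))   # True
```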
|
3 | 12 | The Hyperfine Splitting in Hydrogen | 3 | The energy levels | Now we are ready to work out the energy levels of the ground state of hydrogen by solving the Hamiltonian equations (12.14). We want to find the energies of the stationary states. This means that we want to find those special states $\ket{\psi}$ for which each amplitude $C_i=\braket{i}{\psi}$ in the set belonging to $\ket{\psi}$ has the same time dependence—namely, $e^{-i\omega t}$. Then the state will have the energy $E=\hbar\omega$. So we want a set for which \begin{equation} \label{Eq:III:12:17} C_i=a_ie^{-(i/\hbar)Et}, \end{equation} where the four coefficients $a_i$ are independent of time. To see whether we can get such amplitudes, we substitute (12.17) into Eq. (12.14) and see what happens. Each $i\hbar\,dC/dt$ in Eq. (12.14) turns into $EC$, and—after cancelling out the common exponential factor—each $C$ becomes an $a$; we get \begin{equation} \begin{aligned} Ea_1&=Aa_1,\\[1ex] Ea_2&=-Aa_2+2Aa_3,\\[1ex] Ea_3&=2Aa_2-Aa_3,\\[1ex] Ea_4&=Aa_4, \end{aligned} \label{Eq:III:12:18} \end{equation} which we have to solve for $a_1$, $a_2$, $a_3$, and $a_4$. Isn’t it nice that the first equation is independent of the rest—that means we can see one solution right away. If we choose $E=A$, \begin{equation*} a_1=1,\quad a_2=a_3=a_4=0, \end{equation*} gives a solution. (Of course, taking all the $a$’s equal to zero also gives a solution, but that’s no state at all!) Let’s call our first solution the state $\ketsl{\slI}$:3 \begin{equation} \label{Eq:III:12:19} \ketsl{\slI}=\ketsl{\slOne}=\ket{+\,+}. \end{equation} Its energy is \begin{equation*} E_{\slI}=A. \end{equation*} With that clue you can immediately see another solution from the last equation in (12.18): \begin{align*} a_1&=a_2=a_3=0,\quad a_4=1,\\[5pt] E&=A. 
\end{align*} We’ll call that solution state $\ketsl{\slII}$: \begin{align} \label{Eq:III:12:20} \ketsl{\slII}&=\ketsl{\slFour}=\ket{-\,-},\\[5pt] E_{\slII}&=A.\notag \end{align} Now it gets a little harder; the two equations left in (12.18) are mixed up. But we’ve done it all before. Adding the two, we get \begin{equation} \label{Eq:III:12:21} E(a_2+a_3)=A(a_2+a_3). \end{equation} Subtracting, we have \begin{equation} \label{Eq:III:12:22} E(a_2-a_3)=-3A(a_2-a_3). \end{equation} By inspection—and remembering ammonia—we see that there are two solutions: \begin{alignat}{2} \notag a_2&=a_3,&\quad E& =A\\[-1ex] % ebook remove % ebook insert: =A\\ \label{Eq:III:12:23} \kern{-6em}\text{and}\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-10em}\text{and}}\\ \notag a_2&=-a_3,&\quad E&=-3A. \end{alignat} They are mixtures of $\ketsl{\slTwo}$ and $\ketsl{\slThree}$. Calling these states $\ketsl{\slIII}$ and $\ketsl{\slIV}$, and putting in a factor $1/\sqrt{2}$ to make the states properly normalized, we have \begin{equation} \begin{aligned} \ketsl{\slIII}&=\frac{1}{\sqrt{2}}\,(\ketsl{\slTwo}+\ketsl{\slThree})= \frac{1}{\sqrt{2}}\,(\ket{+\,-}+\ket{-\,+}),\\[1ex] E_{\slIII}&=A \end{aligned} \label{Eq:III:12:24} \end{equation}
and \begin{equation} \begin{aligned} \ketsl{\slIV}&=\frac{1}{\sqrt{2}}\,(\ketsl{\slTwo}-\ketsl{\slThree})= \frac{1}{\sqrt{2}}\,(\ket{+\,-}-\ket{-\,+}),\\[1ex] E_{\slIV}&=-3A. \end{aligned} \label{Eq:III:12:25} \end{equation}
We have found four stationary states and their energies. Notice, incidentally, that our four states are orthogonal, so they also can be used for base states if desired. Our problem is completely solved. Three of the states have the energy $A$, and the last has the energy $-3A$. The average is zero—which means that when we took $E_0=0$ in Eq. (12.5), we were choosing to measure all the energies from the average energy. We can draw the energy-level diagram for the ground state of hydrogen as shown in Fig. 12-2. Now the difference in energy between state $\ketsl{\slIV}$ and any one of the others is $4A$. An atom which happens to have gotten into state $\ketsl{\slI}$ could fall from there to state $\ketsl{\slIV}$ and emit light. Not optical light, because the energy is so tiny—it would emit a microwave quantum. Or, if we shine microwaves on hydrogen gas, we will find an absorption of energy as the atoms in state $\ketsl{\slIV}$ pick up energy and go into one of the upper states—but only at the frequency $\omega=4A/\hbar$. This frequency has been measured experimentally; the best result, obtained very recently, is \begin{equation} \label{Eq:III:12:26} f=\omega/2\pi=(1{,}420{,}405{,}751.800\pm0.028)\text{ cycles per second}. \end{equation}
The error is only only two parts in $100$ billion! Probably no basic physical quantity is measured better than that—it’s one of the most remarkably accurate measurements in physics. The theorists were very happy that they could compute the energy to an accuracy of $3$ parts in $10^5$, but in the meantime it has been measured to $2$ parts in $10^{11}$—a million times more accurate than the theory. So the experimenters are way ahead of the theorists. In the theory of the ground state of the hydrogen atom you are as good as anybody. You, too, can just take your value of $A$ from experiment—that’s what everybody has to do in the end. You have probably heard before about the “$21$-centimeter line” of hydrogen. That’s the wavelength of the $1420$ megacycle spectral line between the hyperfine states. Radiation of this wavelength is emitted or absorbed by the atomic hydrogen gas in the galaxies. So with radio telescopes tuned in to $21$-cm waves (or $1420$ megacycles approximately) we can observe the velocities and the location of concentrations of atomic hydrogen gas. By measuring the intensity, we can estimate the amount of hydrogen. By measuring the frequency shift due to the Doppler effect, we can find out about the motion of the gas in the galaxy. That is one of the big programs of radio astronomy. So now we are talking about something that’s very real—it is not an artificial problem.
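As a quick numerical aside (not part of the original lecture), it is easy to check by direct substitution that the four states found above really are eigenvectors of the Hamiltonian matrix of Eq. (12.14), with energies $A$, $A$, $A$, and $-3A$. A minimal pure-Python sketch, with $A$ set to $1$ and the basis ordered $\ket{+\,+}$, $\ket{+\,-}$, $\ket{-\,+}$, $\ket{-\,-}$:

```python
# Check that states I-IV satisfy H|psi> = E|psi> for the matrix of Eq. (12.14).
# Basis order: |1> = |++>, |2> = |+->, |3> = |-+>, |4> = |-->; A = 1.
from math import sqrt, isclose

A = 1.0
H = [[  A, 0.0, 0.0, 0.0],
     [0.0,  -A, 2*A, 0.0],
     [0.0, 2*A,  -A, 0.0],
     [0.0, 0.0, 0.0,   A]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

r = 1 / sqrt(2)
states = {                       # |I>, |II>, |III>, |IV> and their energies
    "I":   ([1, 0, 0, 0],   A),
    "II":  ([0, 0, 0, 1],   A),
    "III": ([0, r, r, 0],   A),
    "IV":  ([0, r, -r, 0], -3*A),
}

for name, (v, E) in states.items():
    Hv = matvec(H, v)
    assert all(isclose(Hv[i], E * v[i], abs_tol=1e-12) for i in range(4))
```

Note also that the trace of the matrix is zero, which is why the four energies average to zero, as remarked above.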
12–4 The Zeeman splitting

Although we have finished the problem of finding the energy levels of the hydrogen ground state, we would like to study this interesting system some more. In order to say anything more about it—for instance, in order to calculate the rate at which the hydrogen atom absorbs or emits radio waves at $21$ centimeters—we have to know what happens when the atom is disturbed. We have to do as we did for the ammonia molecule—after we found the energy levels we went on and studied what happened when the molecule was in an electric field. We were then able to figure out the effects from the electric field in a radio wave. For the hydrogen atom, the electric field does nothing to the levels, except to move them all by some constant amount proportional to the square of the field—which is not of any interest because that won’t change the energy differences. It is now the magnetic field which is important. So the next step is to write the Hamiltonian for a more complicated situation in which the atom sits in an external magnetic field. What, then, is the Hamiltonian? We’ll just tell you the answer, because we can’t give you any “proof” except to say that this is the way the atom works. The Hamiltonian is \begin{equation} \label{Eq:III:12:27} \Hop=A(\FLPsigmae\cdot\FLPsigmap)- \mu_{\text{e}}\FLPsigmae\cdot\FLPB- \mu_{\text{p}}\FLPsigmap\cdot\FLPB. \end{equation} It now consists of three parts. The first term $A\FLPsigmae\cdot\FLPsigmap$ represents the magnetic interaction between the electron and the proton—it is the same one that would be there if there were no magnetic field. This is the term we have already had; and the influence of the magnetic field on the constant $A$ is negligible. The effect of the external magnetic field shows up in the last two terms.
The second term, $-\mu_{\text{e}}\FLPsigmae\cdot\FLPB$, is the energy the electron would have in the magnetic field if it were there alone. In the same way, the last term, $-\mu_{\text{p}}\FLPsigmap\cdot\FLPB$, would have been the energy of a proton alone. Classically, the energy of the two of them together would be the sum of the two, and that works also quantum mechanically. In a magnetic field, the energy of interaction due to the magnetic field is just the sum of the energy of interaction of the electron with the external field, and of the proton with the field—both expressed in terms of the sigma operators. In quantum mechanics these terms are not really the energies, but thinking of the classical formulas for the energy is a way of remembering the rules for writing down the Hamiltonian. Anyway, the correct Hamiltonian is Eq. (12.27). Now we have to go back to the beginning and do the problem all over again. Much of the work is, however, done—we need only to add the effects of the new terms. Let’s take a constant magnetic field $\FLPB$ in the $z$-direction. Then we have to add to our Hamiltonian operator $\Hop$ the two new pieces—which we can call $\Hop'$: \begin{equation*} \Hop'=-(\mu_{\text{e}}\sigmae_z+\mu_{\text{p}}\sigmap_z)B. \end{equation*} Using Table 12-1, we get right away that \begin{equation} \begin{aligned} \Hop'\,\ket{+\,+}&=-(\mu_{\text{e}}+\mu_{\text{p}})B\,\ket{+\,+},\\[.5ex] \Hop'\,\ket{+\,-}&=-(\mu_{\text{e}}-\mu_{\text{p}})B\,\ket{+\,-},\\[.5ex] \Hop'\,\ket{-\,+}&=-(-\mu_{\text{e}}+\mu_{\text{p}})B\,\ket{-\,+},\\[.5ex] \Hop'\,\ket{-\,-}&=(\mu_{\text{e}}+\mu_{\text{p}})B\,\ket{-\,-}. \end{aligned} \label{Eq:III:12:28} \end{equation} How very convenient! The $\Hop'$ operating on each state just gives a number times that state.
The matrix $\bracket{i}{H'}{j}$ has, therefore, only diagonal elements—we can just add the coefficients in (12.28) to the corresponding diagonal terms of (12.13), and the Hamiltonian equations of (12.14) become \begin{equation} \begin{aligned} i\hbar\,dC_1/dt&= \{A-(\mu_{\text{e}}+\mu_{\text{p}})B\}C_1,\\[1ex] i\hbar\,dC_2/dt&= -\{A+(\mu_{\text{e}}-\mu_{\text{p}})B\}C_2+2AC_3,\\[1ex] i\hbar\,dC_3/dt&= 2AC_2-\{A-(\mu_{\text{e}}-\mu_{\text{p}})B\}C_3,\\[1ex] i\hbar\,dC_4/dt&= \{A+(\mu_{\text{e}}+\mu_{\text{p}})B\}C_4. \end{aligned} \label{Eq:III:12:29} \end{equation} The form of the equations is not different—only the coefficients. So long as $B$ doesn’t vary with time, we can continue as we did before. Substituting $C_i=a_ie^{-(i/\hbar)Et}$, we get—as a modification of (12.18)— \begin{equation} \begin{aligned} Ea_1&=\{A-(\mu_{\text{e}}+\mu_{\text{p}})B\}a_1,\\[1ex] Ea_2&=-\{A+(\mu_{\text{e}}-\mu_{\text{p}})B\}a_2+2Aa_3,\\[1ex] Ea_3&=2Aa_2-\{A-(\mu_{\text{e}}-\mu_{\text{p}})B\}a_3,\\[1ex] Ea_4&=\{A+(\mu_{\text{e}}+\mu_{\text{p}})B\}a_4. \end{aligned} \label{Eq:III:12:30} \end{equation} Fortunately, the first and fourth equations are still independent of the rest, so the same technique works again. One solution is the state $\ketsl{\slI}$ for which $a_1=1$, $a_2=$ $a_3=$ $a_4=$ $0$, or \begin{align} \notag \ketsl{\slI}&=\ketsl{\slOne}=\ket{+\,+},\\ \label{Eq:III:12:31} \kern{-6em}\text{with}\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-10em}\text{with}}\\ \notag E_{\slI}&=A-(\mu_{\text{e}}+\mu_{\text{p}})B. \end{align} Another is \begin{align} \notag \ketsl{\slII}&=\ketsl{\slFour}=\ket{-\,-},\\ \label{Eq:III:12:32} \kern{-6em}\text{with}\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-10em}\text{with}}\\ \notag E_{\slII}&=A+(\mu_{\text{e}}+\mu_{\text{p}})B. \end{align} A little more work is involved for the remaining two equations, because the coefficients of $a_2$ and $a_3$ are no longer equal. 
But they are just like the pair we had for the ammonia molecule. Looking back at Eq. (9.20), we can make the following analogy (remembering that the labels $1$ and $2$ there correspond to $2$ and $3$ here): \begin{equation} \begin{aligned} H_{11}&\to-A-(\mu_{\text{e}}-\mu_{\text{p}})B,\\[1ex] H_{12}&\to2A,\\[1ex] H_{21}&\to2A,\\[1ex] H_{22}&\to-A+(\mu_{\text{e}}-\mu_{\text{p}})B. \end{aligned} \label{Eq:III:12:33} \end{equation} The energies are then given by (9.25), which was \begin{equation} \label{Eq:III:12:34} E=\!\frac{H_{11}\!+H_{22}}{2}\!\pm\! \sqrt{\frac{(H_{11}\!-H_{22})^2}{4}\!+\!H_{12}H_{21}}. \end{equation} Making the substitutions from (12.33), the energy formula becomes \begin{equation*} E=-A\pm\sqrt{(\mu_{\text{e}}-\mu_{\text{p}})^2B^2+4A^2}. \end{equation*} Although in Chapter 9 we used to call these energies $E_{\slI}$ and $E_{\slII}$, we are in this problem calling them $E_{\slIII}$ and $E_{\slIV}$: \begin{equation} \begin{aligned} E_{\slIII}&= A\{-1+2\sqrt{1+(\mu_{\text{e}}-\mu_{\text{p}})^2B^2/4A^2}\},\\[1ex] E_{\slIV}&= -A\{1+2\sqrt{1+(\mu_{\text{e}}-\mu_{\text{p}})^2B^2/4A^2}\}. \end{aligned} \label{Eq:III:12:35} \end{equation} So we have found the energies of the four stationary states of a hydrogen atom in a constant magnetic field. Let’s check our results by letting $B$ go to zero and seeing whether we get the same energies we had in the preceding section. You see that we do. For $B=0$, the energies $E_{\slI}$, $E_{\slII}$, and $E_{\slIII}$ go to $+A$, and $E_{\slIV}$ goes to $-3A$. Even our labeling of the states agrees with what we called them before. When we turn on the magnetic field though, all of the energies change in a different way. Let’s see how they go. First, we have to remember that for the electron, $\mu_{\text{e}}$ is negative, and about $1000$ times larger than $\mu_{\text{p}}$—which is positive. So $\mu_{\text{e}}+\mu_{\text{p}}$ and $\mu_{\text{e}}-\mu_{\text{p}}$ are both negative numbers, and nearly equal.
Let’s call them $-\mu$ and $-\mu'$: \begin{equation} \label{Eq:III:12:36} \mu=-(\mu_{\text{e}}+\mu_{\text{p}}),\quad \mu'=-(\mu_{\text{e}}-\mu_{\text{p}}). \end{equation} (Both $\mu$ and $\mu'$ are positive numbers, nearly equal to the magnitude of $\mu_{\text{e}}$—which is about one Bohr magneton.) Then our four energies are \begin{equation} \begin{aligned} E_{\slI}&=A+\mu B,\\[1.3ex] E_{\slII}&=A-\mu B,\\[1ex] E_{\slIII}&=A\{-1+2\sqrt{1+\mu'^2B^2/4A^2}\},\\[1ex] E_{\slIV}&=-A\{1+2\sqrt{1+\mu'^2B^2/4A^2}\}. \end{aligned} \label{Eq:III:12:37} \end{equation} The energy $E_{\slI}$ starts at $A$ and increases linearly with $B$—with the slope $\mu$. The energy $E_{\slII}$ also starts at $A$ but decreases linearly with increasing $B$—its slope is $-\mu$. These two levels vary with $B$ as shown in Fig. 12-3. We show also in the figure the energies $E_{\slIII}$ and $E_{\slIV}$. They have a different $B$-dependence. For small $B$, they depend quadratically on $B$, so they start out with horizontal slopes. Then they begin to curve, and for large $B$ they approach straight lines with slopes $\pm\mu'$, which are nearly the same as the slopes of $E_{\slI}$ and $E_{\slII}$. The shift of the energy levels of an atom due to a magnetic field is called the Zeeman effect. We say that the curves in Fig. 12-3 show the Zeeman splitting of the ground state of hydrogen. When there is no magnetic field, we get just one spectral line from the hyperfine structure of hydrogen. The transitions between state $\ketsl{\slIV}$ and any one of the others occur with the absorption or emission of a photon whose frequency, $1420$ megacycles, is $1/h$ times the energy difference $4A$. When the atom is in a magnetic field $\FLPB$, however, there are many more lines. There can be transitions between any two of the four states. So if we have atoms in all four states, energy can be absorbed—or emitted—in any one of the six transitions shown by the vertical arrows in Fig. 12-4.
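The behavior just described can be checked numerically from Eqs. (12.37). The sketch below is an editorial aside, not part of the original text; $A$ is set to $1$, and the values of $\mu$ and $\mu'$ are illustrative numbers, not the physical moments. It verifies the zero-field limit and the large-$B$ slopes $\pm\mu'$:

```python
# Zeeman energies of Eq. (12.37), in units of A. The values of mu and mu'
# below are illustrative (mu' slightly smaller than mu), not physical values.
from math import sqrt, isclose

A, mu, mup = 1.0, 1.0, 0.98

def energies(B):
    root = sqrt(1 + (mup * B)**2 / (4 * A**2))
    return {"I":   A + mu * B,
            "II":  A - mu * B,
            "III": A * (-1 + 2 * root),
            "IV": -A * (1 + 2 * root)}

# B = 0 must reproduce the zero-field levels A, A, A, -3A.
E0 = energies(0.0)
assert isclose(E0["I"], A) and isclose(E0["II"], A) and isclose(E0["III"], A)
assert isclose(E0["IV"], -3 * A)

# For large B, curves III and IV approach straight lines with slopes +-mu'.
B = 1e6
slope_III = energies(B + 1)["III"] - energies(B)["III"]
slope_IV  = energies(B + 1)["IV"]  - energies(B)["IV"]
assert isclose(slope_III,  mup, rel_tol=1e-4)
assert isclose(slope_IV,  -mup, rel_tol=1e-4)
```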
Many of these transitions can be observed by the Rabi molecular beam technique we described in Volume II, Section 35-3. What makes the transitions go? The transitions will occur if you apply a small disturbing magnetic field that varies with time (in addition to the steady strong field $\FLPB$). It’s just as we saw for a varying electric field on the ammonia molecule. Only here, it is the magnetic field which couples with the magnetic moments and does the trick. But the theory follows through in the same way that we worked it out for the ammonia. The theory is the simplest if you take a perturbing magnetic field that rotates in the $xy$-plane—although any horizontal oscillating field will do. When you put in this perturbing field as an additional term in the Hamiltonian, you get solutions in which the amplitudes vary with time—as we found for the ammonia molecule. So you can calculate easily and accurately the probability of a transition from one state to another. And you find that it all agrees with experiment.
12–5 The states in a magnetic field

We would like now to discuss the shapes of the curves in Fig. 12-3. In the first place, the energies for large fields are easy to understand, and rather interesting. For $B$ large enough (namely for $\mu B/A\gg1$) we can neglect the $1$ in the formulas of (12.37). The four energies become \begin{equation} \begin{alignedat}{2} E_{\slI}\;&=A+\mu B,&\quad E_{\slII}\;&=A-\mu B,\\[1ex] E_{\slIII}\;&=-A+\mu'B,&\quad E_{\slIV}\;&=-A-\mu'B. \end{alignedat} \label{Eq:III:12:38} \end{equation} These are the equations of the four straight lines in Fig. 12-3. We can understand these energies physically in the following way. The nature of the stationary states in a zero field is determined completely by the interaction of the two magnetic moments. The mixtures of the base states $\ket{+\,-}$ and $\ket{-\,+}$ in the stationary states $\ketsl{\slIII}$ and $\ketsl{\slIV}$ are due to this interaction. In large external fields, however, the proton and electron will be influenced hardly at all by the field of the other; each will act as if it were alone in the external field. Then—as we have seen many times—the electron spin will be either parallel to or opposite to the external magnetic field. Suppose the electron spin is “up”—that is, along the field; its energy will be $-\mu_{\text{e}}B$. The proton can still be either way. If the proton spin is also “up,” its energy is $-\mu_{\text{p}}B$. The sum of the two is $-(\mu_{\text{e}}+\mu_{\text{p}})B=\mu B$. That is just what we find for $E_{\slI}$—which is fine, because we are describing the state $\ket{+\,+}=\ketsl{\slI}$. There is still the small additional term $A$ (now $\mu B\gg A$) which represents the interaction energy of the proton and electron when their spins are parallel. (We originally took $A$ as positive because the theory we spoke of says it should be, and experimentally it is indeed so.)
On the other hand, the proton can have its spin down. Then its energy in the external field goes to $+\mu_{\text{p}}B$, so it and the electron have the energy $-(\mu_{\text{e}}-\mu_{\text{p}})B=\mu'B$. And the interaction energy becomes $-A$. The sum is just the energy $E_{\slIII}$, in (12.38). So the state $\ketsl{\slIII}$ must for large fields become the state $\ket{+\,-}$. Suppose now the electron spin is “down.” Its energy in the external field is $\mu_{\text{e}}B$. If the proton is also “down,” the two together have the energy $(\mu_{\text{e}}+\mu_{\text{p}})B=-\mu B$, plus the interaction energy $A$—since their spins are parallel. That makes just the energy $E_{\slII}$ in (12.38) and corresponds to the state $\ket{-\,-}=\ketsl{\slII}$—which is nice. Finally if the electron is “down” and the proton is “up,” we get the energy $(\mu_{\text{e}}-\mu_{\text{p}})B-A$ (minus $A$ for the interaction because the spins are opposite) which is just $E_{\slIV}$. And the state corresponds to $\ket{-\,+}$. “But, wait a moment!”, you are probably saying, “The states $\ketsl{\slIII}$ and $\ketsl{\slIV}$ are not the states $\ket{+\,-}$ and $\ket{-\,+}$; they are mixtures of the two.” Well, only slightly. They are indeed mixtures for $B=0$, but we have not yet figured out what they are for large $B$. When we used the analogies of (12.33) in our formulas of Chapter 9 to get the energies of the stationary states, we could also have taken the amplitudes that go with them. They come from Eq. (9.24), which is \begin{equation*} \frac{a_2}{a_3}=\frac{E-H_{22}}{H_{21}}. \end{equation*} The ratio $a_2/a_3$ is, of course, just $C_2/C_3$. 
Plugging in the analogous quantities from (12.33), we get \begin{align} \frac{C_2}{C_3}&= \frac{E+A-(\mu_{\text{e}}-\mu_{\text{p}})B}{2A}\notag\\ \kern{-6em}\text{or}\notag\\ % ebook remove % ebook insert: \makebox[0em]{\kern{-10em}\text{or}}\notag\\ \label{Eq:III:12:39} \frac{C_2}{C_3}&=\frac{E+A+\mu'B}{2A}, \end{align} where for $E$ we are to use the appropriate energy—either $E_{\slIII}$ or $E_{\slIV}$. For instance, for state $\ketsl{\slIII}$ we have \begin{equation} \label{Eq:III:12:40} \biggl(\frac{C_2}{C_3}\biggr)_{\slIII}\approx\frac{\mu'B}{A}. \end{equation} So for large $B$ the state $\ketsl{\slIII}$ has $C_2\gg C_3$; the state becomes almost completely the state $\ketsl{\slTwo}=\ket{+\,-}$. Similarly, if we put $E_{\slIV}$ into (12.39) we get $(C_2/C_3)_{\slIV}\ll1$; for high fields state $\ketsl{\slIV}$ becomes just the state $\ketsl{\slThree}=\ket{-\,+}$. You see that the coefficients in the linear combinations of our base states which make up the stationary states depend on $B$. The state we call $\ketsl{\slIII}$ is a $50$-$50$ mixture of $\ket{+\,-}$ and $\ket{-\,+}$ at very low fields, but shifts completely over to $\ket{+\,-}$ at high fields. Similarly, the state $\ketsl{\slIV}$, which at low fields is also a $50$-$50$ mixture (with opposite signs) of $\ket{+\,-}$ and $\ket{-\,+}$, goes over into the state $\ket{-\,+}$ when the spins are uncoupled by a strong external field. We would also like to call your attention particularly to what happens at very low magnetic fields. There is one energy—at $-3A$—which does not change when you turn on a small magnetic field. And there is another energy—at $+A$—which splits into three different energy levels when you turn on a small magnetic field. For weak fields the energies vary with $B$ as shown in Fig. 12–5. Suppose that we have somehow selected a bunch of hydrogen atoms which all have the energy $-3A$. 
If we put them through a Stern-Gerlach experiment—with fields that are not too strong—we would find that they just go straight through. (Since their energy doesn’t depend on $B$, there is—according to the principle of virtual work—no force on them in a magnetic field gradient.) Suppose, on the other hand, we were to select a bunch of atoms with the energy $+A$, and put them through a Stern-Gerlach apparatus, say an $S$ apparatus. (Again the fields in the apparatus should not be so great that they disrupt the insides of the atom, by which we mean a field small enough that the energies vary linearly with $B$.) We would find three beams. The states $\ketsl{\slI}$ and $\ketsl{\slII}$ get opposite forces—their energies vary linearly with $B$ with the slopes $\pm\mu$ so the forces are like those on a dipole with $\mu_z=\mp\mu$; but the state $\ketsl{\slIII}$ goes straight through. So we are right back in Chapter 5. A hydrogen atom with the energy $+A$ is a spin-one particle. This energy state is a “particle” for which $j=1$, and it can be described—with respect to some set of axes in space—in terms of the base states $\ket{+S}$, $\ket{\OS}$, and $\ket{-S}$ we used in Chapter 5. On the other hand, when a hydrogen atom has the energy $-3A$, it is a spin-zero particle. (Remember, what we are saying is only strictly true for infinitesimal magnetic fields.) So we can group the states of hydrogen in zero magnetic field this way: \begin{alignat}{2} & \left. \begin{aligned} \ketsl{\slI}&=\ket{+\,+}\\[2pt] \ketsl{\slIII}&=\dfrac{\ket{+\,-}+\ket{-\,+}}{\sqrt{2}}\\[3pt] \ketsl{\slII}&=\ket{-\,-}\\ \end{aligned} \right\} && \;\text{spin $1$}\; \left\{ \begin{array}{@{}l@{}} \ket{+S}\\[2pt] \vphantom{\dfrac{\ket{+\,-}+\ket{-\,+}}{\sqrt{2}}}\ket{\OS}\\[3pt] \ket{-S}\\ \label{Eq:III:12:41} \end{array} \right.\\[1ex] & \left. % ebook insert: \phantom{0} \ketsl{\slIV}=\dfrac{\ket{+\,-}-\ket{-\,+}}{\sqrt{2}} \right. 
&& \;\text{spin $0$.} \label{Eq:III:12:42} \end{alignat} We have said in Chapter 35 of Volume II that for any particle its component of angular momentum along any axis can have only certain values always $\hbar$ apart. The $z$-component of angular momentum $J_z$ can be $j\hbar$, $(j-1)\hbar$, $(j-2)\hbar$, $\ldots$, $(-j)\hbar$, where $j$ is the spin of the particle (which can be an integer or half-integer). Although we neglected to say so at the time, people usually write \begin{equation} \label{Eq:III:12:43} J_z=m\hbar, \end{equation} where $m$ stands for one of the numbers $j$, $j-1$, $j-2$, $\ldots$, $-j$. You will, therefore, see people in books label the four ground states of hydrogen by the so-called quantum numbers $j$ and $m$ [often called the “total angular momentum quantum number” ($j$) and “magnetic quantum number” ($m$)]. Then, instead of our state symbols $\ketsl{\slI}$, $\ketsl{\slII}$, and so on, they will write a state as $\ket{j,m}$. So they would write our little table of states for zero field in (12.41) and (12.42) as shown in Table 12-3. It’s not new physics, it’s all just a matter of notation.
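The crossover of states $\ketsl{\slIII}$ and $\ketsl{\slIV}$ from $50$-$50$ mixtures at low field to the pure states $\ket{+\,-}$ and $\ket{-\,+}$ at high field—Eqs. (12.37), (12.39), and (12.40)—can also be verified numerically. Here is a sketch (an editorial aside; energies in units of $A$, with an illustrative value for $\mu'$):

```python
# Mixing ratio C2/C3 of Eq. (12.39) for states III and IV, using the energies
# of Eq. (12.37). Units of A; mu' is set to 1 for illustration.
from math import sqrt, isclose

A, mup = 1.0, 1.0

def ratio(E, B):                 # C2/C3 = (E + A + mu'B)/2A, Eq. (12.39)
    return (E + A + mup * B) / (2 * A)

def E_III(B):
    return A * (-1 + 2 * sqrt(1 + (mup * B)**2 / (4 * A**2)))

def E_IV(B):
    return -A * (1 + 2 * sqrt(1 + (mup * B)**2 / (4 * A**2)))

# Zero field: the 50-50 mixtures, with plus and minus signs.
assert isclose(ratio(E_III(0), 0),  1.0)
assert isclose(ratio(E_IV(0), 0),  -1.0)

# Strong field: III becomes almost purely |+->, IV almost purely |-+>.
B = 100.0
assert ratio(E_III(B), B) > 50           # C2 >> C3, roughly mu'B/A
assert abs(ratio(E_IV(B), B)) < 0.02     # C2 << C3
```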
12–6 The projection matrix for spin one

We would like now to use our knowledge of the hydrogen atom to do something special. We discussed in Chapter 5 that a particle of spin one which was in one of the base states ($+$, $0$, or $-$) with respect to a Stern-Gerlach apparatus of a particular orientation—say an $S$ apparatus—would have a certain amplitude to be in each of the three states with respect to a $T$ apparatus with a different orientation in space. There are nine such amplitudes $\braket{jT}{iS}$ which make up the projection matrix. In Section 5–7 we gave without proof the terms of this matrix for various orientations of $T$ with respect to $S$. Now we will show you one way they can be derived. In the hydrogen atom we have found a spin-one system which is made up of two spin one-half particles. We have already worked out in Chapter 6 how to transform the spin one-half amplitudes. We can use this information to calculate the transformation for spin one. This is the way it works: We have a system—a hydrogen atom with the energy $+A$—which has spin one. Suppose we run it through a Stern-Gerlach filter $S$, so that we know it is in one of the base states with respect to $S$, say $\ket{+S}$. What is the amplitude that it will be in one of the base states, say $\ket{+T}$, with respect to the $T$ apparatus? If we call the coordinate system of the $S$ apparatus the $x,y,z$ system, the $\ket{+S}$ state is what we have been calling the state $\ket{+\,+}$. But suppose another guy took his $z$-axis along the axis of $T$. He will be referring his states to what we will call the $x',y',z'$ frame. His “up” and “down” states for the electron and proton would be different from ours. His “plus-plus” state—which we can write $\ket{+'\,+'}$, referring to the “prime” frame—is the $\ket{+T}$ state of the spin-one particle. What we want is $\braket{+T}{+S}$ which is just another way of writing the amplitude $\braket{+'\,+'}{+\,+}$.
We can find the amplitude $\braket{+'\,+'}{+\,+}$ in the following way. In our frame the electron in the $\ket{+\,+}$ state has its spin “up”. That means that it has some amplitude $\braket{+'}{+}_{\text{e}}$ of being “up” in his frame, and some amplitude $\braket{-'}{+}_{\text{e}}$ of being “down” in that frame. Similarly, the proton in the $\ket{+\,+}$ state has spin “up” in our frame and the amplitudes $\braket{+'}{+}_{\text{p}}$ and $\braket{-'}{+}_{\text{p}}$ of having spin “up” or spin “down” in the “prime” frame. Since we are talking about two distinct particles, the amplitude that both particles will be “up” together in his frame is the product of the two amplitudes, \begin{equation} \label{Eq:III:12:44} \braket{+'\,+'}{+\,+}= \braket{+'}{+}_{\text{e}}\braket{+'}{+}_{\text{p}}. \end{equation} We have put the subscripts e and p on the amplitudes $\braket{+'}{+}$ to make it clear what we were doing. But they are both just the transformation amplitudes for a spin one-half particle, so they are really identical numbers. They are, in fact, just the amplitude we have called $\braket{+T}{+S}$ in Chapter 6, and which we listed in the tables at the end of that chapter. Now, however, we are about to get into trouble with notation. We have to be able to distinguish the amplitude $\braket{+T}{+S}$ for a spin one-half particle from what we have also called $\braket{+T}{+S}$ for a spin-one particle—yet they are completely different! We hope it won’t be too confusing, but for the moment at least, we will have to use some different symbols for the spin one-half amplitudes. To help you keep things straight, we summarize the new notation in Table 12-4. We will continue to use the notation $\ket{+S}$, $\ket{\OS}$, and $\ket{-S}$ for the states of a spin-one particle. With our new notation, Eq. (12.44) becomes simply \begin{equation*} \braket{+'\,+'}{+\,+}=a^2, \end{equation*} and this is just the spin-one amplitude $\braket{+T}{+S}$. 
Now, let’s suppose, for instance, that the other guy’s coordinate frame—that is, the $T$, or “primed,” apparatus—is just rotated with respect to our $z$-axis by the angle $\phi$; then from Table 6-2, \begin{equation*} a=\braket{+'}{+}=e^{i\phi/2}. \end{equation*} So from (12.44) we have that the spin-one amplitude is \begin{equation} \label{Eq:III:12:45} \braket{+T}{+S}=\braket{+'\,+'}{+\,+}= (e^{i\phi/2})^2=e^{i\phi}. \end{equation} You can see how it goes. Now we will work through the general case for all the states. If the proton and electron are both “up” in our frame—the $S$-frame—the amplitudes that it will be in any one of the four possible states in the other guy’s frame—the $T$-frame—are \begin{equation} \begin{aligned} \braket{+'\,+'}{+\,+}&= \braket{+'}{+}_{\text{e}}\braket{+'}{+}_{\text{p}}= a^2,\\[1ex] \braket{+'\,-'}{+\,+}&= \braket{+'}{+}_{\text{e}}\braket{-'}{+}_{\text{p}}= ab,\\[1ex] \braket{-'\,+'}{+\,+}&= \braket{-'}{+}_{\text{e}}\braket{+'}{+}_{\text{p}}= ba,\\[1ex] \braket{-'\,-'}{+\,+}&= \braket{-'}{+}_{\text{e}}\braket{-'}{+}_{\text{p}}= b^2. \end{aligned} \label{Eq:III:12:46} \end{equation} We can, then, write the state $\ket{+\,+}$ as the following linear combination: \begin{equation} \label{Eq:III:12:47} \ket{+\,+}=a^2\,\ket{+'\,+'}+ab\,\{\ket{+'\,-'}+\ket{-'\,+'}\}+ b^2\,\ket{-'\,-'}. \end{equation}
Now we notice that $\ket{+'\,+'}$ is the state $\ket{+T}$, that $\{\ket{+'\,-'}+\ket{-'\,+'}\}$ is just $\sqrt{2}$ times the state $\ket{\OT}$—see (12.41)—and that $\ket{-'\,-'}=\ket{-T}$. In other words, Eq. (12.47) can be rewritten as \begin{equation} \label{Eq:III:12:48} \ket{+S}=a^2\,\ket{+T}+\sqrt{2}\,ab\,\ket{\OT}+b^2\,\ket{-T}. \end{equation} In a similar way you can easily show that \begin{equation} \label{Eq:III:12:49} \ket{-S}=c^2\,\ket{+T}+\sqrt{2}\,cd\,\ket{\OT}+d^2\,\ket{-T}. \end{equation} For $\ket{\OS}$ it’s a little more complicated, because \begin{equation*} \ket{\OS}=\frac{1}{\sqrt{2}}\,\{\ket{+\,-}+\ket{-\,+}\}. \end{equation*} But we can express each of the states $\ket{+\,-}$ and $\ket{-\,+}$ in terms of the “prime” states and take the sum. That is, \begin{equation} \label{Eq:III:12:50} \ket{+\,-}= ac\,\ket{+'\,+'}+ad\,\ket{+'\,-'}+ bc\,\ket{-'\,+'}+bd\,\ket{-'\,-'} \end{equation}
and \begin{equation} \label{Eq:III:12:51} \ket{-\,+}= ac\,\ket{+'\,+'}+bc\,\ket{+'\,-'}+ ad\,\ket{-'\,+'}+bd\,\ket{-'\,-'}. \end{equation}
Taking $1/\sqrt{2}$ times the sum, we get \begin{equation*} \ket{\OS}=\frac{2}{\sqrt{2}}\,ac\,\ket{+'\,+'}+ \frac{ad+bc}{\sqrt{2}}\,\{\ket{+'\,-'}+\ket{-'\,+'}\}+ \frac{2}{\sqrt{2}}\,bd\,\ket{-'\,-'}. \end{equation*}
It follows that \begin{equation} \label{Eq:III:12:52} \ket{\OS}=\sqrt{2}\,ac\,\ket{+T}+(ad+bc)\,\ket{\OT}+ \sqrt{2}\,bd\,\ket{-T}. \end{equation}
We have now all of the amplitudes we wanted. The coefficients of Eqs. (12.48), (12.49), and (12.52) are the matrix elements $\braket{jT}{iS}$. Let’s pull them all together: \begin{equation} \label{Eq:III:12:53} \braket{jT}{iS}= \kern -2.5ex \raise 23 pt {\scriptstyle jT\downarrow \,\raise 7 pt \scriptstyle iS\rightarrow}\kern -19pt % ebook remove % ebook insert: \kern -1.5ex \raise{40pt}{\scriptstyle jT\downarrow \,\raise{7pt}{\scriptstyle iS\rightarrow}}\kern -23pt \begin{pmatrix} a^2 & \sqrt{2}ac & c^2\\[1ex] \sqrt{2}ab & ad\!+\!bc & \sqrt{2}cd\\[1ex] b^2 & \sqrt{2}bd & d^2 \end{pmatrix} \!\! % ebook remove . \end{equation} We have expressed the spin-one transformation in terms of the spin one-half amplitudes $a$, $b$, $c$, and $d$. For instance, if the $T$-frame is rotated with respect to $S$ by the angle $\alpha$ about the $y$-axis—as in Fig. 5-6—the amplitudes in Table 12-4 are just the matrix elements of $R_y(\alpha)$ in Table 6-2. \begin{equation} \begin{alignedat}{2} a\;&=\cos\frac{\alpha}{2},&\quad b\;&=-\sin\frac{\alpha}{2},\\[1ex] c\;&=\sin\frac{\alpha}{2},&\quad d\;&=\cos\frac{\alpha}{2}. \end{alignedat} \label{Eq:III:12:54} \end{equation} Using these in (12.53), we get the formulas of (5.38), which we gave there without proof. Whatever happened to the state $\ketsl{\slIV}$?! Well, it is a spin-zero system, so it has only one state—it is the same in all coordinate systems. We can check that everything works out by taking the difference of Eq. (12.50) and (12.51); we get that \begin{equation*} \ket{+\,-}-\ket{-\,+}=(ad-bc)\{\ket{+'\,-'}-\ket{-'\,+'}\}. \end{equation*} But $(ad-bc)$ is the determinant of the spin one-half matrix, and so is equal to $1$. We get that \begin{equation*} \ketsl{\slIV'}=\ketsl{\slIV} \end{equation*} for any relative orientation of the two coordinate frames.
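As one more check (an editorial aside, not in the original), we can build the matrix of Eq. (12.53) from the $a$, $b$, $c$, $d$ of Eq. (12.54) and verify that its middle element $\braket{\OT}{\OS}=ad+bc$ equals $\cos\alpha$, that its corner element $a^2$ equals $\tfrac{1}{2}(1+\cos\alpha)$ as in (5.38), and that the whole matrix is orthogonal—as it must be for a real rotation:

```python
# Spin-one projection matrix of Eq. (12.53), built from the spin one-half
# amplitudes a, b, c, d of Eq. (12.54) (rotation by alpha about the y-axis).
from math import sin, cos, sqrt, isclose, pi

def spin_one_matrix(alpha):
    a, b = cos(alpha / 2), -sin(alpha / 2)
    c, d = sin(alpha / 2),  cos(alpha / 2)
    s2 = sqrt(2)
    return [[a * a,      s2 * a * c,    c * c],
            [s2 * a * b, a * d + b * c, s2 * c * d],
            [b * b,      s2 * b * d,    d * d]]

alpha = 0.7
M = spin_one_matrix(alpha)

# <0T|0S> = ad + bc = cos^2(alpha/2) - sin^2(alpha/2) = cos(alpha),
# and <+T|+S> = a^2 = (1 + cos(alpha))/2, as in Eq. (5.38).
assert isclose(M[1][1], cos(alpha))
assert isclose(M[0][0], (1 + cos(alpha)) / 2)

# For a rotation about y the matrix is real, so unitarity means the
# columns are orthonormal.
for i in range(3):
    for j in range(3):
        dot = sum(M[k][i] * M[k][j] for k in range(3))
        assert isclose(dot, 1.0 if i == j else 0.0, abs_tol=1e-12)
```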
|
3 | 13 | Propagation in a Crystal Lattice | 1 | States for an electron in a one-dimensional lattice | You would, at first sight, think that a low-energy electron would have great difficulty passing through a solid crystal. The atoms are packed together with their centers only a few angstroms apart, and the effective diameter of the atom for electron scattering is roughly an angstrom or so. That is, the atoms are large, relative to their spacing, so that you would expect the mean free path between collisions to be of the order of a few angstroms—which is practically nothing. You would expect the electron to bump into one atom or another almost immediately. Nevertheless, it is a ubiquitous phenomenon of nature that if the lattice is perfect, the electrons are able to travel through the crystal smoothly and easily—almost as if they were in a vacuum. This strange fact is what lets metals conduct electricity so easily; it has also permitted the development of many practical devices. It is, for instance, what makes it possible for a transistor to imitate the radio tube. In a radio tube electrons move freely through a vacuum, while in the transistor they move freely through a crystal lattice. The machinery behind the behavior of a transistor will be described in this chapter; the next one will describe the application of these principles in various practical devices. The conduction of electrons in a crystal is one example of a very common phenomenon. Not only can electrons travel through crystals, but other “things” like atomic excitations can also travel in a similar manner. So the phenomenon which we want to discuss appears in many ways in the study of the physics of the solid state. You will remember that we have discussed many examples of two-state systems. Let’s now think of an electron which can be in either one of two positions, in each of which it is in the same kind of environment. 
Let’s also suppose that there is a certain amplitude to go from one position to the other, and, of course, the same amplitude to go back, just as we have discussed for the hydrogen molecular ion in Section 10–1. The laws of quantum mechanics then give the following results. There are two possible states of definite energy for the electron. Each state can be described by the amplitude for the electron to be in each of the two basic positions. In either of the definite-energy states, the magnitudes of these two amplitudes are constant in time, and the phases vary in time with the same frequency. On the other hand, if we start the electron in one position, it will later have moved to the other, and still later will swing back again to the first position. The amplitude is analogous to the motions of two coupled pendulums. Now consider a perfect crystal lattice in which we imagine that an electron can be situated in a kind of “pit” at one particular atom and with some particular energy. Suppose also that the electron has some amplitude to move into a different pit at one of the nearby atoms. It is something like the two-state system—but with an additional complication. When the electron arrives at the neighboring atom, it can afterward move on to still another position as well as return to its starting point. Now we have a situation analogous not to two coupled pendulums, but to an infinite number of pendulums all coupled together. It is something like what you see in one of those machines—made with a long row of bars mounted on a torsion wire—that is used in first-year physics to demonstrate wave propagation. If you have a harmonic oscillator which is coupled to another harmonic oscillator, and that one to another, and so on …, and if you start an irregularity in one place, the irregularity will propagate as a wave along the line. The same situation exists if you place an electron at one atom of a long chain of atoms. 
Usually, the simplest way of analyzing the mechanical problem is not to think in terms of what happens if a pulse is started at a definite place, but rather in terms of steady-wave solutions. There exist certain patterns of displacements which propagate through the crystal as a wave of a single, fixed frequency. Now the same thing happens with the electron—and for the same reason, because it’s described in quantum mechanics by similar equations. You must appreciate one thing, however; the amplitude for the electron to be at a place is an amplitude, not a probability. If the electron were simply leaking from one place to another, like water going through a hole, the behavior would be completely different. For example, if we had two tanks of water connected by a tube to permit some leakage from one to the other, then the levels would approach each other exponentially. But for the electron, what happens is amplitude leakage and not just a plain probability leakage. And it’s a characteristic of the imaginary term—the $i$ in the differential equations of quantum mechanics—which changes the exponential solution to an oscillatory solution. What happens then is quite different from the leakage between interconnected tanks. We want now to analyze quantitatively the quantum mechanical situation. Imagine a one-dimensional system made of a long line of atoms as shown in Fig. 13–1(a). (A crystal is, of course, three-dimensional but the physics is very much the same; once you understand the one-dimensional case you will be able to understand what happens in three dimensions.) Next, we want to see what happens if we put a single electron on this line of atoms. Of course, in a real crystal there are already millions of electrons. But most of them (nearly all for an insulating crystal) take up positions in some pattern of motion each around its own atom—and everything is quite stationary. However, we now want to think about what happens if we put an extra electron in. 
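The contrast drawn above between probability leakage (the water tanks) and amplitude leakage can be made concrete with a two-site toy model. The following sketch is a hypothetical numerical illustration with made-up parameter values, in units where $\hbar=1$; it integrates both kinds of equation with a simple Euler step.

```python
import numpy as np

# Two connected water tanks: the *probability* (water level) leaks,
#   dP1/dt = -gamma*(P1 - P2),  dP2/dt = -gamma*(P2 - P1),
# so the levels approach each other exponentially.
gamma, dt, steps = 1.0, 0.001, 5000
P = np.array([1.0, 0.0])
for _ in range(steps):
    P = P - gamma * (P - P[::-1]) * dt

# Two quantum sites: the *amplitude* leaks,
#   i*hbar*dC1/dt = -A*C2  (and vice versa, taking E0 = 0),
# and the factor i turns exponential decay into oscillation.
hbar, A = 1.0, 1.0
C = np.array([1.0, 0.0], dtype=complex)
for _ in range(steps):
    C = C + (1j * A / hbar) * C[::-1] * dt

# After t = 5 the tank levels have equalized (P near [0.5, 0.5]), while
# the quantum probability |C1|^2 is still swinging back and forth.
```

In the quantum case the exact solution is $C_1=\cos(At/\hbar)$, $C_2=i\sin(At/\hbar)$: the probability sloshes between the two sites forever instead of settling down.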
We will not consider what the other ones are doing because we suppose that to change their motion involves a lot of excitation energy. We are going to add an electron as if to produce one slightly bound negative ion. In watching what the one extra electron does we are making an approximation which disregards the mechanics of the inside workings of the atoms. Of course the electron could then move to another atom, transferring the negative ion to another place. We will suppose that just as in the case of an electron jumping between two protons, the electron can jump from one atom to the neighbor on either side with a certain amplitude. Now how do we describe such a system? What will be reasonable base states? If you remember what we did when we had only two possible positions, you can guess how it will go. Suppose that in our line of atoms the spacings are all equal; and that we number the atoms in sequence, as shown in Fig. 13–1(a). One of the base states is that the electron is at atom number $6$, another base state is that the electron is at atom number $7$, or at atom number $8$, and so on. We can describe the $n$th base state by saying that the electron is at atom number $n$. Let’s say that this is the base state $\ket{n}$. Figure 13–1 shows what we mean by the three base states \begin{equation*} \ket{n-1},\:\ket{n},\:\text{and }\:\ket{n+1}. \end{equation*} Using these base states, any state $\ket{\phi}$ of the electron in our one-dimensional crystal can be described by giving all the amplitudes $\braket{n}{\phi}$ that the state $\ket{\phi}$ is in one of the base states—which means the amplitude that it is located at one particular atom. Then we can write the state $\ket{\phi}$ as a superposition of the base states \begin{equation} \label{Eq:III:13:1} \ket{\phi}=\sum_n\ket{n}\braket{n}{\phi}. \end{equation} Next, we are going to suppose that when the electron is at one atom, there is a certain amplitude that it will leak to the atom on either side. 
And we’ll take the simplest case for which it can only leak to the nearest neighbors—to get to the next-nearest neighbor, it has to go in two steps. We’ll take it that the amplitude for the electron to jump from one atom to the next is $iA/\hbar$ (per unit time). For the moment we would like to write the amplitude $\braket{n}{\phi}$ to be on the $n$th atom as $C_n$. Then Eq. (13.1) will be written \begin{equation} \label{Eq:III:13:2} \ket{\phi}=\sum_n\ket{n}C_n. \end{equation} If we knew each of the amplitudes $C_n$ at a given moment, we could take their absolute squares and get the probability that you would find the electron if you looked at atom $n$ at that time. What will the situation be at some later time? By analogy with the two-state systems we have studied, we would propose that the Hamiltonian equations for this system should be made up of equations like this: \begin{equation} \label{Eq:III:13:3} i\hbar\,\ddt{C_n(t)}{t}=E_0C_n(t)-AC_{n+1}(t)-AC_{n-1}(t). \end{equation} The first coefficient on the right, $E_0$, is, physically, the energy the electron would have if it couldn’t leak away from one of the atoms. (It doesn’t matter what we call $E_0$; as we have seen many times, it represents really nothing but our choice of the zero of energy.) The next term represents the amplitude per unit time that the electron is leaking into the $n$th pit from the $(n+1)$st pit; and the last term is the amplitude for leakage from the $(n-1)$st pit. As usual, we’ll assume that $A$ is a constant (independent of $t$). For a full description of the behavior of any state $\ket{\phi}$, we would have one equation like (13.3) for every one of the amplitudes $C_n$. Since we want to consider a crystal with a very large number of atoms, we’ll assume that there are an indefinitely large number of states—that the atoms go on forever in both directions. (To do the finite case, we will have to pay special attention to what happens at the ends.)
If the number $N$ of our base states is indefinitely large, then also our full Hamiltonian equations are infinite in number! We’ll write down just a sample: \begin{equation} \begin{aligned} \vdots\qquad&\qquad\qquad\qquad\qquad\:\:\vdots\\ i\hbar\,\ddt{C_{n-1}}{t}&= E_0C_{n-1}-AC_{n-2}-AC_n,\\[1ex] i\hbar\,\ddt{C_n}{t}&= E_0C_n-AC_{n-1}-AC_{n+1},\\[1ex] i\hbar\,\ddt{C_{n+1}}{t}&= E_0C_{n+1}-AC_n-AC_{n+2},\\[1ex] \vdots\qquad&\qquad\qquad\qquad\qquad\:\:\vdots \end{aligned} \label{Eq:III:13:4} \end{equation} |
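As a check on this picture, the equations of (13.4) can be solved numerically on a finite ring of atoms; the ring stands in for the infinite chain as long as the amplitude has not had time to run all the way around. This sketch uses illustrative values of $E_0$ and $A$, in units with $\hbar=1$, and evolves an electron that starts definitely on atom $0$.

```python
import numpy as np

# The chain equations (13.4), i*hbar dC_n/dt = E0*C_n - A*C_{n-1} - A*C_{n+1},
# on a ring of N atoms, starting with the electron at atom 0.
N, E0, A, hbar = 64, 0.0, 1.0, 1.0
H = E0 * np.eye(N) - A * (np.eye(N, k=1) + np.eye(N, k=-1))
H[0, -1] = H[-1, 0] = -A              # close the chain into a ring

C0 = np.zeros(N, dtype=complex)
C0[0] = 1.0

# Exact evolution C(t) = exp(-i H t / hbar) C(0) via eigendecomposition.
t = 3.0
w, V = np.linalg.eigh(H)
C = V @ (np.exp(-1j * w * t / hbar) * (V.conj().T @ C0))

P = np.abs(C) ** 2
# Total probability is conserved, and the electron has leaked away from
# atom 0 symmetrically toward both sides of the ring.
```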
|
3 | 13 | Propagation in a Crystal Lattice | 2 | States of definite energy | We could study many things about an electron in a lattice, but first let’s try to find the states of definite energy. As we have seen in earlier chapters this means that we have to find a situation in which the amplitudes all change at the same frequency if they change with time at all. We look for solutions of the form \begin{equation} \label{Eq:III:13:5} C_n=a_ne^{-iEt/\hbar}. \end{equation} The complex numbers $a_n$ tell us about the non-time-varying part of the amplitude to find the electron at the $n$th atom. If we put this trial solution into the equations of (13.4) to test them out, we get the result \begin{equation} \label{Eq:III:13:6} Ea_n=E_0a_n-Aa_{n+1}-Aa_{n-1}. \end{equation} We have an infinite number of such equations for the infinite number of unknowns $a_n$—which is rather petrifying. All we have to do is take the determinant … but wait! Determinants are fine when there are $2$, $3$, or $4$ equations. But if there are a large number—or an infinite number—of equations, the determinants are not very convenient. We’d better just try to solve the equations directly. First, let’s label the atoms by their positions; we’ll say that the atom $n$ is at $x_n$ and the atom $(n+1)$ is at $x_{n+1}$. If the atomic spacing is $b$—as in Fig. 13–1—we will have that $x_{n+1}=x_n+b$. By choosing our origin at atom zero, we can even have it that $x_n=nb$. We can rewrite Eq. (13.5) as \begin{equation} \label{Eq:III:13:7} C_n=a(x_n)e^{-iEt/\hbar}, \end{equation} and Eq. (13.6) would become \begin{equation} \label{Eq:III:13:8} Ea(x_n)=E_0a(x_n)-Aa(x_{n+1})-Aa(x_{n-1}). \end{equation} Or, using the fact that $x_{n+1}=x_n+b$, we could also write \begin{equation} \label{Eq:III:13:9} Ea(x_n)=E_0a(x_n)\!-\!Aa(x_n\!+b)\!-\!Aa(x_n\!-b). \end{equation} This equation is somewhat similar to a differential equation. 
It tells us that a quantity, $a(x)$, at one point, ($x_n$), is related to the same physical quantity at some neighboring points, ($x_n\pm b$). (A differential equation relates the value of a function at a point to the values at infinitesimally nearby points.) Perhaps the methods we usually use for solving differential equations will also work here; let’s try. Linear differential equations with constant coefficients can always be solved in terms of exponential functions. We can try the same thing here; let’s take as a trial solution \begin{equation} \label{Eq:III:13:10} a(x_n)=e^{ikx_n}. \end{equation} Then Eq. (13.9) becomes \begin{equation} \label{Eq:III:13:11} Ee^{ikx_n}=E_0e^{ikx_n}-Ae^{ik(x_n+b)}-Ae^{ik(x_n-b)}. \end{equation} We can now divide out the common factor $e^{ikx_n}$; we get \begin{equation} \label{Eq:III:13:12} E=E_0-Ae^{ikb}-Ae^{-ikb}. \end{equation} The last two terms are just equal to $(2A\cos kb)$, so \begin{equation} \label{Eq:III:13:13} E=E_0-2A\cos kb. \end{equation} We have found that for any choice at all for the constant $k$ there is a solution whose energy is given by this equation. There are various possible energies depending on $k$, and each $k$ corresponds to a different solution. There are an infinite number of solutions—which is not surprising, since we started out with an infinite number of base states. Let’s see what these solutions mean. For each $k$, the $a$’s are given by Eq. (13.10). The amplitudes $C_n$ are then given by \begin{equation} \label{Eq:III:13:14} C_n=e^{ikx_n}e^{-(i/\hbar)Et}, \end{equation} where you should remember that the energy $E$ also depends on $k$ as given in Eq. (13.13). The space dependence of the amplitudes is $e^{ikx_n}$. The amplitudes oscillate as we go along from one atom to the next. We mean that, in space, the amplitude goes as a complex oscillation—the magnitude is the same at every atom, but the phase at a given time advances by the amount $(ikb)$ from one atom to the next. 
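One way to confirm the result (13.13) is to diagonalize the Hamiltonian matrix of the chain equations directly for a ring of $N$ atoms, for which the allowed wave numbers are $k=2\pi m/Nb$. The sketch below, with illustrative parameter values, compares the numerical eigenvalues with $E_0-2A\cos kb$.

```python
import numpy as np

# Tight-binding Hamiltonian on a ring of N atoms with spacing b.
N, E0, A, b = 100, 5.0, 1.3, 1.0
H = E0 * np.eye(N) - A * (np.eye(N, k=1) + np.eye(N, k=-1))
H[0, -1] = H[-1, 0] = -A              # periodic boundary

eigs = np.sort(np.linalg.eigvalsh(H))

# Allowed wave numbers on the ring and the dispersion law, Eq. (13.13).
ks = 2 * np.pi * np.arange(N) / (N * b)
predicted = np.sort(E0 - 2 * A * np.cos(ks * b))
# eigs and predicted agree to machine precision.
```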
We can visualize what is going on by plotting a vertical line to show just the real part at each atom as we have done in Fig. 13–2. The envelope of these vertical lines (as shown by the broken-line curve) is, of course, a cosine curve. The imaginary part of $C_n$ is also an oscillating function, but is shifted $90^\circ$ in phase so that the absolute square (which is the sum of the squares of the real and imaginary parts) is the same for all the $C$’s. Thus if we pick a $k$, we get a stationary state of a particular energy $E$. And for any such state, the electron is equally likely to be found at every atom—there is no preference for one atom or the other. Only the phase is different for different atoms. Also, as time goes on the phases vary. From Eq. (13.14) the real and imaginary parts propagate along the crystal as waves—namely as the real or imaginary parts of \begin{equation} \label{Eq:III:13:15} e^{i[kx_n-(E/\hbar)t]}. \end{equation} The wave can travel toward positive or negative $x$ depending on the sign we have picked for $k$. Notice that we have been assuming that the number $k$ that we put in our trial solution, Eq. (13.10), was a real number. We can see now why that must be so if we have an infinite line of atoms. Suppose that $k$ were an imaginary number, say $ik'$. Then the amplitudes $a_n$ would go as $e^{-k'x_n}$, which means that the amplitude would get larger and larger as we go toward large negative $x$’s—or toward large positive $x$’s if $k'$ is a negative number. This kind of solution would be O.K. if we were dealing with a line of atoms that ended, but cannot be a physical solution for an infinite chain of atoms. It would give infinite amplitudes—and, therefore, infinite probabilities—which can’t represent a real situation. Later on we will see an example in which an imaginary $k$ does make sense. The relation between the energy $E$ and the wave number $k$ as given in Eq. (13.13) is plotted in Fig. 13–3. 
As you can see from the figure, the energy can go from $(E_0-2A)$ at $k=0$ to $(E_0+2A)$ at $k=\pm\pi/b$. The graph is plotted for positive $A$; if $A$ were negative, the curve would simply be inverted, but the range would be the same. The significant result is that any energy is possible within a certain range or “band” of energies, but no others. According to our assumptions, if an electron in a crystal is in a stationary state, it can have no energy other than values in this band. According to Eq. (13.13), the smallest $k$’s correspond to low-energy states—$E\approx(E_0-2A)$. As $k$ increases in magnitude (toward either positive or negative values) the energy at first increases, but then reaches a maximum at $k=\pm\pi/b$, as shown in Fig. 13–3. For $k$’s larger than $\pi/b$, the energy would start to decrease again. But we do not really need to consider such values of $k$, because they do not give new states—they just repeat states we already have for smaller $k$. We can see that in the following way. Consider the lowest energy state for which $k=0$. The coefficient $a(x_n)$ is the same for all $x_n$. Now we would get the same energy for $k=2\pi/b$. But then, using Eq. (13.10), we have that \begin{equation*} a(x_n)=e^{i(2\pi/b)x_n}. \end{equation*} However, taking $x_0$ to be at the origin, we can set $x_n=nb$; then $a(x_n)$ becomes \begin{equation*} a(x_n)=e^{i2\pi n}=1. \end{equation*} The state described by these $a(x_n)$ is physically the same state we got for $k=0$. It does not represent a different solution. As another example, suppose that $k$ were $-\pi/4b$. The real part of $a(x_n)$ would vary as shown by curve $1$ in Fig. 13–4. If $k$ were $7\pi/4b$, the real part of $a(x_n)$ would vary as shown by curve $2$ in the figure. (The complete cosine curves don’t mean anything, of course; all that matters is their values at the points $x_n$. The curves are just to help you see how things are going.) 
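The claim that $k$ and $k+2\pi/b$ describe the same state is easy to verify numerically; this sketch uses the two values of $k$ from Fig. 13-4, with $b=1$ as an illustrative spacing.

```python
import numpy as np

# Amplitudes a(x_n) = exp(i k x_n) at the lattice points x_n = n*b
# for k and for k + 2*pi/b.
b = 1.0
n = np.arange(20)
x = n * b

k1 = -np.pi / (4 * b)          # curve 1 in Fig. 13-4
k2 = k1 + 2 * np.pi / b        # = 7*pi/(4b), curve 2

a1 = np.exp(1j * k1 * x)
a2 = np.exp(1j * k2 * x)
# a1 == a2 at every lattice site; the two cosine curves differ only
# in between the atoms, where there is no amplitude to compare.
```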
You see that both values of $k$ give the same amplitudes at all of the $x_n$’s. The upshot is that we have all the possible solutions of our problem if we take only $k$’s in a certain limited range. We’ll pick the range between $-\pi/b$ and $+\pi/b$—the one shown in Fig. 13–3. In this range, the energy of the stationary states increases uniformly with an increase in the magnitude of $k$. One side remark about something you can play with. Suppose that the electron can not only jump to the nearest neighbor with amplitude $iA/\hbar$, but also has the possibility to jump in one direct leap to the next nearest neighbor with some other amplitude $iB/\hbar$. You will find that the solution can again be written in the form $a_n=e^{ikx_n}$—this type of solution is universal. You will also find that the stationary states with wave number $k$ have an energy equal to $(E_0-2A\cos kb-2B\cos 2kb)$. This shows that the shape of the curve of $E$ against $k$ is not universal, but depends upon the particular assumptions of the problem. It is not always a cosine wave—it’s not even necessarily symmetrical about some horizontal line. It is true, however, that the curve always repeats itself outside of the interval from $-\pi/b$ to $\pi/b$, so you never need to worry about other values of $k$. Let’s look a little more closely at what happens for small $k$—that is, when the variations of the amplitudes from one $x_n$ to the next are quite slow. Suppose we choose our zero of energy by defining $E_0=2A$; then the minimum of the curve in Fig. 13–3 is at the zero of energy. For small enough $k$, we can write that \begin{equation*} \cos kb\approx1-k^2b^2/2, \end{equation*} and the energy of Eq. (13.13) becomes \begin{equation} \label{Eq:III:13:16} E=Ak^2b^2. \end{equation} We have that the energy of the state is proportional to the square of the wave number which describes the spatial variations of the amplitudes $C_n$.
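The side remark about next-nearest-neighbor jumps can also be checked by diagonalization: with hopping amplitudes for both ranges, the spectrum of a ring should follow $E_0-2A\cos kb-2B\cos 2kb$. A sketch with illustrative values:

```python
import numpy as np

# Ring of N atoms with nearest-neighbor amplitude iA/hbar and
# next-nearest-neighbor amplitude iB/hbar.
N, E0, A, B, b = 120, 0.0, 1.0, 0.4, 1.0
H = E0 * np.eye(N)
H -= A * (np.eye(N, k=1) + np.eye(N, k=-1))
H -= B * (np.eye(N, k=2) + np.eye(N, k=-2))
# Wrap both hopping ranges around the ring.
H[0, -1] = H[-1, 0] = -A
H[0, -2] = H[-2, 0] = H[1, -1] = H[-1, 1] = -B

eigs = np.sort(np.linalg.eigvalsh(H))
ks = 2 * np.pi * np.arange(N) / (N * b)
predicted = np.sort(E0 - 2 * A * np.cos(ks * b) - 2 * B * np.cos(2 * ks * b))
# The spectrum matches, but the band is no longer a pure cosine of k.
```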
|
3 | 13 | Propagation in a Crystal Lattice | 3 | Time-dependent states | In this section we would like to discuss the behavior of states in the one-dimensional lattice in more detail. If the amplitude for an electron to be at $x_n$ is $C_n$, the probability of finding it there is $\abs{C_n}^2$. For the stationary states described by Eq. (13.14), this probability is the same for all $x_n$ and does not change with time. How can we represent a situation which we would describe roughly by saying an electron of a certain energy is localized in a certain region—so that it is more likely to be found at one place than at some other place? We can do that by making a superposition of several solutions like Eq. (13.14) with slightly different values of $k$—and, therefore, slightly different energies. Then at $t=0$, at least, the amplitude $C_n$ will vary with position because of the interference between the various terms, just as one gets beats when there is a mixture of waves of different wavelengths (as we discussed in Chapter 48, Vol. I). So we can make up a “wave packet” with a predominant wave number $k_0$, but with various other wave numbers near $k_0$. In our superposition of stationary states, the amplitudes with different $k$’s will represent states of slightly different energies, and, therefore, of slightly different frequencies; the interference pattern of the total $C_n$ will, therefore, also vary with time—there will be a pattern of “beats.” As we have seen in Chapter 48 of Volume I, the peaks of the beats [the place where $\abs{C(x_n)}^2$ is large] will move along in $x$ as time goes on; they move with the speed we have called the “group velocity.” We found that this group velocity was related to the variation of frequency with $k$ by \begin{equation} \label{Eq:III:13:17} v_{\text{group}}=\ddt{\omega}{k}; \end{equation} the same derivation would apply equally well here.
An electron state which is a “clump”—namely one for which the $C_n$ vary in space like the wave packet of Fig. 13–5—will move along our one-dimensional “crystal” with the speed $v$ equal to $d\omega/dk$, where $\omega=E/\hbar$. Using (13.16) for $E$, we get that \begin{equation} \label{Eq:III:13:18} v=\frac{2Ab^2}{\hbar}\,k. \end{equation} In other words, the electrons move along with a speed proportional to the typical $k$. Equation (13.16) then says that the energy of such an electron is proportional to the square of its velocity—it acts like a classical particle. So long as we look at things on a scale gross enough that we don’t see the fine structure, our quantum mechanical picture begins to give results like classical physics. In fact, if we solve Eq. (13.18) for $k$ and substitute into (13.16), we can write \begin{equation} \label{Eq:III:13:19} E=\tfrac{1}{2}m_{\text{eff}}\,v^2, \end{equation} where $m_{\text{eff}}$ is a constant. The extra “energy of motion” of the electron in a packet depends on the velocity just as for a classical particle. The constant $m_{\text{eff}}$—called the “effective mass”—is given by \begin{equation} \label{Eq:III:13:20} m_{\text{eff}}=\frac{\hbar^2}{2Ab^2}. \end{equation} Also notice that we can write \begin{equation} \label{Eq:III:13:21} m_{\text{eff}}\,v=\hbar k. \end{equation} If we choose to call $m_{\text{eff}}\,v$ the “momentum,” it is related to the wave number $k$ in the way we have described earlier for a free particle. Don’t forget that $m_{\text{eff}}$ has nothing to do with the real mass of an electron. It may be quite different—although in commonly used metals and semiconductors it often happens to turn out to be the same general order of magnitude, about $0.1$ to $30$ times the free-space mass of the electron. 
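The chain of reasoning from the dispersion law to the effective mass can be checked in a few lines; this sketch (illustrative values, units with $\hbar=1$) compares the exact group velocity $(1/\hbar)\,dE/dk$ with the small-$k$ form of Eq. (13.18), and confirms the momentum relation of Eq. (13.21).

```python
import numpy as np

hbar, A, b = 1.0, 1.0, 1.0

def E(k):
    # Eq. (13.13) with the zero of energy chosen (E0 = 2A) so that the
    # bottom of the band is at E = 0.
    return 2 * A - 2 * A * np.cos(k * b)

k0 = 0.05 / b                     # small k, near the band bottom
dk = 1e-6                         # step for a numerical derivative

v_exact = (E(k0 + dk) - E(k0 - dk)) / (2 * dk) / hbar   # (1/hbar) dE/dk
v_approx = 2 * A * b**2 * k0 / hbar                     # Eq. (13.18)

m_eff = hbar**2 / (2 * A * b**2)                        # Eq. (13.20)
# v_exact agrees with v_approx to a few parts in 10^4 at this k, and
# m_eff * v_approx equals hbar * k0, which is Eq. (13.21).
```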
We have now explained a remarkable mystery—how an electron in a crystal (like an extra electron put into germanium) can ride right through the crystal and flow perfectly freely even though it has to hit all the atoms. It does so by having its amplitudes going pip-pip-pip from one atom to the next, working its way through the crystal. That is how a solid can conduct electricity. |
|
3 | 13 | Propagation in a Crystal Lattice | 4 | An electron in a three-dimensional lattice | Let’s look for a moment at how we could apply the same ideas to see what happens to an electron in three dimensions. The results turn out to be very similar. Suppose we have a rectangular lattice of atoms with lattice spacings of $a$, $b$, $c$ in the three directions. (If you want a cubic lattice, take the three spacings all equal.) Also suppose that the amplitude to leap in the $x$-direction to a neighbor is $(iA_x/\hbar)$, to leap in the $y$-direction is $(iA_y/\hbar)$, and to leap in the $z$-direction is $(iA_z/\hbar)$. Now how should we describe the base states? As in the one-dimensional case, one base state is that the electron is at the atom whose locations are $x$, $y$, $z$, where $(x,y,z)$ is one of the lattice points. Choosing our origin at one atom, these points are all at \begin{equation*} x=n_xa,\quad y=n_yb,\quad \text{and}\quad z=n_zc, \end{equation*} where $n_x$, $n_y$, $n_z$ are any three integers. Instead of using subscripts to indicate such points, we will now just use $x$, $y$, and $z$, understanding that they take on only their values at the lattice points. Thus the base state is represented by the symbol $\ket{\text{electron at \(x,y,z\)}}$, and the amplitude for an electron in some state $\ket{\psi}$ to be in this base state is $C(x,y,z)=\braket{\text{electron at \(x,y,z\)}}{\psi}$. As before, the amplitudes $C(x,y,z)$ may vary with time. With our assumptions, the Hamiltonian equations should be like this: \begin{align} i\hbar\dfrac{dC(x,y,z)}{dt}=E_0C(x,y,z) &-A_xC(x+a,y,z)-A_xC(x-a,y,z)\notag\\[-1pt] &-A_yC(x,y+b,z)-A_yC(x,y-b,z)\notag\\[4pt] \label{Eq:III:13:22} &-A_zC(x,y,z+c)-A_zC(x,y,z-c). \end{align}
It looks rather long, but you can see where each term comes from. Again we can try to find a stationary state in which all the $C$’s vary with time in the same way. Again the solution is an exponential: \begin{equation} \label{Eq:III:13:23} C(x,y,z)=e^{-iEt/\hbar}e^{i(k_xx+k_yy+k_zz)}. \end{equation} If you substitute this into (13.22) you see that it works, provided that the energy $E$ is related to $k_x$, $k_y$, and $k_z$ in the following way: \begin{equation} \label{Eq:III:13:24} E=E_0\!-\!2A_x\!\cos k_xa\!-\!2A_y\!\cos k_yb\!-\!2A_z\!\cos k_zc. \end{equation} The energy now depends on the three wave numbers $k_x$, $k_y$, $k_z$, which, incidentally, are the components of a three-dimensional vector $\FLPk$. In fact, we can write Eq. (13.23) in vector notation as \begin{equation} \label{Eq:III:13:25} C(x,y,z)=e^{-iEt/\hbar}e^{i\FLPk\cdot\FLPr}. \end{equation} The amplitude varies as a complex plane wave in three dimensions, moving in the direction of $\FLPk$, and with the wave number $k=(k_x^2+k_y^2+k_z^2)^{1/2}$. The energy associated with these stationary states depends on the three components of $\FLPk$ in the complicated way given in Eq. (13.24). The nature of the variation of $E$ with $\FLPk$ depends on relative signs and magnitudes of $A_x$, $A_y$, and $A_z$. If these three numbers are all positive, and if we are interested in small values of $k$, the dependence is relatively simple. Expanding the cosines as we did before to get Eq. (13.16), we can now get that \begin{equation} \label{Eq:III:13:26} E=E_{\text{min}}+A_xa^2k_x^2+A_yb^2k_y^2+A_zc^2k_z^2.
\end{equation} For a simple cubic lattice with lattice spacing $a$ we expect that $A_x$ and $A_y$ and $A_z$ would be equal—say all are just $A$—and we would have just \begin{equation} E=E_{\text{min}}+Aa^2(k_x^2+k_y^2+k_z^2),\notag \end{equation} or \begin{equation} \label{Eq:III:13:27} E=E_{\text{min}}+Aa^2k^2. \end{equation} This is just like Eq. (13.16). Following the arguments used there, we would conclude that an electron packet in three dimensions (made up by superposing many states with nearly equal energies) also moves like a classical particle with some effective mass. In a crystal with a lower symmetry than cubic (or even in a cubic crystal in which the state of the electron at each atom is not symmetrical) the three coefficients $A_x$, $A_y$, and $A_z$ are different. Then the “effective mass” of an electron localized in a small region depends on its direction of motion. It could, for instance, have a different inertia for motion in the $x$-direction than for motion in the $y$-direction. (The details of such a situation are sometimes described in terms of an “effective mass tensor.”) |
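The small-$k$ expansion (13.26) is easy to test against the exact band energy (13.24); in this sketch the anisotropic amplitudes $A_x$, $A_y$, $A_z$ and the spacings are illustrative values.

```python
import numpy as np

# Exact 3-D band energy, Eq. (13.24), versus its quadratic expansion,
# Eq. (13.26), with E_min = E0 - 2*(Ax + Ay + Az).
E0, Ax, Ay, Az = 0.0, 1.0, 0.8, 1.2
a, b, c = 1.0, 1.0, 1.0
Emin = E0 - 2 * (Ax + Ay + Az)

def E_exact(kx, ky, kz):
    return (E0 - 2 * Ax * np.cos(kx * a)
               - 2 * Ay * np.cos(ky * b)
               - 2 * Az * np.cos(kz * c))

def E_quad(kx, ky, kz):
    return Emin + Ax * a**2 * kx**2 + Ay * b**2 * ky**2 + Az * c**2 * kz**2

k = (0.05, 0.03, 0.04)        # a small wave vector
# The two expressions differ only in fourth order in k.
```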
|
3 | 13 | Propagation in a Crystal Lattice | 5 | Other states in a lattice | According to Eq. (13.24) the electron states we have been talking about can have energies only in a certain “band” of energies which covers the energy range from the minimum energy \begin{equation*} E_0-2(A_x+A_y+A_z)\hphantom{.} \end{equation*} to the maximum energy \begin{equation*} E_0+2(A_x+A_y+A_z). \end{equation*} Other energies are possible, but they belong to a different class of electron states. For the states we have described, we imagined base states in which an electron is placed on an atom of the crystal in some particular state, say the lowest energy state. If you have an atom in empty space, and add an electron to make an ion, the ion can be formed in many ways. The electron can go on in such a way as to make the state of lowest energy, or it can go on to make one or another of many possible “excited states” of the ion each with a definite energy above the lowest energy. The same thing can happen in a crystal. Let’s suppose that the energy $E_0$ we picked above corresponds to base states which are ions of the lowest possible energy. We could also imagine a new set of base states in which the electron sits near the $n$th atom in a different way—in one of the excited states of the ion—so that the energy $E_0$ is now quite a bit higher. As before there is some amplitude $A$ (different from before) that the electron will jump from its excited state at one atom to the same excited state at a neighboring atom. The whole analysis goes as before; we find a band of possible energies centered at a higher energy. There can, in general, be many such bands each corresponding to a different level of excitation. There are also other possibilities. There may be some amplitude that the electron jumps from an excited condition at one atom to an unexcited condition at the next atom. (This is called an interaction between bands.) 
The mathematical theory gets more and more complicated as you take into account more and more bands and add more and more coefficients for leakage between the possible states. No new ideas are involved, however; the equations are set up much as we have done in our simple example. We should remark also that there is not much more to be said about the various coefficients, such as the amplitude $A$, which appear in the theory. Generally they are very hard to calculate, so in practical cases very little is known theoretically about these parameters and for any particular real situation we can only take values determined experimentally. There are other situations where the physics and mathematics are almost exactly like what we have found for an electron moving in a crystal, but in which the “object” that moves is quite different. For instance, suppose that our original crystal—or rather linear lattice—was a line of neutral atoms, each with a loosely bound outer electron. Then imagine that we were to remove one electron. Which atom has lost its electron? Let $C_n$ now represent the amplitude that the electron is missing from the atom at $x_n$. There will, in general, be some amplitude $iA/\hbar$ that the electron at a neighboring atom—say the $(n-1)$st atom—will jump to the $n$th leaving the $(n-1)$st atom without its electron. This is the same as saying that there is an amplitude $iA/\hbar$ for the “missing electron” to jump from the $n$th atom to the $(n-1)$st atom. You can see that the equations will be exactly the same—of course, the value of $A$ need not be the same as we had before. Again we will get the same formulas for the energy levels, for the “waves” of probability which move through the crystal with the group velocity of Eq. (13.18), for the effective mass, and so on. Only now the waves describe the behavior of the missing electron—or “hole” as it is called. So a “hole” acts just like a particle with a certain mass $m_{\text{eff}}$. 
You can see that this particle will appear to have a positive charge. We’ll have some more to say about such holes in the next chapter. As another example, we can think of a line of identical neutral atoms one of which has been put into an excited state—that is, with more than its normal ground state energy. Let $C_n$ be the amplitude that the $n$th atom has the excitation. It can interact with a neighboring atom by handing over to it the extra energy and returning to the ground state. Call the amplitude for this process $iA/\hbar$. You can see that it’s the same mathematics all over again. Now the object which moves is called an exciton. It behaves like a neutral “particle” moving through the crystal, carrying the excitation energy. Such motion may be involved in certain biological processes such as vision, or photosynthesis. It has been guessed that the absorption of light in the retina produces an “exciton” which moves through some periodic structure (such as the layers in the rods we described in Chapter 36, Vol. I; see Fig. 36–5) to be accumulated at some special station where the energy is used to induce a chemical reaction. |
|
3 | 13 | Propagation in a Crystal Lattice | 6 | Scattering from imperfections in the lattice | We want now to consider the case of a single electron in a crystal which is not perfect. Our earlier analysis says that perfect crystals have perfect conductivity—that electrons can go slipping through the crystal, as in a vacuum, without friction. One of the most important things that can stop an electron from going on forever is an imperfection or irregularity in the crystal. As an example, suppose that somewhere in the crystal there is a missing atom; or suppose that someone put one wrong atom at one of the atomic sites so that things there are different than at the other atomic sites. Say the energy $E_0$ or the amplitude $A$ could be different. How would we describe what happens then? To be specific, we will return to the one-dimensional case and we will assume that atom number “zero” is an “impurity” atom and has a different value of $E_0$ than any of the other atoms. Let’s call this energy $(E_0+F)$. What happens? When an electron arrives at atom “zero” there is some probability that the electron is scattered backwards. If a wave packet is moving along and it reaches a place where things are a little bit different, some of it will continue onward and some of it will bounce back. It’s quite difficult to analyze such a situation using a wave packet, because everything varies in time. It is much easier to work with steady-state solutions. So we will work with stationary states, which we will find can be made up of continuous waves which have transmitted and reflected parts. In three dimensions we would call the reflected part the scattered wave, since it would spread out in various directions. We start out with a set of equations which are just like the ones in Eq. (13.6) except that the equation for $n=0$ is different from all the rest. 
The five equations for $n=-2$, $-1$, $0$, $+1$, and $+2$ look like this: \begin{equation} \begin{aligned} \vdots\quad&\qquad\qquad\qquad\quad\vdots\\ Ea_{-2}&=E_0a_{-2}-Aa_{-1}-Aa_{-3},\\[2pt] Ea_{-1}&=E_0a_{-1}-Aa_0-Aa_{-2},\\[2pt] Ea_0&=(E_0+F)a_0-Aa_1-Aa_{-1},\\[2pt] Ea_1&=E_0a_1-Aa_2-Aa_0,\\[2pt] Ea_2&=E_0a_2-Aa_3-Aa_1,\\ \vdots\quad&\qquad\qquad\qquad\quad\vdots \end{aligned} \label{Eq:III:13:28} \end{equation} There are, of course, all the other equations for $\abs{n}$ greater than 2. They will look just like Eq. (13.6). For the general case, we really ought to use a different $A$ for the amplitude that the electron jumps to or from atom “zero,” but the main features of what goes on will come out of a simplified example in which all the $A$’s are equal. Equation (13.10) would still work as a solution for all of the equations except the one for atom “zero”—it isn’t right for that one equation. We need a different solution which we can cook up in the following way. Equation (13.10) represents a wave going in the positive $x$-direction. A wave going in the negative $x$-direction would have been an equally good solution. It would be written \begin{equation*} a(x_n)=e^{-ikx_n}. \end{equation*} The most general solution we could have taken for Eq. (13.6) would be a combination of a forward and a backward wave, namely \begin{equation} \label{Eq:III:13:29} a_n=\alpha e^{ikx_n}+\beta e^{-ikx_n}. \end{equation} This solution represents a complex wave of amplitude $\alpha$ moving in the $+x$-direction and a wave of amplitude $\beta$ moving in the $-x$-direction. Now take a look at the set of equations for our new problem—the ones in (13.28) together with those for all the other atoms. The equations involving $a_n$’s with $n\leq-1$ are all satisfied by Eq. (13.29), with the condition that $k$ is related to $E$ and the lattice spacing $b$ by \begin{equation} \label{Eq:III:13:30} E=E_0-2A\cos kb. 
\end{equation} The physical meaning is an “incident” wave of amplitude $\alpha$ approaching atom “zero” (the “scatterer”) from the left, and a “scattered” or “reflected” wave of amplitude $\beta$ going back toward the left. We do not lose any generality if we set the amplitude $\alpha$ of the incident wave equal to $1$. Then the amplitude $\beta$ is, in general, a complex number. We can say all the same things about the solutions of $a_n$ for $n\geq1$. The coefficients could be different, so we would have for them \begin{equation} \label{Eq:III:13:31} a_n=\gamma e^{ikx_n}+\delta e^{-ikx_n},\quad \text{for}\quad n\geq1. \end{equation} Here, $\gamma$ is the amplitude of a wave going to the right and $\delta$ a wave coming from the right. We want to consider the physical situation in which a wave is originally started only from the left, and there is only a “transmitted” wave that comes out beyond the scatterer—or impurity atom. We will try for a solution in which $\delta=0$. We can, certainly, satisfy all of the equations for the $a_n$ except for the middle three in Eq. (13.28) by the following trial solutions. \begin{align} a_n\:(\text{for $n<0$}) &=e^{ikx_n}+\beta e^{-ikx_n},\notag\\[1ex] \label{Eq:III:13:32} a_n\:(\text{for $n>0$}) &=\gamma e^{ikx_n}. \end{align} The situation we are talking about is illustrated in Fig. 13–6. By using the formulas in Eq. (13.32) for $a_{-1}$ and $a_{+1}$, the three middle equations of Eq. (13.28) will allow us to solve for $a_0$ and also for the two coefficients $\beta$ and $\gamma$. So we have found a complete solution. Setting $x_n=nb$, we have to solve the three equations \begin{equation} \begin{gathered} (E-E_0)\{e^{ik(-b)}+\beta e^{-ik(-b)}\}= -A\{a_0+e^{ik(-2b)}+\beta e^{-ik(-2b)}\},\\[1ex] (E-E_0-F)a_0= -A\{\gamma e^{ikb}+e^{ik(-b)}+\beta e^{-ik(-b)}\},\\[1ex] (E-E_0)\gamma e^{ikb}= -A\{\gamma e^{ik(2b)}+a_0\}. \end{gathered} \label{Eq:III:13:33} \end{equation}
Remember that $E$ is given in terms of $k$ by Eq. (13.30). If you substitute this value for $E$ into the equations, and remember that $\cos x=\tfrac{1}{2}(e^{ix}+e^{-ix})$, you get from the first equation that \begin{equation} \label{Eq:III:13:34} a_0=1+\beta, \end{equation} and from the third equation that \begin{equation} \label{Eq:III:13:35} a_0=\gamma. \end{equation} These are consistent only if \begin{equation} \label{Eq:III:13:36} \gamma=1+\beta. \end{equation} This equation says that the transmitted wave ($\gamma$) is just the original incident wave ($1$) with an added wave ($\beta$) equal to the reflected wave. This is not always true, but happens to be so for a scattering at one atom only. If there were a clump of impurity atoms, the amount added to the forward wave would not necessarily be the same as the reflected wave. We can get the amplitude $\beta$ of the reflected wave from the middle equation of Eq. (13.33); we find that \begin{equation} \label{Eq:III:13:37} \beta=\frac{-F}{F-2iA\sin kb}. \end{equation} We have the complete solution for the lattice with one unusual atom. You may be wondering how the transmitted wave can be “more” than the incident wave as it appears in Eq. (13.34). Remember, though, that $\beta$ and $\gamma$ are complex numbers and that the number of particles (or rather, the probability of finding a particle) in a wave is proportional to the absolute square of the amplitude. In fact, there will be “conservation of electrons” only if \begin{equation} \label{Eq:III:13:38} \abs{\beta}^2+\abs{\gamma}^2=1. \end{equation} You can show that this is true for our solution. |
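The conservation condition can be checked in a few lines of Python; the values chosen for $A$, $F$, $b$, and $k$ below are arbitrary illustrations, not values from the text:

```python
import math

A, F, b = 1.0, 0.5, 1.0   # illustrative values, not from the text
k = 0.7                   # any wave number inside the band

beta = -F / (F - 2j*A*math.sin(k*b))   # reflected amplitude, Eq. (13.37)
gamma = 1 + beta                        # transmitted amplitude, Eq. (13.36)

# Eq. (13.38): "conservation of electrons"
print(abs(beta)**2 + abs(gamma)**2)     # ~ 1.0, to rounding error
```

The identity holds for every $k$ in the band, since $\gamma=1+\beta$ with $\beta$ given by Eq. (13.37) makes $\abs{\beta}^2+\abs{\gamma}^2$ identically one.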
|
3 | 13 | Propagation in a Crystal Lattice | 7 | Trapping by a lattice imperfection | There is another interesting situation that can arise if $F$ is a negative number. If the energy of the electron is lower at the impurity atom (at $n=0$) than it is anywhere else, then the electron can get caught on this atom. That is, if $(E_0+F)$ is below the bottom of the band at $(E_0-2A)$, then the electron can get “trapped” in a state with $E<E_0-2A$. Such a solution cannot come out of what we have done so far. We can get this solution, however, if we permit the trial solution we took in Eq. (13.10) to have an imaginary number for $k$. Let’s set $k=\pm i\kappa$. Again, we can have different solutions for $n<0$ and for $n>0$. A possible solution for $n<0$ might be \begin{equation} \label{Eq:III:13:39} a_n\:(\text{for $n<0$})=ce^{+\kappa x_n}. \end{equation} We have to take a plus sign in the exponent; otherwise the amplitude would get indefinitely large for large negative values of $n$. Similarly, a possible solution for $n>0$ would be \begin{equation} \label{Eq:III:13:40} a_n\:(\text{for $n>0$})=c'e^{-\kappa x_n}. \end{equation} If we put these trial solutions into Eq. (13.28) all but the middle three are satisfied provided that \begin{equation} \label{Eq:III:13:41} E=E_0-A(e^{\kappa b}+e^{-\kappa b}). \end{equation} Since the sum of the two exponential terms is always greater than $2$, this energy is below the regular band, and is what we are looking for. The remaining three equations in Eq. (13.28) are satisfied if $a_0=c=c'$ and if $\kappa$ is chosen so that \begin{equation} \label{Eq:III:13:42} A(e^{\kappa b}-e^{-\kappa b})=-F. \end{equation} Combining this equation with Eq. (13.41) we can find the energy of the trapped electron; we get \begin{equation} \label{Eq:III:13:43} E=E_0-\sqrt{4A^2+F^2}. \end{equation} The trapped electron has a unique energy—located somewhat below the conduction band. Notice that the amplitudes we have in Eq. 
(13.39) and (13.40) do not say that the trapped electron sits right on the impurity atom. The probability of finding the electron at nearby atoms is given by the square of these amplitudes. For one particular choice of the parameters it might vary as shown in the bar graph of Fig. 13–7. The probability is greatest for finding the electron on the impurity atom. For nearby atoms the probability drops off exponentially with the distance from the impurity atom. This is another example of “barrier penetration.” From the point-of-view of classical physics the electron doesn’t have enough energy to get away from the energy “hole” at the trapping center. But quantum mechanically it can leak out a little way. |
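The trapped-state formulas are easy to verify numerically. The Python sketch below (parameter values are arbitrary illustrations) solves Eq. (13.42) for $\kappa$, evaluates the energy of Eq. (13.41), compares it with the closed form of Eq. (13.43), and prints the exponentially falling probabilities like those in Fig. 13–7:

```python
import math

E0, A, b = 0.0, 1.0, 1.0   # illustrative values, not from the text
F = -0.5                   # F must be negative for trapping

# Eq. (13.42): A(e^{kappa b} - e^{-kappa b}) = -F, i.e. 2A sinh(kappa b) = -F
kappa = math.asinh(-F / (2*A)) / b

E = E0 - A*(math.exp(kappa*b) + math.exp(-kappa*b))   # Eq. (13.41)
print(E, E0 - math.sqrt(4*A**2 + F**2))               # agrees with Eq. (13.43)

# Relative probability of finding the electron n atoms from the impurity,
# |a_n|^2 / |a_0|^2 = e^{-2 kappa b n}
for n in range(5):
    print(n, math.exp(-2*kappa*b*n))
```

Note that the computed energy lies below the band bottom $E_0-2A$, as a trapped state must.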
|
3 | 13 | Propagation in a Crystal Lattice | 8 | Scattering amplitudes and bound states | Finally, our example can be used to illustrate a point which is very useful these days in the physics of high-energy particles. It has to do with a relationship between scattering amplitudes and bound states. Suppose we have discovered—through experiment and theoretical analysis—the way that pions scatter from protons. Then a new particle is discovered and someone wonders whether maybe it is just a combination of a pion and a proton held together in some bound state (in an analogy to the way an electron is bound to a proton to make a hydrogen atom). By a bound state we mean a combination which has a lower energy than the two free-particles. There is a general theory which says that a bound state will exist at that energy at which the scattering amplitude becomes infinite if extrapolated algebraically (the mathematical term is “analytically continued”) to energy regions outside of the permitted band. The physical reason for this is as follows. A bound state is a situation in which there are only waves tied on to a point and there’s no wave coming in to get it started, it just exists there by itself. The relative proportion between the so-called “scattered” or created wave and the wave being “sent in” is infinite. We can test this idea in our example. Let’s write our expression Eq. (13.37) for the scattered amplitude directly in terms of the energy $E$ of the particle being scattered (instead of in terms of $k$). Since Equation (13.30) can be rewritten as \begin{equation*} 2A\sin kb=\sqrt{4A^2-(E-E_0)^2}, \end{equation*} the scattered amplitude is \begin{equation} \label{Eq:III:13:44} \beta=\frac{-F}{F-i\sqrt{4A^2-(E-E_0)^2}}. \end{equation} From our derivation, this equation should be used only for real states—those with energies in the energy band, $E=E_0\pm2A$. But suppose we forget that fact and extend the formula into the “unphysical” energy regions where $\abs{E-E_0}>2A$. 
For these unphysical regions we can write2 \begin{equation*} \sqrt{4A^2-(E-E_0)^2}=i\sqrt{(E-E_0)^2-4A^2}. \end{equation*} Then the “scattering amplitude,” whatever it may mean, is \begin{equation} \label{Eq:III:13:45} \beta=\frac{-F}{F+\sqrt{(E-E_0)^2-4A^2}}. \end{equation} Now we ask: Is there any energy $E$ for which $\beta$ becomes infinite (i.e., for which the expression for $\beta$ has a “pole”)? Yes, so long as $F$ is negative, the denominator of Eq. (13.45) will be zero when \begin{equation*} (E-E_0)^2-4A^2=F^2, \end{equation*} or when \begin{equation*} E=E_0\pm\sqrt{4A^2+F^2}. \end{equation*} The minus sign gives just the energy we found in Eq. (13.43) for the trapped energy. What about the plus sign? This gives an energy above the allowed energy band. And indeed there is another bound state there which we missed when we solved the equations of Eq. (13.28). We leave it as a puzzle for you to find the energy and amplitudes $a_n$ for this bound state. The relation between scattering and bound states provides one of the most useful clues in the current search for an understanding of the experimental observations about the new strange particles. |
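A quick numerical check, in Python with arbitrary illustrative parameters, that the denominator of Eq. (13.45) vanishes at both of the energies found above when $F$ is negative:

```python
import math

E0, A = 0.0, 1.0
F = -0.5   # illustrative; the poles appear only for negative F

def denominator(E):
    # Denominator of Eq. (13.45), continued outside the band |E - E0| > 2A
    return F + math.sqrt((E - E0)**2 - 4*A**2)

E_below = E0 - math.sqrt(4*A**2 + F**2)   # the trapped state of Eq. (13.43)
E_above = E0 + math.sqrt(4*A**2 + F**2)   # the second bound state, above the band

print(denominator(E_below))   # ~ 0: beta has a pole here
print(denominator(E_above))   # ~ 0: and here as well
```

Both poles occur because $\sqrt{(E-E_0)^2-4A^2}=\abs{F}$ at either energy, and $F+\abs{F}=0$ when $F$ is negative.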
|
3 | 14 | Semiconductors | 1 | Electrons and holes in semiconductors | One of the remarkable and dramatic developments in recent years has been the application of solid state science to technical developments in electrical devices such as transistors. The study of semiconductors led to the discovery of their useful properties and to a large number of practical applications. The field is changing so rapidly that what we tell you today may be incorrect next year. It will certainly be incomplete. And it is perfectly clear that with the continuing study of these materials many new and more wonderful things will be possible as time goes on. You will not need to understand this chapter for what comes later in this volume, but you may find it interesting to see that at least something of what you are learning has some relation to the practical world. There are large numbers of semiconductors known, but we’ll concentrate on those which now have the greatest technical application. They are also the ones that are best understood, and in understanding them we will obtain a degree of understanding of many of the others. The semiconductor substances in most common use today are silicon and germanium. These elements crystallize in the diamond lattice, a kind of cubic structure in which the atoms have tetrahedral bonding with their four nearest neighbors. They are insulators at very low temperatures—near absolute zero—although they do conduct electricity somewhat at room temperature. They are not metals; they are called semiconductors. If we somehow put an extra electron into a crystal of silicon or germanium which is at a low temperature, we will have just the situation we described in the last chapter. The electron will be able to wander around in the crystal jumping from one atomic site to the next. Actually, we have looked only at the behavior of electrons in a rectangular lattice, and the equations would be somewhat different for the real lattice of silicon or germanium. 
All of the essential points are, however, illustrated by the results for the rectangular lattice. As we saw in Chapter 13, these electrons can have energies only in a certain energy band—called the conduction band. Within this band the energy is related to the wave-number $\FLPk$ of the probability amplitude $C$ (see Eq. (13.24)) by \begin{equation} \label{Eq:III:14:1} E=E_0-2A_x\cos k_xa-2A_y\cos k_yb-2A_z\cos k_zc. \end{equation} The $A$’s are the amplitudes for jumping in the $x$-, $y$-, and $z$-directions, and $a$, $b$, and $c$ are the lattice spacings in these directions. For energies near the bottom of the band, we can approximate Eq. (14.1) by \begin{equation} \label{Eq:III:14:2} E=E_{\text{min}}+A_xa^2k_x^2+A_yb^2k_y^2+A_zc^2k_z^2 \end{equation} (see Section 13–4). If we think of electron motion in some particular direction, so that the components of $\FLPk$ are always in the same ratio, the energy is a quadratic function of the wave number—and, as we have seen, of the momentum of the electron. We can write \begin{equation} \label{Eq:III:14:3} E=E_{\text{min}}+\alpha k^2, \end{equation} where $\alpha$ is some constant, and we can make a graph of $E$ versus $k$ as in Fig. 14–1. We’ll call such a graph an “energy diagram.” An electron in a particular state of energy and momentum can be indicated by a point such as $S$ in the figure. As we also mentioned in Chapter 13, we can have a similar situation if we remove an electron from a neutral insulator. Then, an electron can jump over from a nearby atom and fill the “hole,” leaving another “hole” at the atom it started from. We can describe this behavior by writing an amplitude to find the hole at any particular atom, and by saying that the hole can jump from one atom to the next. (Clearly, the amplitude $A$ that the hole jumps from atom $a$ to atom $b$ is just the same as the amplitude that an electron on atom $b$ jumps into the hole at atom $a$.) 
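A quick Python check (with arbitrary illustrative parameters) that the quadratic form of Eq. (14.2) matches the full band energy of Eq. (14.1) near the bottom of the band, and drifts away from it as $k$ grows:

```python
import math

E0, Ax, Ay, Az = 0.0, 1.0, 0.8, 0.5   # illustrative values, not from the text
a = b = c = 1.0
Emin = E0 - 2*(Ax + Ay + Az)

def E_exact(kx, ky, kz):   # Eq. (14.1)
    return E0 - 2*Ax*math.cos(kx*a) - 2*Ay*math.cos(ky*b) - 2*Az*math.cos(kz*c)

def E_quad(kx, ky, kz):    # Eq. (14.2), from cos(x) ~ 1 - x^2/2
    return Emin + Ax*a**2*kx**2 + Ay*b**2*ky**2 + Az*c**2*kz**2

for k in (0.01, 0.1, 0.5):
    print(k, E_exact(k, k, k), E_quad(k, k, k))   # agreement degrades as k grows
```

The error of the quadratic approximation is of order $k^4$, coming from the next term of the cosine expansion.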
The mathematics is just the same for the hole as it was for the extra electron, and we get again that the energy of the hole is related to its wave number by an equation just like Eq. (14.1) or (14.2), except, of course, with different numerical values for the amplitudes $A_x$, $A_y$, and $A_z$. The hole has an energy related to the wave number of its probability amplitudes. Its energy lies in a restricted band, and near the bottom of the band its energy varies quadratically with the wave number—or momentum—just as in Fig. 14–1. Following the arguments of Section 13–3, we would find that the hole also behaves like a classical particle with a certain effective mass—except that in noncubic crystals the mass depends on the direction of motion. So the hole behaves like a positive particle moving through the crystal. The charge of the hole-particle is positive, because it is located at the site of a missing electron; and when it moves in one direction there are actually electrons moving in the opposite direction. If we put several electrons into a neutral crystal, they will move around much like the atoms of a low-pressure gas. If there are not too many, their interactions will not be very important. If we then put an electric field across the crystal, the electrons will start to move and an electric current will flow. Eventually they would all be drawn to one edge of the crystal, and, if there is a metal electrode there, they would be collected, leaving the crystal neutral. Similarly we could put many holes into a crystal. They would roam around at random unless there is an electric field. With a field they would flow toward the negative terminal, and would be “collected”—what actually happens is that they are neutralized by electrons from the metal terminal. One can also have both holes and electrons together. If there are not too many, they will all go their way independently. With an electric field, they will all contribute to the current. 
For obvious reasons, electrons are called the negative carriers and the holes are called the positive carriers. We have so far considered that electrons are put into the crystal from the outside, or are removed to make a hole. It is also possible to “create” an electron-hole pair by taking a bound electron away from one neutral atom and putting it some distance away in the same crystal. We then have a free electron and a free hole, and the two can move about as we have described. The energy required to put an electron into a state $S$—we say to “create” the state $S$—is the energy $E^-$ shown in Fig. 14–2. It is some energy above $E^-_{\text{min}}$. The energy required to “create” a hole in some state $S'$ is the energy $E^+$ of Fig. 14–3, which is some energy greater than $E^+_{\text{min}}$. Now if we create a pair in the states $S$ and $S'$, the energy required is just $E^-+E^+$. The creation of pairs is a common process (as we will see later), so many people like to put Fig. 14–2 and Fig. 14–3 together on the same graph—with the hole energy plotted downward, although it is, of course a positive energy. We have combined our two graphs in this way in Fig. 14–4. The advantage of such a graph is that the energy $E_{\text{pair}}=E^-+E^+$ required to create a pair with the electron in $S$ and the hole in $S'$ is just the vertical distance between $S$ and $S'$ as shown in Fig. 14–4. The minimum energy required to create a pair is called the “gap” energy and is equal to $E^-_{\text{min}}+E^+_{\text{min}}$. Sometimes you will see a simpler diagram called an energy level diagram which is drawn when people are not interested in the $k$ variable. Such a diagram—shown in Fig. 14–5—just shows the possible energies for the electrons and holes.1 How can electron-hole pairs be created? There are several ways. For example, photons of light (or x-rays) can be absorbed and create a pair if the photon energy is above the energy of the gap. 
The rate at which pairs are produced is proportional to the light intensity. If two electrodes are plated on a wafer of the crystal and a “bias” voltage is applied, the electrons and holes will be drawn to the electrodes. The circuit current will be proportional to the intensity of the light. This mechanism is responsible for the phenomenon of photoconductivity and the operation of photoconductive cells. Electron-hole pairs can also be produced by high-energy particles. When a fast-moving charged particle—for instance, a proton or a pion with an energy of tens or hundreds of MeV—goes through a crystal, its electric field will knock electrons out of their bound states creating electron-hole pairs. Such events occur hundreds of thousands of times per millimeter of track. After the passage of the particle, the carriers can be collected and in doing so will give an electrical pulse. This is the mechanism at play in the semiconductor counters recently put to use for experiments in nuclear physics. Such counters do not require semiconductors; they can also be made with crystalline insulators. In fact, the first of such counters was made using a diamond crystal which is an insulator at room temperature. Very pure crystals are required if the holes and electrons are to be able to move freely to the electrodes without being trapped. The semiconductors silicon and germanium are used because they can be produced with high purity in reasonably large sizes (centimeter dimensions). 
The probability per unit time that an energy as large as the gap energy $E_{\text{gap}}$ will be concentrated at one atomic site is proportional to $e^{-E_{\text{gap}}/\kappa T}$, where $T$ is the temperature and $\kappa$ is Boltzmann’s constant (see Chapter 40, Vol. I). Near absolute zero there is no appreciable probability, but as the temperature rises there is an increasing probability of producing such pairs. At any finite temperature the production should continue forever at a constant rate giving more and more negative and positive carriers. Of course that does not happen because after a while the electrons and holes accidentally find each other—the electron drops into the hole and the excess energy is given to the lattice. We say that the electron and hole “annihilate.” There is a certain probability per second that a hole meets an electron and the two things annihilate each other. If the number of electrons per unit volume is $N_n$ ($n$ for negative carriers) and the density of positive carriers is $N_p$, the chance per unit time that an electron and a hole will find each other and annihilate is proportional to the product $N_nN_p$. In equilibrium this rate must equal the rate that pairs are created. You see that in equilibrium the product of $N_n$ and $N_p$ should be given by some constant times the Boltzmann factor: \begin{equation} \label{Eq:III:14:4} N_nN_p=\text{const}\,e^{-E_{\text{gap}}/\kappa T}. \end{equation} When we say constant, we mean nearly constant. A more complete theory—which includes more details about how holes and electrons “find” each other—shows that the “constant” is slightly dependent upon temperature, but the major dependence on temperature is in the exponential. Let’s consider, as an example, a pure material which is originally neutral. At a finite temperature you would expect the number of positive and negative carriers to be equal, $N_n=N_p$. Then each of them should vary with temperature as $e^{-E_{\text{gap}}/2\kappa T}$. 
The variation of many of the properties of a semiconductor—the conductivity for example—is mainly determined by the exponential factor because all the other factors vary much more slowly with temperature. The gap energy for germanium is about $0.72$ eV and for silicon $1.1$ eV. At room temperature $\kappa T$ is about $1/40$ of an electron volt. At these temperatures there are enough holes and electrons to give a significant conductivity, while at, say, $30^\circ$K—one-tenth of room temperature—the conductivity is imperceptible. The gap energy of diamond is $6$ or $7$ eV and diamond is a good insulator at room temperature. |
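The dominance of the exponential factor is easy to see numerically. The Python sketch below uses the gap energies quoted above; the value of Boltzmann's constant in eV/K is standard, and the rest is arithmetic:

```python
import math

kB = 8.617e-5   # Boltzmann's constant in eV/K

def carrier_factor(E_gap, T):
    # Intrinsic carrier densities vary as exp(-E_gap / 2 kT), from Eq. (14.4)
    return math.exp(-E_gap / (2*kB*T))

print(kB*300)   # ~ 1/40 eV at room temperature, as stated in the text
for name, gap in (("Ge", 0.72), ("Si", 1.1), ("diamond", 6.0)):
    print(name, carrier_factor(gap, 300), carrier_factor(gap, 30))
```

At $300^\circ$K the factors are small but significant; at $30^\circ$K they are vanishingly small, which is why the conductivity becomes imperceptible there, and why diamond, with its much larger gap, is an insulator even at room temperature.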
|
3 | 14 | Semiconductors | 2 | Impure semiconductors | So far we have talked about two ways that extra electrons can be put into an otherwise ideally perfect crystal lattice. One way was to inject the electron from an outside source; the other way, was to knock a bound electron off a neutral atom creating simultaneously an electron and a hole. It is possible to put electrons into the conduction band of a crystal in still another way. Suppose we imagine a crystal of germanium in which one of the germanium atoms is replaced by an arsenic atom. The germanium atoms have a valence of $4$ and the crystal structure is controlled by the four valence electrons. Arsenic, on the other hand, has a valence of $5$. It turns out that a single arsenic atom can sit in the germanium lattice (because it has approximately the correct size), but in doing so it must act as a valence $4$ atom—using four of its valence electrons to form the crystal bonds and having one electron left over. This extra electron is very loosely attached—the binding energy is only about $1/100$ of an electron volt. At room temperature the electron easily picks up that much energy from the thermal energy of the crystal, and then takes off on its own—moving about in the lattice as a free electron. An impurity atom such as the arsenic is called a donor site because it can give up a negative carrier to the crystal. If a crystal of germanium is grown from a melt to which a very small amount of arsenic has been added, the arsenic donor sites will be distributed throughout the crystal and the crystal will have a certain density of negative carriers built in. You might think that these carriers would get swept away as soon as any small electric field was put across the crystal. This will not happen, however, because the arsenic atoms in the body of the crystal each have a positive charge. 
If the body of the crystal is to remain neutral, the average density of negative carrier electrons must be equal to the density of donor sites. If you put two electrodes on the edges of such a crystal and connect them to a battery, a current will flow; but as the carrier electrons are swept out at one end, new conduction electrons must be introduced from the electrode on the other end so that the average density of conduction electrons is left very nearly equal to the density of donor sites. Since the donor sites are positively charged, there will be some tendency for them to capture some of the conduction electrons as they diffuse around inside the crystal. A donor site can, therefore, act as a trap such as those we discussed in the last section. But if the trapping energy is sufficiently small—as it is for arsenic—the number of carriers which are trapped at any one time is a small fraction of the total. For a complete understanding of the behavior of semiconductors one must take into account this trapping. For the rest of our discussion, however, we will assume that the trapping energy is sufficiently low and the temperature is sufficiently high, that all of the donor sites have given up their electrons. This is, of course, just an approximation. It is also possible to build into a germanium crystal some impurity atom whose valence is $3$, such as aluminum. The aluminum atom tries to act as a valence $4$ object by stealing an extra electron. It can steal an electron from some nearby germanium atom and end up as a negatively charged atom with an effective valence of $4$. Of course, when it steals the electron from a germanium atom, it leaves a hole there; and this hole can wander around in the crystal as a positive carrier. An impurity atom which can produce a hole in this way is called an acceptor because it “accepts” an electron. 
If a germanium or a silicon crystal is grown from a melt to which a small amount of aluminum impurity has been added, the crystal will have built-in a certain density of holes which can act as positive carriers. When a donor or an acceptor impurity is added to a semiconductor, we say that the material has been “doped.” When a germanium crystal with some built-in donor impurities is at room temperature, some conduction electrons are contributed by the thermally induced electron-hole pair creation as well as by the donor sites. The electrons from both sources are, naturally, equivalent, and it is the total number $N_n$ which comes into play in the statistical processes that lead to equilibrium. If the temperature is not too low, the number of negative carriers contributed by the donor impurity atoms is roughly equal to the number of impurity atoms present. In equilibrium Eq. (14.4) must still be valid; at a given temperature the product $N_nN_p$ is determined. This means that if we add some donor impurity which increases $N_n$, the number $N_p$ of positive carriers will have to decrease by such an amount that $N_nN_p$ is unchanged. If the impurity concentration is high enough, the number $N_n$ of negative carriers is determined by the number of donor sites and is nearly independent of temperature—all of the variation in the exponential factor is supplied by $N_p$, even though it is much less than $N_n$. An otherwise pure crystal with a small concentration of donor impurity will have a majority of negative carriers; such a material is called an “$n$-type” semiconductor. If an acceptor-type impurity is added to the crystal lattice, some of the new holes will drift around and annihilate some of the free electrons produced by thermal fluctuation. This process will go on until Eq. (14.4) is satisfied. Under equilibrium conditions the number of positive carriers will be increased and the number of negative carriers will be decreased, leaving the product a constant. 
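The way donor doping suppresses the hole density while keeping $N_nN_p$ fixed can be sketched with Eq. (14.4) together with charge neutrality. In the Python sketch below, the intrinsic density $n_i$ and donor density $N_d$ are illustrative numbers of roughly the right order for doped germanium at room temperature, not values from the text:

```python
import math

n_i = 2.4e13   # intrinsic carrier density per cm^3 (rough order for Ge at 300 K)
N_d = 1.0e16   # donor (arsenic) density per cm^3; illustrative

# Charge neutrality N_n = N_p + N_d, together with N_n * N_p = n_i^2
N_n = (N_d + math.sqrt(N_d**2 + 4*n_i**2)) / 2
N_p = n_i**2 / N_n

print(N_n)               # ~ N_d: nearly all the electrons come from the donors
print(N_p)               # pushed far below n_i, so the product stays fixed
print(N_n*N_p, n_i**2)   # equal, as Eq. (14.4) requires
```

With $N_d\gg n_i$ the negative carrier density is essentially the donor density, and all of the temperature variation of the exponential factor is carried by $N_p$.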
A material with an excess of positive carriers is called a “$p$-type” semiconductor. If we put two electrodes on a piece of semiconductor crystal and connect them to a source of potential difference, there will be an electric field inside the crystal. The electric field will cause the positive and the negative carriers to move, and an electric current will flow. Let’s consider first what will happen in an $n$-type material in which there is a large majority of negative carriers. For such material we can disregard the holes; they will contribute very little to the current because there are so few of them. In an ideal crystal the carriers would move across without any impediment. In a real crystal at a finite temperature, however—especially in a crystal with some impurities—the electrons do not move completely freely. They are continually making collisions which knock them out of their original trajectories, that is, changing their momentum. These collisions are just exactly the scatterings we talked about in the last chapter and occur at any irregularity in the crystal lattice. In an $n$-type material the main causes of scattering are the very donor sites that are producing the carriers. Since the conduction electrons have a very slightly different energy at the donor sites, the probability waves are scattered from that point. Even in a perfectly pure crystal, however, there are (at any finite temperature) irregularities in the lattice due to thermal vibrations. From the classical point of view we can say that the atoms aren’t lined up exactly on a regular lattice, but are, at any instant, slightly out of place due to their thermal vibrations. The energy $E_0$ associated with each lattice point in the theory we described in Chapter 13 varies a little bit from place to place so that the waves of probability amplitude are not transmitted perfectly but are scattered in an irregular fashion.
At very high temperatures or for very pure materials this scattering may become important, but in most doped materials used in practical devices the impurity atoms contribute most of the scattering. We would like now to make an estimate of the electrical conductivity of such a material. When an electric field is applied to an $n$-type semiconductor, each negative carrier will be accelerated in this field, picking up velocity until it is scattered from one of the donor sites. This means that the carriers which are ordinarily moving about in a random fashion with their thermal energies will pick up an average drift velocity along the lines of the electric field and give rise to a current through the crystal. The drift velocity is in general rather small compared with the typical thermal velocities so that we can estimate the current by assuming that the average time that the carrier travels between scatterings is a constant. Let’s say that the negative carrier has an effective electric charge $q_n$. In an electric field $\Efieldvec$, the force on the carrier will be $q_n\Efieldvec$. In Section 43–3 of Volume I we calculated the average drift velocity under such circumstances and found that it is given by $F\tau/m$, where $F$ is the force on the charge, $\tau$ is the mean free time between collisions, and $m$ is the mass. We should use the effective mass we calculated in the last chapter but since we want to make a rough calculation we will suppose that this effective mass is the same in all directions. Here we will call it $m_n$. With this approximation the average drift velocity will be \begin{equation} \label{Eq:III:14:5} \FLPv_{\text{drift}}=\frac{q_n\Efieldvec\tau_n}{m_n}. \end{equation} Knowing the drift velocity we can find the current. Electric current density $\FLPj$ is just the number of carriers per unit volume, $N_n$, multiplied by the average drift velocity, and by the charge on each carrier. 
The current density is therefore \begin{equation} \label{Eq:III:14:6} \FLPj=N_n\FLPv_{\text{drift}}q_n= \frac{N_nq_n^2\tau_n}{m_n}\,\Efieldvec. \end{equation} We see that the current density is proportional to the electric field; such a semiconductor material obeys Ohm’s law. The coefficient of proportionality between $\FLPj$ and $\Efieldvec$, the conductivity $\sigma$, is \begin{equation} \label{Eq:III:14:7} \sigma=\frac{N_nq_n^2\tau_n}{m_n}. \end{equation} For an $n$-type material the conductivity is relatively independent of temperature. First, the number of majority carriers $N_n$ is determined primarily by the density of donors in the crystal (so long as the temperature is not so low that too many of the carriers are trapped). Second, the mean time between collisions $\tau_n$ is mainly controlled by the density of impurity atoms, which is, of course, independent of the temperature. We can apply all the same arguments to a $p$-type material, changing only the values of the parameters which appear in Eq. (14.7). If there are comparable numbers of both negative and positive carriers present at the same time, we must add the contributions from each kind of carrier. The total conductivity will be given by \begin{equation} \label{Eq:III:14:8} \sigma=\frac{N_nq_n^2\tau_n}{m_n}+\frac{N_pq_p^2\tau_p}{m_p}. \end{equation} For very pure materials, $N_p$ and $N_n$ will be nearly equal. They will be smaller than in a doped material, so the conductivity will be less. Also they will vary rapidly with temperature (like $e^{-E_{\text{gap}}/2\kappa T}$, as we have seen), so the conductivity may change extremely fast with temperature.
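Equation (14.7) is easy to evaluate for order of magnitude. The numbers below are assumptions chosen only for illustration (a round carrier density, an assumed mean free time, and the free-electron mass standing in for the effective mass), not data from the text:

```python
# Rough numerical sketch of Eq. (14.7): sigma = N q^2 tau / m.
q = 1.602e-19   # magnitude of the carrier charge, C
m = 9.109e-31   # crude assumption: effective mass taken equal to the free-electron mass, kg
N = 1.0e22      # assumed carrier density, per m^3 (i.e. 1e16 per cm^3)
tau = 1.0e-13   # assumed mean free time between collisions, s

sigma = N * q**2 * tau / m   # conductivity, in 1/(ohm*m)
mobility = q * tau / m       # drift velocity per unit field (Eq. 14.5), m^2/(V*s)
print(sigma, mobility)
```

With these assumed values the conductivity comes out a few tens of reciprocal ohm-meters, far below a metal but far above an insulator, which is the sense in which such a material is a "semi"-conductor.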
14–3 The Hall effect

It is certainly a peculiar thing that in a substance where the only relatively free objects are electrons, there should be an electrical current carried by holes that behave like positive particles. We would like, therefore, to describe an experiment that shows in a rather clear way that the sign of the carrier of electric current is quite definitely positive. Suppose we have a block made of semiconductor material—it could also be a metal—and we put an electric field on it so as to draw a current in some direction, say the horizontal direction as drawn in Fig. 14–6. Now suppose we put a magnetic field on the block pointing at a right angle to the current, say into the plane of the figure. The moving carriers will feel a magnetic force $q(\FLPv\times\FLPB)$. And since the average drift velocity is either right or left—depending on the sign of the charge on the carrier—the average magnetic force on the carriers will be either up or down. No, that is not right! For the directions we have assumed for the current and the magnetic field the magnetic force on the moving charges will always be up. Positive charges moving in the direction of $\FLPj$ (to the right) will feel an upward force. If the current is carried by negative charges, they will be moving left (for the same sign of the conduction current) and they will also feel an upward force. Under steady conditions, however, there is no upward motion of the carriers because the current can flow only from left to right. What happens is that a few of the charges initially flow upward, producing a surface charge density along the upper surface of the semiconductor—leaving an equal and opposite surface charge density along the bottom surface of the crystal. The charges pile up on the top and bottom surfaces until the electric forces they produce on the moving charges just exactly cancel the magnetic force (on the average) so that the steady current flows horizontally.
The charges on the top and bottom surfaces will produce a potential difference vertically across the crystal which can be measured with a high-resistance voltmeter, as shown in Fig. 14–7. The sign of the potential difference registered by the voltmeter will depend on the sign of the carrier charges responsible for the current. When such experiments were first done it was expected that the sign of the potential difference would be negative as one would expect for negative conduction electrons. People were, therefore, quite surprised to find that for some materials the sign of the potential difference was in the opposite direction. It appeared that the current carrier was a particle with a positive charge. From our discussion of doped semiconductors it is understandable that an $n$-type semiconductor should produce the sign of potential difference appropriate to negative carriers, and that a $p$-type semiconductor should give an opposite potential difference, since the current is carried by the positively charged holes. The original discovery of the anomalous sign of the potential difference in the Hall effect was made in a metal rather than a semiconductor. It had been assumed that in metals the conduction was always by electrons; however, it was found that for beryllium the potential difference had the wrong sign. It is now understood that in metals as well as in semiconductors it is possible, in certain circumstances, that the “objects” responsible for the conduction are holes. Although it is ultimately the electrons in the crystal which do the moving, nevertheless, the relationship of the momentum and the energy, and the response to external fields is exactly what one would expect for an electric current carried by positive particles. Let’s see if we can make a quantitative estimate of the magnitude of the voltage difference expected from the Hall effect. If the voltmeter in Fig.
14–7 draws a negligible current, then the charges inside the semiconductor must be moving from left to right and the vertical magnetic force must be precisely cancelled by a vertical electric field which we will call $\Efieldvec_{\text{tr}}$ (the “tr” is for “transverse”). If this electric field is to cancel the magnetic forces, we must have \begin{equation} \label{Eq:III:14:9} \Efieldvec_{\text{tr}}=-\FLPv_{\text{drift}}\times\FLPB. \end{equation} Using the relation between the drift velocity and the electric current density given in Eq. (14.6), we get \begin{equation*} \Efield_{\text{tr}}=-\frac{1}{qN}\,jB. \end{equation*} The potential difference between the top and the bottom of the crystal is, of course, this electric field strength multiplied by the height of the crystal. The electric field strength $\Efield_{\text{tr}}$ in the crystal is proportional to the current density and to the magnetic field strength. The constant of proportionality $1/qN$ is called the Hall coefficient and is usually represented by the symbol $R_{\text{H}}$. The Hall coefficient depends just on the density of carriers—provided that carriers of one sign are in a large majority. Measurement of the Hall effect is, therefore, one convenient way of determining experimentally the density of carriers in a semiconductor.
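As a numerical sketch of the size of the effect, suppose we compute the transverse voltage for an assumed carrier density, current density, magnetic field, and crystal height (all illustrative values of my own, not from the text):

```python
# Hall-effect estimate: |E_tr| = (1/qN) j B, and the measured voltage is
# the field times the height of the crystal.
q = 1.602e-19   # carrier charge, C
N = 1.0e21      # assumed carrier density, per m^3
j = 1.0e4       # assumed current density, A/m^2
B = 0.5         # assumed magnetic field, T
h = 1.0e-3      # assumed height of the crystal, m

R_H = 1.0 / (q * N)   # Hall coefficient, m^3/C
E_tr = R_H * j * B    # magnitude of the transverse field, V/m
V_hall = E_tr * h     # transverse potential difference, V
print(V_hall)
```

With these numbers the voltage is tens of millivolts, easily measurable; running the same current through a metal, with its enormously larger carrier density $N$, would give a far smaller voltage, which is why the effect is such a sensitive probe of semiconductors.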
14–4 Semiconductor junctions

We would like to discuss now what happens if we take two pieces of germanium or silicon with different internal characteristics—say different kinds or amounts of doping—and put them together to make a “junction.” Let’s start out with what is called a $p$-$n$ junction in which we have $p$-type germanium on one side of the boundary and $n$-type germanium on the other side of the boundary—as sketched in Fig. 14–8. Actually, it is not practical to put together two separate pieces of crystal and have them in uniform contact on an atomic scale. Instead, junctions are made out of a single crystal which has been modified in the two separate regions. One way is to add some suitable doping impurity to the “melt” after only half of the crystal has grown. Another way is to paint a little of the impurity element on the surface and then heat the crystal causing some impurity atoms to diffuse into the body of the crystal. Junctions made in these ways do not have a sharp boundary, although the boundaries can be made as thin as $10^{-4}$ centimeters or so. For our discussions we will imagine an ideal situation in which these two regions of the crystal with different properties meet at a sharp boundary. On the $n$-type side of the $p$-$n$ junction there are free electrons which can move about, as well as the fixed donor sites which balance the overall electric charge. On the $p$-type side there are free holes moving about and an equal number of negative acceptor sites keeping the charge balanced. Actually, that describes the situation before we put the two materials in contact. Once they are connected together the situation will change near the boundary. When the electrons in the $n$-type material arrive at the boundary they will not be reflected back as they would at a free surface, but are able to go right on into the $p$-type material.
Some of the electrons of the $n$-type material will, therefore, tend to diffuse over into the $p$-type material where there are fewer electrons. This cannot go on forever because as we lose electrons from the $n$-side the net positive charge there increases until finally an electric voltage is built up which retards the diffusion of electrons into the $p$-side. In a similar way, the positive carriers of the $p$-type material can diffuse across the junction into the $n$-type material. When they do this they leave behind an excess of negative charge. Under equilibrium conditions the net diffusion current must be zero. This is brought about by the electric fields, which are established in such a way as to draw the positive carriers back toward the $p$-type material. The two diffusion processes we have been describing go on simultaneously and, you will notice, both act in the direction which will charge up the $n$-type material in a positive sense and the $p$-type material in a negative sense. Because of the finite conductivity of the semiconductor material, the change in potential from the $p$-side to the $n$-side will occur in a relatively narrow region near the boundary; the main body of each block of material will have a uniform potential. Let’s imagine an $x$-axis in a direction perpendicular to the boundary surface. Then the electric potential will vary with $x$, as shown in Fig. 14–9(b). We have also shown in part (c) of the figure the expected variation of the density $N_n$ of $n$-carriers and the density $N_p$ of $p$-carriers. Far away from the junction the carrier densities $N_p$ and $N_n$ should be just the equilibrium density we would expect for individual blocks of materials at the same temperature. (We have drawn the figure for a junction in which the $p$-type material is more heavily doped than the $n$-type material.) Because of the potential gradient at the junction, the positive carriers have to climb up a potential hill to get to the $n$-type side. 
This means that under equilibrium conditions there can be fewer positive carriers in the $n$-type material than there are in the $p$-type material. Remembering the laws of statistical mechanics, we expect the ratio of $p$-type carriers on the two sides to be given by the following equation: \begin{equation} \label{Eq:III:14:10} \frac{N_p(\text{$n$-side})}{N_p(\text{$p$-side})}= e^{-q_pV/\kappa T}. \end{equation} The product $q_pV$ in the numerator of the exponential is just the energy required to carry a charge of $q_p$ through a potential difference $V$. We have a precisely similar equation for the densities of the $n$-type carriers: \begin{equation} \label{Eq:III:14:11} \frac{N_n(\text{$n$-side})}{N_n(\text{$p$-side})}= e^{-q_nV/\kappa T}. \end{equation} If we know the equilibrium densities in each of the two materials, we can use either of the two equations above to determine the potential difference across the junction. Notice that if Eqs. (14.10) and (14.11) are to give the same value for the potential difference $V$, the product $N_pN_n$ must be the same for the $p$-side as for the $n$-side. (Remember that $q_n=-q_p$.) We have seen earlier, however, that this product depends only on the temperature and the gap energy of the crystal. Provided both sides of the crystal are at the same temperature, the two equations are consistent with the same value of the potential difference. Since there is a potential difference from one side of the junction to the other, it looks something like a battery. Perhaps if we connect a wire from the $n$-type side to the $p$-type side we will get an electrical current. That would be nice because then the current would flow forever without using up any material and we would have an infinite source of energy in violation of the second law of thermodynamics! There is, however, no current if you connect a wire from the $p$-side to the $n$-side. And the reason is easy to see.
Suppose we imagine first a wire made out of a piece of undoped material. When we connect this wire to the $n$-type side, we have a junction. There will be a potential difference across this junction. Let’s say that it is just one-half the potential difference from the $p$-type material to the $n$-type material. When we connect our undoped wire to the $p$-type side of the junction, there is also a potential difference at this junction—again, one-half the potential drop across the $p$-$n$ junction. At all the junctions the potential differences adjust themselves so that there is no net current flow in the circuit. Whatever kind of wire you use to connect together the two sides of the $p$-$n$ junction, you are producing two new junctions, and so long as all the junctions are at the same temperature, the potential jumps at the junctions all compensate each other and no current will flow in the circuit. It does turn out, however—if you work out the details—that if some of the junctions are at a different temperature than the other junctions, currents will flow. Some of the junctions will be heated and others will be cooled by this current and thermal energy will be converted into electrical energy. This effect is responsible for the operation of thermocouples which are used for measuring temperatures, and of thermoelectric generators. The same effect is also used to make small refrigerators. If we cannot measure the potential difference between the two sides of a $p$-$n$ junction, how can we really be sure that the potential gradient shown in Fig. 14–9 really exists? One way is to shine light on the junction. When the light photons are absorbed they can produce an electron-hole pair. In the strong electric field that exists at the junction (equal to the slope of the potential curve of Fig. 14–9) the hole will be driven into the $p$-type region and the electron will be driven into the $n$-type region.
If the two sides of the junction are now connected to an external circuit, these extra charges will provide a current. The energy of the light will be converted into electrical energy in the junction. The solar cells which generate electrical power for the operation of some of our satellites operate on this principle. In our discussion of the operation of a semiconductor junction we have been assuming that the holes and the electrons act more-or-less independently—except that they somehow get into proper statistical equilibrium. When we were describing the current produced by light shining on the junction, we were assuming that an electron or a hole produced in the junction region would get into the main body of the crystal before being annihilated by a carrier of the opposite polarity. In the immediate vicinity of the junction, where the density of carriers of both signs is approximately equal, the effect of electron-hole annihilation (or as it is often called, “recombination”) is an important effect, and in a detailed analysis of a semiconductor junction must be properly taken into account. We have been assuming that a hole or an electron produced in a junction region has a good chance of getting into the main body of the crystal before recombining. The typical time for an electron or a hole to find an opposite partner and annihilate it is, for typical semiconductor materials, in the range between $10^{-3}$ and $10^{-7}$ seconds. This time is, incidentally, much longer than the mean free time $\tau$ between collisions with scattering sites in the crystal which we used in the analysis of conductivity. In a typical $p$-$n$ junction, the time for an electron or hole formed in the junction region to be swept away into the body of the crystal is generally much shorter than the recombination time. Most of the pairs will, therefore, contribute to an external current.
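Going back to Eq. (14.10), the built-in potential difference $V$ across the junction can be estimated by inverting the Boltzmann factor. The hole densities on the two sides used below are assumed illustrative values, not numbers from the text:

```python
import math

# Invert Eq. (14.10): N_p(n-side)/N_p(p-side) = exp(-q V / kT), so
# V = (kT/q) ln[N_p(p-side) / N_p(n-side)].
kT = 0.0259          # kappa*T at room temperature, expressed in eV
Np_p_side = 1.0e17   # assumed hole density in the p-type material, per cm^3
Np_n_side = 1.0e5    # assumed hole density on the n-type side, per cm^3

# With kT in eV and the charge measured in units of e, V comes out in volts.
V = kT * math.log(Np_p_side / Np_n_side)
print(V)
```

A twelve-order-of-magnitude ratio of hole densities costs only about $0.7$ volt of potential hill, because each factor of $e\approx 2.7$ in density is worth only one unit of $\kappa T/q$, about $26$ millivolts at room temperature.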
14–5 Rectification at a semiconductor junction

We would like to show next how it is that a $p$-$n$ junction can act like a rectifier. If we put a voltage across the junction, a large current will flow if the polarity is in one direction, but a very small current will flow if the same voltage is applied in the opposite direction. If an alternating voltage is applied across the junction, a net current will flow in one direction—the current is “rectified.” Let’s look again at what is going on in the equilibrium condition described by the graphs of Fig. 14–9. In the $p$-type material there is a large concentration $N_p$ of positive carriers. These carriers are diffusing around and a certain number of them each second approach the junction. This current of positive carriers which approaches the junction is proportional to $N_p$. Most of them, however, are turned back by the high potential hill at the junction and only the fraction $e^{-qV/\kappa T}$ gets through. There is also a current of positive carriers approaching the junction from the other side. This current is also proportional to the density of positive carriers in the $n$-type region, but the carrier density here is much smaller than the density on the $p$-type side. When the positive carriers approach the junction from the $n$-type side, they find a hill with a negative slope and immediately slide downhill to the $p$-type side of the junction. Let’s call this current $I_0$. Under equilibrium the currents from the two directions are equal. We expect then the following relation: \begin{equation} \label{Eq:III:14:12} I_0\propto N_p(\text{$n$-side})= N_p(\text{$p$-side})e^{-qV/\kappa T}. \end{equation} You will notice that this equation is really just the same as Eq. (14.10). We have just derived it in a different way.
Suppose, however, that we lower the voltage on the $n$-side of the junction by an amount $\Delta V$—which we can do by applying an external potential difference to the junction. Now the difference in potential across the potential hill is no longer $V$ but $V-\Delta V$. The current of positive carriers from the $p$-side to the $n$-side will now have this potential difference in its exponential factor. Calling this current $I_1$, we have \begin{equation*} I_1\propto N_p(\text{$p$-side})e^{-q(V-\Delta V)/\kappa T}. \end{equation*} This current is larger than $I_0$ by just the factor $e^{q\Delta V/\kappa T}$. So we have the following relation between $I_1$ and $I_0$: \begin{equation} \label{Eq:III:14:13} I_1=I_0e^{+q\Delta V/\kappa T}. \end{equation} The current from the $p$-side increases exponentially with the externally applied voltage $\Delta V$. The current of positive carriers from the $n$-side, however, remains constant so long as $\Delta V$ is not too large. When they approach the barrier, these carriers will still find a downhill potential and will all fall down to the $p$-side. (If $\Delta V$ is larger than the natural potential difference $V$, the situation would change, but we will not consider what happens at such high voltages.) The net current $I$ of positive carriers which flows across the junction is then the difference between the currents from the two sides: \begin{equation} \label{Eq:III:14:14} I=I_0(e^{+q\Delta V/\kappa T}-1). \end{equation} The net current $I$ of holes flows into the $n$-type region. There the holes diffuse into the body of the $n$-region, where they are eventually annihilated by the majority $n$-type carriers—the electrons. The electrons which are lost in this annihilation will be made up by a current of electrons from the external terminal of the $n$-type material. When $\Delta V$ is zero, the net current in Eq. (14.14) is zero. For positive $\Delta V$ the current increases rapidly with the applied voltage. 
For negative $\Delta V$ the current reverses in sign, but the exponential term soon becomes negligible and the negative current never exceeds $I_0$—which under our assumptions is rather small. This back current $I_0$ is limited by the small density of the minority $p$-type carriers on the $n$-side of the junction. If you go through exactly the same analysis for the current of negative carriers which flows across the junction, first with no potential difference and then with a small externally applied potential difference $\Delta V$, you get again an equation just like (14.14) for the net electron current. Since the total current is the sum of the currents contributed by the two carriers, Eq. (14.14) still applies for the total current provided we identify $I_0$ as the maximum current which can flow for a reversed voltage. The voltage-current characteristic of Eq. (14.14) is shown in Fig. 14–10. It shows the typical behavior of solid state diodes—such as those used in modern computers. We should remark that Eq. (14.14) is true only for small voltages. For voltages comparable to or larger than the natural internal voltage difference $V$, other effects come into play and the current no longer obeys the simple equation. You may remember, incidentally, that we got exactly the same equation we have found here in Eq. (14.14) when we discussed the “mechanical rectifier”—the ratchet and pawl—in Chapter 46 of Volume I. We get the same equations in the two situations because the basic physical processes are quite similar.
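The rectifier characteristic of Eq. (14.14) can be sketched numerically. The saturation current $I_0$ below is an assumed value picked only to show the asymmetry:

```python
import math

# Eq. (14.14): I = I0 * (exp(q dV / kT) - 1), with dV the applied voltage.
kT = 0.0259   # kappa*T at room temperature, in eV (so dV in volts, q in units of e)
I0 = 1.0e-9   # assumed reverse saturation current, A

def junction_current(dV):
    return I0 * (math.exp(dV / kT) - 1.0)

I_forward = junction_current(+0.2)   # 0.2 V of forward bias
I_reverse = junction_current(-0.2)   # the same 0.2 V applied backwards
print(I_forward, I_reverse)
```

The same $0.2$ volt gives a forward current thousands of times larger than the reverse current, which merely saturates at $-I_0$; that is the rectification.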
14–6 The transistor

Perhaps the most important application of semiconductors is in the transistor. The transistor consists of two semiconductor junctions very close together. Its operation is based in part on the same principles that we just described for the semiconductor diode—the rectifying junction. Suppose we make a little bar of germanium with three distinct regions, a $p$-type region, an $n$-type region, and another $p$-type region, as shown in Fig. 14–11(a). This combination is called a $p$-$n$-$p$ transistor. Each of the two junctions in the transistor will behave much in the way we have described in the last section. In particular, there will be a potential gradient at each junction having a certain potential drop from the $n$-type region to each $p$-type region. If the two $p$-type regions have the same internal properties, the variation in potential as we go across the crystal will be as shown in the graph of Fig. 14–11(b). Now let’s imagine that we connect each of the three regions to external voltage sources as shown in part (a) of Fig. 14–12. We will refer all voltages to the terminal connected to the left-hand $p$-region so it will be, by definition, at zero potential. We will call this terminal the emitter. The $n$-type region is called the base and it is connected to a slightly negative potential. The right-hand $p$-type region is called the collector, and is connected to a somewhat larger negative potential. Under these circumstances the variation of potential across the crystal will be as shown in the graph of Fig. 14–12(b). Let’s first see what happens to the positive carriers, since it is primarily their behavior which controls the operation of the $p$-$n$-$p$ transistor. Since the emitter is at a relatively more positive potential than the base, a current of positive carriers will flow from the emitter region into the base region.
A relatively large current flows, since we have a junction operating with a “forward voltage”—corresponding to the right-hand half of the graph in Fig. 14–10. With these conditions, positive carriers or holes are being “emitted” from the $p$-type region into the $n$-type region. You might think that this current would flow out of the $n$-type region through the base terminal $b$. Now, however, comes the secret of the transistor. The $n$-type region is made very thin—typically $10^{-3}$ cm or less, much narrower than its transverse dimensions. This means that as the holes enter the $n$-type region they have a very good chance of diffusing across to the other junction before they are annihilated by the electrons in the $n$-type region. When they get to the right-hand boundary of the $n$-type region they find a steep downward potential hill and immediately fall into the right-hand $p$-type region. This side of the crystal is called the collector because it “collects” the holes after they have diffused across the $n$-type region. In a typical transistor, all but a fraction of a percent of the hole current which leaves the emitter and enters the base is collected in the collector region, and only the small remainder contributes to the net base current. The sum of the base and collector currents is, of course, equal to the emitter current. Now imagine what happens if we vary slightly the potential $V_b$ on the base terminal. Since we are on a relatively steep part of the curve of Fig. 14–10, a small variation of the potential $V_b$ will cause a rather large change in the emitter current $I_e$. Since the collector voltage $V_c$ is much more negative than the base voltage, these slight variations in potential will not affect appreciably the steep potential hill between the base and the collector. Most of the positive carriers emitted into the $n$-region will still be caught by the collector. 
Thus as we vary the potential of the base electrode, there will be a corresponding variation in the collector current $I_c$. The essential point, however, is that the base current $I_b$ always remains a small fraction of the collector current. The transistor is an amplifier; a small current $I_b$ introduced into the base electrode gives a large current—$100$ or so times higher—at the collector electrode. What about the electrons—the negative carriers that we have been neglecting so far? First, note that we do not expect any significant electron current to flow between the base and the collector. With a large negative voltage on the collector, the electrons in the base would have to climb a very high potential energy hill and the probability of doing that is very small. There is a very small current of electrons to the collector. On the other hand, the electrons in the base can go into the emitter region. In fact, you might expect the electron current in this direction to be comparable to the hole current from the emitter into the base. Such an electron current isn’t useful, and, on the contrary, is bad because it increases the total base current required for a given current of holes to the collector. The transistor is, therefore, designed to minimize the electron current to the emitter. The electron current is proportional to $N_n(\text{base})$, the density of negative carriers in the base material while the hole current from the emitter depends on $N_p(\text{emitter})$, the density of positive carriers in the emitter region. By using relatively little doping in the $n$-type material $N_n(\text{base})$ can be made much smaller than $N_p(\text{emitter})$. (The very thin base region also helps a great deal because the sweeping out of the holes in this region by the collector increases significantly the average hole current from the emitter into the base, while leaving the electron current unchanged.) 
The net result is that the electron current across the emitter-base junction can be made much less than the hole current, so that the electrons do not play any significant role in the operation of the $p$-$n$-$p$ transistor. The currents are dominated by motion of the holes, and the transistor performs as an amplifier as we have described above. It is also possible to make a transistor by interchanging the $p$-type and $n$-type materials in Fig. 14–11. Then we have what is called an $n$-$p$-$n$ transistor. In the $n$-$p$-$n$ transistor the main currents are carried by the electrons which flow from the emitter into the base and from there to the collector. Obviously, all the arguments we have made for the $p$-$n$-$p$ transistor also apply to the $n$-$p$-$n$ transistor if the potentials of the electrodes are chosen with the opposite signs.
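The current bookkeeping for the $p$-$n$-$p$ transistor described above takes only a few lines. The collection fraction used here is an assumed typical value (the text says only "all but a fraction of a percent"), and the emitter current is illustrative:

```python
# Transistor current bookkeeping: the emitter current splits between the
# collector (fraction alpha) and the base (the small remainder).
alpha = 0.995   # assumed fraction of emitter holes that reach the collector
I_e = 2.0e-3    # assumed emitter current, A

I_c = alpha * I_e   # collector current
I_b = I_e - I_c     # the small remainder flows out of the base terminal
beta = I_c / I_b    # current gain, equal to alpha / (1 - alpha)
print(beta)
```

Losing only half a percent of the holes to the base gives a current gain near $200$: a tiny wiggle in the base current is reproduced two hundred times larger at the collector, which is the "$100$ or so" amplification quoted above.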
|
3 | 15 | The Independent Particle Approximation | 1 | Spin waves | In Chapter 13 we worked out the theory for the propagation of an electron or of some other “particle,” such as an atomic excitation, through a crystal lattice. In the last chapter we applied the theory to semiconductors. But when we talked about situations in which there are many electrons we disregarded any interactions between them. To do this is of course only an approximation. In this chapter we will discuss further the idea that you can disregard the interaction between the electrons. We will also use the opportunity to show you some more applications of the theory of the propagation of particles. Since we will generally continue to disregard the interactions between particles, there is very little really new in this chapter except for the new applications. The first example to be considered is, however, one in which it is possible to write down quite exactly the correct equations when there is more than one “particle” present. From them we will be able to see how the approximation of disregarding the interactions is made. We will not, though, analyze the problem very carefully. As our first example we will consider a “spin wave” in a ferromagnetic crystal. We have discussed the theory of ferromagnetism in Chapter 36 of Volume II. At zero temperature all the electron spins that contribute to the magnetism in the body of a ferromagnetic crystal are parallel. There is an interaction energy between the spins, which is lowest when all the spins are down. At any nonzero temperature, however, there is some chance that some of the spins are turned over. We calculated the probability in an approximate manner in Chapter 36. This time we will describe the quantum mechanical theory—so you will see what you would have to do if you wanted to solve the problem more exactly. 
(We will still make some idealizations by assuming that the electrons are localized at the atoms and that the spins interact only with neighboring spins.) We consider a model in which the electrons at each atom are all paired except one, so that all of the magnetic effects come from one spin-$\tfrac{1}{2}$ electron per atom. Further, we imagine that these electrons are localized at the atomic sites in the lattice. The model corresponds roughly to metallic nickel. We also assume that there is an interaction between any two adjacent spinning electrons which gives a term in the energy of the system \begin{equation} \label{Eq:III:15:1} E=-\sum_{i,j}K\FLPsigma_i\cdot\FLPsigma_j, \end{equation} where $\FLPsigma$’s represent the spins and the summation is over all adjacent pairs of electrons. We have already discussed this kind of interaction energy when we considered the hyperfine splitting of hydrogen due to the interaction of the magnetic moments of the electron and proton in a hydrogen atom. We expressed it then as $A\FLPsigma_{\text{e}}\cdot\FLPsigma_{\text{p}}$. Now, for a given pair, say the electrons at atom $4$ and at atom $5$, the Hamiltonian would be $-K\FLPsigma_4\cdot\FLPsigma_5$. We have a term for each such pair, and the Hamiltonian is (as you would expect for classical energies) the sum of these terms for each interacting pair. The energy is written with the factor $-K$ so that a positive $K$ will correspond to ferromagnetism—that is, the lowest energy results when adjacent spins are parallel. In a real crystal, there may be other terms which are the interactions of next nearest neighbors, and so on, but we don’t need to consider such complications at this stage. With the Hamiltonian of Eq. (15.1) we have a complete description of the ferromagnet—within our approximation—and the properties of the magnetization should come out. We should also be able to calculate the thermodynamic properties due to the magnetization. 
If we can find all the energy levels, the properties of the crystal at a temperature $T$ can be found from the principle that the probability that a system will be found in a given state of energy $E$ is proportional to $e^{-E/\kappa T}$. This problem has never been completely solved. We will show some of the problems by taking a simple example in which all the atoms are in a line—a one-dimensional lattice. You can easily extend the ideas to three dimensions. At each atomic location there is an electron which has two possible states, either spin up or spin down, and the whole system is described by telling how all of the spins are arranged. We take the Hamiltonian of the system to be the operator of the interaction energy. Interpreting the spin vectors of Eq. (15.1) as the sigma-operators—or the sigma-matrices—we write for the linear lattice \begin{equation} \label{Eq:III:15:2} \Hop=\sum_n-\frac{A}{2}\,\FLPsigmaop_n\cdot\FLPsigmaop_{n+1}. \end{equation} In this equation we have written the constant as $A/2$ for convenience (so that some of the later equations will be exactly the same as the ones in Chapter 13). Now what is the lowest state of this system? The state of lowest energy is the one in which all the spins are parallel—let’s say, all up.1 We can write this state as $\ket{\dotsb+\,+\,+\,+\dotsb}$, or $\ket{\text{gnd}}$ for the “ground,” or lowest, state. It’s easy to figure out the energy for this state. One way is to write out all the vector sigmas in terms of $\sigmaop_x$, $\sigmaop_y$, and $\sigmaop_z$, and work through carefully what each term of the Hamiltonian does to the ground state, and then add the results. We can, however, also use a good short cut. 
We saw in Section 12–2, that $\FLPsigmaop_i\cdot\FLPsigmaop_j$ could be written in terms of the Pauli spin exchange operator like this: \begin{equation} \label{Eq:III:15:3} \FLPsigmaop_i\cdot\FLPsigmaop_j=(2\Pop^{\text{spin ex}}_{ij}-1), \end{equation} where the operator $\Pop^{\text{spin ex}}_{ij}$ interchanges the spins of the $i$th and $j$th electrons. With this substitution the Hamiltonian becomes \begin{equation} \label{Eq:III:15:4} \Hop=-A\sum_n(\Pop^{\text{spin ex}}_{n,n+1}-\tfrac{1}{2}). \end{equation} It is now easy to work out what happens to different states. For instance if $i$ and $j$ are both up, then exchanging the spins leaves everything unchanged, so $\Pop_{ij}$ acting on the state just gives the same state back, and is equivalent to multiplying by $+1$. The expression $(\Pop_{ij}-\tfrac{1}{2})$ is just equal to one-half. (From now on we will leave off the descriptive superscript on the $\Pop$.) For the ground state all spins are up; so if you exchange a particular pair of spins, you get back the original state. The ground state is a stationary state. If you operate on it with the Hamiltonian you get the same state again multiplied by a sum of terms, $-(A/2)$ for each pair of spins. That is, the energy of the system in the ground state is $-A/2$ per atom. Next we would like to look at the energies of some of the excited states. It will be convenient to measure the energies with respect to the ground state—that is, to choose the ground state as our zero of energy. We can do that by adding the energy $A/2$ to each term in the Hamiltonian. That just changes the “$\tfrac{1}{2}$” in Eq. (15.4) to “$1$.” Our new Hamiltonian is \begin{equation} \label{Eq:III:15:5} \Hop=-A\sum_n(\Pop_{n,n+1}-1). \end{equation} With this Hamiltonian the energy of the lowest state is zero; the spin exchange operator is equivalent to multiplying by unity (for the ground state) which is cancelled by the “$1$” in each term. 
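The short cut can be checked by brute force on a few spins. The sketch below (Python with NumPy; the open chain of four spins and $A=1$ are choices made only for the demonstration) builds $\FLPsigmaop_i\cdot\FLPsigmaop_j$ from the Pauli matrices and confirms that the all-up state has energy $-A/2$ per adjacent pair under Eq. (15.4), and zero under the shifted Hamiltonian of Eq. (15.5).

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

n, A = 4, 1.0            # a short open chain of four spins (demo choice)
dim = 2 ** n

def site_op(op, i):
    """Embed a one-spin operator at site i of the n-spin product space."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

def sigma_dot(i, j):
    """sigma_i . sigma_j, equal to 2 P_ij - 1 by Eq. (15.3)."""
    return sum(site_op(s, i) @ site_op(s, j) for s in (sx, sy, sz))

def pair(i):
    """The spin-exchange operator P_{i,i+1} = (sigma_i . sigma_{i+1} + 1)/2."""
    return (sigma_dot(i, i + 1) + np.eye(dim)) / 2

# Hamiltonians of Eq. (15.4) and Eq. (15.5)
H4 = sum(-A * (pair(i) - 0.5 * np.eye(dim)) for i in range(n - 1))
H5 = sum(-A * (pair(i) - 1.0 * np.eye(dim)) for i in range(n - 1))

up = np.zeros(dim); up[0] = 1.0   # the all-spins-up ground state
E4 = (up @ H4 @ up).real
E5 = (up @ H5 @ up).real
print(E4)          # -1.5: that is, -A/2 for each of the 3 adjacent pairs
print(E5 == 0.0)   # True: Eq. (15.5) puts the ground state at zero energy
```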
For describing states other than the ground state we will need a suitable set of base states. One convenient approach is to group the states according to whether one electron has spin down, or two electrons have spin down, and so on. There are, of course, many states with one spin down. The down spin could be at atom “4,” or at atom “5,” or at atom “6,” … We can, in fact, choose just such states for our base states. We could write them this way: $\ket{4}$, $\ket{5}$, $\ket{6}$, … It will, however, be more convenient later if we label the “odd atom”—the one with the down-spinning electron—by its coordinate $x$. That is, we’ll define the state $\ket{x_5}$ to be one with all the electrons spinning up except for the one on the atom at $x_5$, which has a down-spinning electron (see Fig. 15–1). In general, $\ket{x_n}$ is the state with one down spin that is located at the coordinate $x_n$ of the $n$th atom. What is the action of the Hamiltonian (15.5) on the state $\ket{x_5}$? One term of the Hamiltonian is say $-A(\Pop_{7,8}-1)$. The operator $\Pop_{7,8}$ exchanges the two spins of the adjacent atoms $7$, $8$. But in the state $\ket{x_5}$ these are both up, and nothing happens; $\Pop_{7,8}$ is equivalent to multiplying by $1$: \begin{equation*} \Pop_{7,8}\,\ket{x_5}=\ket{x_5}. \end{equation*} It follows that \begin{equation*} (\Pop_{7,8}-1)\,\ket{x_5}=0. \end{equation*} Thus all the terms of the Hamiltonian give zero—except those involving atom $5$, of course. On the state $\ket{x_5}$, the operation $\Pop_{4,5}$ exchanges the spin of atom $4$ (up) and atom $5$ (down). The result is the state with all spins up except the atom at $4$. That is \begin{equation*} \Pop_{4,5}\,\ket{x_5}=\ket{x_4}. \end{equation*} In the same way \begin{equation*} \Pop_{5,6}\,\ket{x_5}=\ket{x_6}. \end{equation*} Hence, the only terms of the Hamiltonian which survive are $-A(\Pop_{4,5}-1)$ and $-A(\Pop_{5,6}-1)$. 
Acting on $\ket{x_5}$ they produce $-A\,\ket{x_4}+A\,\ket{x_5}$ and $-A\,\ket{x_6}+A\,\ket{x_5}$, respectively. The result is \begin{equation} \label{Eq:III:15:6} \Hop\,\ket{x_5}=-A\sum_n(\Pop_{n,n+1}-1)\,\ket{x_5}= -A\{\ket{x_6}+\ket{x_4}-2\,\ket{x_5}\}. \end{equation}
When the Hamiltonian acts on state $\ket{x_5}$ it gives rise to some amplitude to be in states $\ket{x_4}$ and $\ket{x_6}$. That just means that there is a certain amplitude to have the down spin jump over to the next atom. So because of the interaction between the spins, if we begin with one spin down, then there is some probability that at a later time another one will be down instead. Operating on the general state $\ket{x_n}$, the Hamiltonian gives \begin{equation} \label{Eq:III:15:7} \Hop\,\ket{x_n}=-A\{\ket{x_{n+1}}+\ket{x_{n-1}}-2\,\ket{x_n}\}. \end{equation} Notice particularly that if we take a complete set of states with only one spin down, they will only be mixed among themselves. The Hamiltonian will never mix these states with others that have more spins down. So long as you only exchange spins you never change the total number of down spins. It will be convenient to use the matrix notation for the Hamiltonian, say $H_{n,m}\equiv\bracket{x_n}{\Hop}{x_m}$; Eq. (15.7) is equivalent to \begin{equation} \begin{aligned} H_{n,n}&=2A;\\[1ex] H_{n,n+1}&=H_{n,n-1}=-A;\\[1ex] H_{n,m}&=0,\text{ for }\abs{n-m}>1. \end{aligned} \label{Eq:III:15:8} \end{equation} Now what are the energy levels for states with one spin down? As usual we let $C_n$ be the amplitude that some state $\ket{\psi}$ is in the state $\ket{x_n}$. If $\ket{\psi}$ is to be a definite energy state, all the $C_n$’s must vary with time in the same way, namely, \begin{equation} \label{Eq:III:15:9} C_n=a_ne^{-iEt/\hbar}. \end{equation} We can put this trial solution into our usual Hamiltonian equation \begin{equation} \label{Eq:III:15:10} i\hbar\,\ddt{C_n}{t}=\sum_mH_{nm}C_m, \end{equation} using Eq. (15.8) for the matrix elements. Of course we get an infinite number of equations, but they can all be written as \begin{equation} \label{Eq:III:15:11} Ea_n=2Aa_n-Aa_{n-1}-Aa_{n+1}. 
\end{equation} We have again exactly the same problem we worked out in Chapter 13, except that where we had $E_0$ we now have $2A$. The solutions correspond to amplitudes $C_n$ (the down-spin amplitude) which propagate along the lattice with a propagation constant $k$ and an energy \begin{equation} \label{Eq:III:15:12} E=2A(1-\cos kb), \end{equation} where $b$ is the lattice constant. The definite energy solutions correspond to “waves” of down spin—called “spin waves.” And for each wavelength there is a corresponding energy. For large wavelengths (small $k$) this energy varies as \begin{equation} \label{Eq:III:15:13} E=Ab^2k^2. \end{equation} Just as before, we can consider a localized wave packet (containing, however, only long wavelengths) which corresponds to a spin-down electron in one part of the lattice. This down spin will behave like a “particle.” Because its energy is related to $k$ by (15.13) the “particle” will have an effective mass: \begin{equation} \label{Eq:III:15:14} m_{\text{eff}}=\frac{\hbar^2}{2Ab^2}. \end{equation} These “particles” are sometimes called “magnons.” |
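The dispersion relation (15.12) can also be confirmed by brute force: diagonalize the matrix of Eq. (15.8) and compare with $2A(1-\cos kb)$. A short sketch follows; the ring of $N=12$ atoms with periodic boundary conditions is an assumption of the demo, chosen so that $k$ takes the discrete values $2\pi s/Nb$.

```python
import numpy as np

N, A, b = 12, 1.0, 1.0   # ring size and constants chosen for illustration

# One-down-spin Hamiltonian of Eq. (15.8), with periodic boundaries
H = np.zeros((N, N))
for n in range(N):
    H[n, n] = 2 * A
    H[n, (n + 1) % N] = -A
    H[n, (n - 1) % N] = -A

levels = np.sort(np.linalg.eigvalsh(H))

# Spin-wave energies of Eq. (15.12) at the allowed k's on the ring
k = 2 * np.pi * np.arange(N) / (N * b)
spin_waves = np.sort(2 * A * (1 - np.cos(k * b)))

print(np.allclose(levels, spin_waves))  # True
```

The lowest level comes out at zero energy ($k=0$), as it must: turning all the amplitudes equal costs nothing relative to the ground state.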
|
3 | 15 | The Independent Particle Approximation | 2 | Two spin waves | Now we would like to discuss what happens if there are two down spins. Again we pick a set of base states. We’ll choose states in which there are down spins at two atomic locations, such as the state shown in Fig. 15–2. We can label such a state by the $x$-coordinates of the two sites with down spins. The one shown can be called $\ket{x_2,x_5}$. In general the base states are $\ket{x_n,x_m}$—a doubly infinite set! In this system of description, the state $\ket{x_4,x_9}$ and the state $\ket{x_9,x_4}$ are exactly the same state, because each simply says that there is a down spin at $4$ and one at $9$; there is no meaning to the order. Furthermore, the state $\ket{x_4,x_4}$ has no meaning, there isn’t such a thing. We can describe any state $\ket{\psi}$ by giving the amplitudes to be in each of the base states. Thus $C_{m,n}=\braket{x_m,x_n}{\psi}$ now means the amplitude for a system in the state $\ket{\psi}$ to be in a state in which both the $m$th and $n$th atoms have a down spin. The complications which now arise are not complications of ideas—they are merely complexities in bookkeeping. (One of the complexities of quantum mechanics is just the bookkeeping. With more and more down spins, the notation becomes more and more elaborate with lots of indices and the equations always look very horrifying; but the ideas are not necessarily more complicated than in the simplest case.) The equations of motion of the spin system are the differential equations for the $C_{n,m}$. They are \begin{equation} \label{Eq:III:15:15} i\hbar\,\ddt{C_{n,m}}{t}=\sum_{i,j}(H_{nm,ij})C_{ij}. \end{equation} Suppose we want to find the stationary states. As usual, the derivatives with respect to time become $E$ times the amplitudes and the $C_{m,n}$ can be replaced by the coefficients $a_{m,n}$. Next we have to work out carefully the effect of $H$ on a state with spins $m$ and $n$ down. It is not hard to figure out. 
Suppose for a moment that $m$ and $n$ are far enough apart so that we don’t have to worry about the obvious trouble. The operation of exchange at the location $x_n$ will move the down spin either to the $(n+1)$ or $(n-1)$ atom, and so there’s an amplitude that the present state has come from the state $\ket{x_m,x_{n+1}}$ and also an amplitude that it has come from the state $\ket{x_m,x_{n-1}}$. Or it may have been the other spin that moved; so there’s a certain amplitude that $C_{m,n}$ is fed from $C_{m+1,n}$ or from $C_{m-1,n}$. These effects should all be equal. The final result for the Hamiltonian equation on $C_{m,n}$ is \begin{equation} \label{Eq:III:15:16} Ea_{m,n}=-A(a_{m+1,n}+a_{m-1,n}+a_{m,n+1}+a_{m,n-1})+4Aa_{m,n}. \end{equation}
This equation is correct except in two situations. If $m=n$ there is no equation at all, and if $m=n\pm1$, then two of the terms in Eq. (15.16) should be missing. We are going to disregard these exceptions. We simply ignore the fact that some few of these equations are slightly altered. After all, the crystal is supposed to be infinite, and we have an infinite number of terms; neglecting a few might not matter much. So for a first rough approximation let’s forget about the altered equations. In other words, we assume that Eq. (15.16) is true for all $m$ and $n$, even for $m$ and $n$ next to each other. This is the essential part of our approximation. Then the solution is not hard to find. We get immediately \begin{equation} \label{Eq:III:15:17} C_{m,n}=a_{m,n}e^{-iEt/\hbar}, \end{equation} with \begin{equation} \label{Eq:III:15:18} a_{m,n}=(\text{const.})\,e^{ik_1x_m}e^{ik_2x_n}, \end{equation} where \begin{equation} \label{Eq:III:15:19} E=4A-2A\cos k_1b-2A\cos k_2b. \end{equation} Think for a moment what would happen if we had two independent, single spin waves (as in the previous section) corresponding to $k=k_1$ and $k=k_2$; they would have energies, from Eq. (15.12), of \begin{equation*} E_1 =(2A-2A\cos k_1b) \end{equation*} and \begin{equation*} E_2 =(2A-2A\cos k_2b). \end{equation*} Notice that the energy $E$ in Eq. (15.19) is just their sum, \begin{equation} \label{Eq:III:15:20} E=E_1+E_2. \end{equation} In other words we can think of our solution in this way. There are two particles—that is, two spin waves. One of them has a momentum described by $k_1$, the other by $k_2$, and the energy of the system is the sum of the energies of the two objects. The two particles act completely independently. That’s all there is to it. Of course we have made some approximations, but we do not wish to discuss the precision of our answer at this point. 
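One can check numerically that the product form (15.18) does satisfy the approximate equation (15.16) with the energy (15.19). A minimal sketch, with arbitrary illustrative values for $k_1$, $k_2$, and the site pair $(m,n)$:

```python
import numpy as np

A, b = 1.0, 1.0
k1, k2 = 0.7, 1.9        # arbitrary wave numbers (illustrative)

def a(m, n):
    """Product ansatz of Eq. (15.18), with the constant set to 1."""
    return np.exp(1j * k1 * m * b) * np.exp(1j * k2 * n * b)

m, n = 3, 8              # any pair with the two down spins not adjacent

# Right-hand side of the approximate equation (15.16)
rhs = (-A * (a(m + 1, n) + a(m - 1, n) + a(m, n + 1) + a(m, n - 1))
       + 4 * A * a(m, n))

# Energy of Eq. (15.19): the sum of two single spin-wave energies
E = 4 * A - 2 * A * np.cos(k1 * b) - 2 * A * np.cos(k2 * b)

print(np.isclose(rhs, E * a(m, n)))  # True
```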
However, you might guess that in a reasonable size crystal with billions of atoms—and, therefore, billions of terms in the Hamiltonian—leaving out a few terms wouldn’t make much of an error. If we had so many down spins that there was an appreciable density, then we would certainly have to worry about the corrections. [Interestingly enough, an exact solution can be written down if there are just the two down spins. The result is not particularly important. But it is interesting that the equations can be solved exactly for this case. The solution is: \begin{equation} \label{Eq:III:15:21} a_{m,n}=\exp[ik_c(x_m+x_n)]\sin(k\abs{x_m-x_n}), \end{equation} with the energy \begin{equation} E=4A-2A\cos k_1b-2A\cos k_2b,\notag \end{equation} and with the wave numbers $k_c$ and $k$ related to $k_1$ and $k_2$ by \begin{equation} \label{Eq:III:15:22} k_1=k_c-k,\quad k_2=k_c+k. \end{equation} This solution includes the “interaction” of the two spins. It describes the fact that when the spins come together there is a certain chance of scattering. The spins act very much like particles with an interaction. But the detailed theory of their scattering goes beyond what we want to talk about here.] |
|
3 | 15 | The Independent Particle Approximation | 3 | Independent particles | In the last section we wrote down a Hamiltonian, Eq. (15.15), for a two-particle system. Then using an approximation which is equivalent to neglecting any “interaction” of the two particles, we found the stationary states described by Eqs. (15.17) and (15.18). This state is just the product of two single-particle states. The solution we have given for $a_{m,n}$ in Eq. (15.18) is, however, really not satisfactory. We have very carefully pointed out earlier that the state $\ket{x_9,x_4}$ is not a different state from $\ket{x_4,x_9}$—the order of $x_m$ and $x_n$ has no significance. In general, the algebraic expression for the amplitude $C_{m,n}$ must be unchanged if we interchange the values of $x_m$ and $x_n$, since that doesn’t change the state. Either way, it should represent the amplitude to find a down spin at $x_m$ and a down spin at $x_n$. But notice that (15.18) is not symmetric in $x_m$ and $x_n$—since $k_1$ and $k_2$ can in general be different. The trouble is that we have not forced our solution of Eq. (15.15) to satisfy this additional condition. Fortunately it is easy to fix things up. Notice first that a solution of the Hamiltonian equation just as good as (15.18) is \begin{equation} \label{Eq:III:15:23} a_{m,n}=Ke^{ik_2x_m}e^{ik_1x_n}. \end{equation} It even has the same energy we got for (15.18). Any linear combination of (15.18) and (15.23) is also a good solution, and has an energy still given by Eq. (15.19). The solution we should have chosen—because of our symmetry requirement—is just the sum of (15.18) and (15.23): \begin{equation} \label{Eq:III:15:24} a_{m,n}=K[e^{ik_1x_m}e^{ik_2x_n}+e^{ik_2x_m}e^{ik_1x_n}]. \end{equation} Now, given any $k_1$ and $k_2$ the amplitude $C_{m,n}$ is independent of which way we put $x_m$ and $x_n$—if we should happen to define $x_m$ and $x_n$ reversed we get the same amplitude. Our interpretation of Eq. 
(15.24) in terms of “magnons” must also be different. We can no longer say that the equation represents one particle with wave number $k_1$ and a second particle with wave number $k_2$. The amplitude (15.24) represents one state with two particles (magnons). The state is characterized by the two wave numbers $k_1$ and $k_2$. Our solution looks like a compound state of one particle with the momentum $p_1=\hbar k_1$ and another particle with the momentum $p_2=\hbar k_2$, but in our state we can’t say which particle is which. By now, this discussion should remind you of Chapter 4 and our story of identical particles. We have just been showing that the particles of the spin waves—the magnons—behave like identical Bose particles. All amplitudes must be symmetric in the coordinates of the two particles—which is the same as saying that if we “interchange the two particles,” we get back the same amplitude and with the same sign. But, you may be thinking, why did we choose to add the two terms in making Eq. (15.24). Why not subtract? With a minus sign, interchanging $x_m$ and $x_n$ would just change the sign of $a_{m,n}$ which doesn’t matter. But interchanging $x_m$ and $x_n$ doesn’t change anything—all the electrons of the crystal are exactly where they were before, so there is no reason for even the sign of the amplitude to change. The magnons will behave like Bose particles.2 The main points of this discussion have been twofold: First, to show you something about spin waves, and, second, to demonstrate a state whose amplitude is a product of two amplitudes, and whose energy is the sum of the energies corresponding to the two amplitudes. For independent particles the amplitude is the product and the energy is the sum. You can easily see why the energy is the sum. The energy is the coefficient of $t$ in an imaginary exponential—it is proportional to the frequency. 
If two objects are doing something, one of them with the amplitude $e^{-iE_1t/\hbar}$ and the other with the amplitude $e^{-iE_2t/\hbar}$, and if the amplitude for the two things to happen together is the product of the amplitudes for each, then there is a single frequency in the product which is the sum of the two frequencies. The energy corresponding to the amplitude product is the sum of the two energies. We have gone through a rather long-winded argument to tell you a simple thing. When you don’t take into account any interaction between particles, you can think of each particle independently. They can individually exist in the various different states they would have alone, and they will each contribute the energy they would have had if they were alone. However, you must remember that if they are identical particles, they may behave either as Bose or as Fermi particles depending upon the problem. Two extra electrons added to a crystal, for instance, would have to behave like Fermi particles. When the positions of two electrons are interchanged, the amplitude must reverse sign. In the equation corresponding to Eq. (15.24) there would have to be a minus sign between the two terms on the right. As a consequence, two Fermi particles cannot be in exactly the same condition—with equal spins and equal $k$’s. The amplitude for this state is zero. |
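The two symmetry choices can be tried out with a few lines of arithmetic. This sketch (arbitrary wave numbers, chosen only for illustration) shows that the Bose combination of Eq. (15.24) is unchanged when $x_m$ and $x_n$ are interchanged, and that the Fermi combination with the minus sign vanishes identically when the two $k$'s are equal.

```python
import numpy as np

b = 1.0
k1, k2 = 0.7, 1.9        # arbitrary, unequal wave numbers (illustrative)

def a_bose(m, n, k1, k2):
    """Symmetrized amplitude of Eq. (15.24), with K = 1."""
    return (np.exp(1j*k1*m*b) * np.exp(1j*k2*n*b)
            + np.exp(1j*k2*m*b) * np.exp(1j*k1*n*b))

def a_fermi(m, n, k1, k2):
    """Same, but with the minus sign appropriate to Fermi particles."""
    return (np.exp(1j*k1*m*b) * np.exp(1j*k2*n*b)
            - np.exp(1j*k2*m*b) * np.exp(1j*k1*n*b))

m, n = 3, 8
# Interchanging the two coordinates leaves the Bose amplitude alone...
print(np.isclose(a_bose(m, n, k1, k2), a_bose(n, m, k1, k2)))   # True
# ...and with equal k's the Fermi amplitude is exactly zero:
print(np.isclose(a_fermi(3, 8, 0.7, 0.7), 0))                   # True
```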
|
3 | 15 | The Independent Particle Approximation | 4 | The benzene molecule | Although quantum mechanics provides the basic laws that determine the structures of molecules, these laws can be applied exactly only to the most simple compounds. The chemists have, therefore, worked out various approximate methods for calculating some of the properties of complicated molecules. We would now like to show you how the independent particle approximation is used by the organic chemists. We begin with the benzene molecule. We discussed the benzene molecule from another point of view in Chapter 10. There we took an approximate picture of the molecule as a two-state system, with the two base states shown in Fig. 15–3. There is a ring of six carbons with a hydrogen bonded to the carbon at each location. With the conventional picture of valence bonds it is necessary to assume double bonds between half of the carbon atoms, and in the lowest energy condition there are the two possibilities shown in the figure. There are also other, higher-energy states. When we discussed benzene in Chapter 10, we just took the two states and forgot all the rest. We found that the ground-state energy of the molecule was not the energy of one of the states in the figure, but was lower than that by an amount proportional to the amplitude to flip from one of these states to the other. Now we’re going to look at the same molecule from a completely different point of view—using a different kind of approximation. The two points of view will give us different answers, but if we improve either approximation it should lead to the truth, a valid description of benzene. However, if we don’t bother to improve them, which is of course the usual situation, then you should not be surprised if the two descriptions do not agree exactly. We shall at least show that also with the new point-of-view the lowest energy of the benzene molecule is lower than either of the three-bond structures of Fig. 15–3. 
Now we want to use the following picture. Suppose we imagine the six carbon atoms of a benzene molecule connected only by single bonds as in Fig. 15–4. We have removed six electrons—since a bond stands for a pair of electrons—so we have a six-times ionized benzene molecule. Now we will consider what happens when we put back the six electrons one at a time, imagining that each one can run freely around the ring. We assume also that all the bonds shown in Fig. 15–4 are satisfied, and don’t need to be considered further. What happens when we put one electron back into the molecular ion? It might, of course, be located in any one of the six positions around the ring—corresponding to six base states. It would also have a certain amplitude, say $A$, to go from one position to the next. If we analyze the stationary states, there would be certain possible energy levels. That’s only for one electron. Next put a second electron in. And now we make the most ridiculous approximation that you can think of—that what one electron does is not affected by what the other is doing. Of course they really will interact; they repel each other through the Coulomb force, and furthermore when they are both at the same site, they must have considerably different energy than twice the energy for one being there. Certainly the approximation of independent particles is not legitimate when there are only six sites—particularly when we want to put in six electrons. Nevertheless the organic chemists have been able to learn a lot by making this kind of an approximation. Before we work out the benzene molecule in detail, let’s consider a simpler example—the ethylene molecule which contains just two carbon atoms with two hydrogen atoms on either side as shown in Fig. 15–5. This molecule has one “extra” bond involving two electrons between the two carbon atoms. Now remove one of these electrons; what do we have? 
We can look at it as a two-state system—the remaining electron can be at one carbon or the other. We can analyze it as a two-state system. The possible energies for the single electron are either $(E_0-A)$ or $(E_0+A)$, as shown in Fig. 15–6. Now add the second electron. Good, if we have two electrons, we can put the first one in the lower state and the second one in the upper. Not quite; we forgot something. Each one of the states is really double. When we say there’s a possible state with the energy $(E_0-A)$, there are really two. Two electrons can go into the same state if one has its spin up and the other, its spin down. (No more can be put in because of the exclusion principle.) So there really are two possible states of energy $(E_0-A)$. We can draw a diagram, as in Fig. 15–7, which indicates both the energy levels and their occupancy. In the condition of lowest energy both electrons will be in the lowest state with their spins opposite. The energy of the extra bond in the ethylene molecule therefore is $2(E_0-A)$ if we neglect the interaction between the two electrons. Let’s go back to the benzene. Each of the two states of Fig. 15–3 has three double bonds. Each of these is just like the bond in ethylene, and contributes $2(E_0-A)$ to the energy if $E_0$ is now the energy to put an electron on a site in benzene and $A$ is the amplitude to flip to the next site. So the energy should be roughly $6(E_0-A)$. But when we studied benzene before, we got that the energy was lower than the energy of the structure with three extra bonds. Let’s see if the energy for benzene comes out lower than three bonds from our new point of view. We start with the six-times ionized benzene ring and add one electron. Now we have a six-state system. We haven’t solved such a system yet, but we know what to do. We can write six equations in the six amplitudes, and so on. 
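The ethylene bookkeeping just described is a two-state eigenvalue problem that can be checked directly; in this sketch $E_0=0$ and $A=1$ are illustrative units.

```python
import numpy as np

E0, A = 0.0, 1.0   # illustrative units

# Two-state Hamiltonian: the electron sits on one carbon or the other,
# with the off-diagonal amplitude -A to hop across
H = np.array([[E0, -A],
              [-A, E0]])

levels = np.sort(np.linalg.eigvalsh(H))
print(levels)             # [-1.  1.], i.e. [E0 - A, E0 + A]

# Two electrons, spins opposed, both in the lower level:
E_bond = 2 * levels[0]
print(E_bond)             # -2.0, i.e. 2*(E0 - A)
```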
But let’s save some work—by noticing that we’ve already solved the problem, when we worked out the problem of an electron on an infinite line of atoms. Of course, the benzene is not an infinite line, it has $6$ atomic sites in a circle. But imagine that we open out the circle to a line, and number the atoms along the line from $1$ to $6$. In an infinite line the next location would be $7$, but if we insist that this location be identical with number $1$ and so on, the situation will be just like the benzene ring. In other words we can take the solution for an infinite line with an added requirement that the solution must be periodic with a cycle six atoms long. From Chapter 13 the electron on a line has states of definite energy when the amplitude at each site is $e^{ikx_n}=e^{ikbn}$. For each $k$ the energy is \begin{equation} \label{Eq:III:15:25} E=E_0-2A\cos kb. \end{equation} We want to use now only those solutions which repeat every $6$ atoms. Let’s do first the general case for a ring of $N$ atoms. If the solution is to have a period of $N$ atomic spacing, $e^{ikbN}$ must be unity; or $kbN$ must be a multiple of $2\pi$. Taking $s$ to represent any integer, our condition is that \begin{equation} \label{Eq:III:15:26} kbN=2\pi s. \end{equation} We have seen before that there is no meaning to taking $k$’s outside the range $\pm\pi/b$. This means that we get all possible states by taking values of $s$ in the range $\pm N/2$. We find then that for an $N$-atom ring there are $N$ definite energy states3 and they have wave numbers $k_s$ given by \begin{equation} \label{Eq:III:15:27} k_s=\frac{2\pi}{Nb}\,s. \end{equation} Each state has the energy (15.25). We have a line spectrum of possible energy levels. The spectrum for benzene ($N=6$) is shown in Fig. 15–8(b). (The numbers in parentheses indicate the number of different states with the same energy.) There’s a nice way to visualize the six energy levels, as we have shown in part (a) of the figure. 
Imagine a circle centered on a level with $E_0$, and with a radius of $2A$. If we start at the bottom and mark off six equal arcs (at angles from the bottom point of $k_sb=2\pi s/N$, or $2\pi s/6$ for benzene), then the vertical heights of the points on the circle are the solutions of Eq. (15.25). The six points represent the six possible states. The lowest energy level is at $(E_0-2A)$; there are two states with the same energy $(E_0-A)$, and so on.4 These are possible states for one electron. If we have more than one electron, two—with opposite spins—can go into each state. For the benzene molecule we have to put in six electrons. For the ground state they will go into the lowest possible energy states—two at $s=0$, two at $s=+1$, and two at $s=-1$. According to the independent particle approximation the energy of the ground state is \begin{align} E_{\text{ground}} &=2(E_0-2A)+4(E_0-A)\notag\\[1ex] \label{Eq:III:15:28} &=6E_0-8A. \end{align} The energy is indeed less than that of three separate double bonds—by the amount $2A$. By comparing the energy of benzene to the energy of ethylene it is possible to determine $A$. It comes out to be $0.8$ electron volt, or, in the units the chemists like, $18$ kilocalories per mole. We can use this description to calculate or understand other properties of benzene. For example, using Fig. 15–8 we can discuss the excitation of benzene by light. What would happen if we tried to excite one of the electrons? It could move up to one of the empty higher states. The lowest energy of excitation would be a transition from the highest filled level to the lowest empty level. That takes the energy $2A$. Benzene will absorb light of frequency $\nu$ when $h\nu=2A$. There will also be absorption of photons with the energies $3A$ and $4A$. 
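The whole benzene level scheme can also be obtained by diagonalizing the six-site ring Hamiltonian directly. This sketch (with the illustrative units $E_0=0$, $A=1$) recovers the levels of Fig. 15–8 and then fills the three lowest levels with two electrons each to get the ground-state energy of Eq. (15.28).

```python
import numpy as np

N, E0, A = 6, 0.0, 1.0   # six sites; illustrative units

# Ring Hamiltonian: energy E0 on each site, amplitude -A to hop to a neighbor
H = np.zeros((N, N))
for n in range(N):
    H[n, n] = E0
    H[n, (n + 1) % N] = -A
    H[n, (n - 1) % N] = -A

levels = np.sort(np.linalg.eigvalsh(H))
print(np.round(levels, 6))
# [-2. -1. -1.  1.  1.  2.]: E0-2A, then E0-A (twice), E0+A (twice), E0+2A

# Six electrons, two (with opposite spins) per level, lowest levels first:
E_ground = 2 * (levels[0] + levels[1] + levels[2])
print(round(E_ground, 6))   # -8.0, i.e. 6*E0 - 8*A as in Eq. (15.28)
```

The same loop with `N` changed gives the level circle for any ring; the degeneracies in the printout are the numbers in parentheses in Fig. 15–8(b).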
Needless to say, the absorption spectrum of benzene has been measured and the pattern of spectral lines is more or less correct except that the lowest transition occurs in the ultraviolet; and to fit the data one would have to choose a value of $A$ between $1.4$ and $2.4$ electron volts. That is, the numerical value of $A$ is two or three times larger than is predicted from the chemical binding energy. What the chemist does in situations like this is to analyze many molecules of a similar kind and get some empirical rules. He learns, for example: For calculating binding energy use such and such a value of $A$, but for getting the absorption spectrum approximately right use another value of $A$. You may feel that this sounds a little absurd. It is not very satisfactory from the point of view of a physicist who is trying to understand nature from first principles. But the problem of the chemist is different. He must try to guess ahead of time what is going to happen with molecules that haven’t been made yet, or which aren’t understood completely. What he needs is a series of empirical rules; it doesn’t make much difference where they come from. So he uses the theory in quite a different way than the physicist. He takes equations that have some shadow of the truth in them, but then he must alter the constants in them—making empirical corrections. In the case of benzene, the principal reason for the inconsistency is our assumption that the electrons are independent—the theory we started with is really not legitimate. Nevertheless, it has some shadow of the truth because its results seem to be going in the right direction. With such equations plus some empirical rules—including various exceptions—the organic chemist makes his way through the morass of complicated things he chooses to study. (Don’t forget that the reason a physicist can really calculate from first principles is that he chooses only simple problems. 
He never solves a problem with $42$ or even $6$ electrons in it. So far, he has been able to calculate reasonably accurately only the hydrogen atom and the helium atom.)
15–5 More organic chemistry

Let’s see how the same ideas can be used to study other molecules. Consider a molecule like butadiene $(1,3)$—it is drawn in Fig. 15–9 according to the usual valence bond picture. We can play the same game with the extra four electrons corresponding to the two double bonds. If we remove four electrons, we have four carbon atoms in a line. You already know how to solve a line. You say, “Oh no, I only know how to solve an infinite line.” But the solutions for the infinite line also include the ones for a finite line. Watch. Let $N$ be the number of atoms on the line and number them from $1$ to $N$ as shown in Fig. 15–10. In writing the equations for the amplitude at position $1$ you would not have a term feeding from position $0$. Similarly, the equation for position $N$ would differ from the one that we used for an infinite line because there would be nothing feeding from position $N+1$. But suppose that we can obtain a solution for the infinite line which has the following property: the amplitude to be at atom $0$ is zero and the amplitude to be at atom $(N+1)$ is also zero. Then the equations for all the locations from $1$ to $N$ on the finite line are also satisfied. You might think no such solution exists for the infinite line because our solutions all looked like $e^{ikx_n}$ which has the same absolute value of the amplitude everywhere. But you will remember that the energy depends only on the absolute value of $k$, so that another solution, which is equally legitimate for the same energy, would be $e^{-ikx_n}$. And the same is true of any superposition of these two solutions. By subtracting them we can get the solution $\sin kx_n$, which satisfies the requirement that the amplitude be zero at $x=0$. It still corresponds to the energy $(E_0-2A\cos kb)$. Now by a suitable choice for the value of $k$ we can also make the amplitude zero at $x_{N+1}$.
This requires that $(N+1)kb$ be a multiple of $\pi$, or that \begin{equation} \label{Eq:III:15:29} kb=\frac{\pi}{(N+1)}\,s, \end{equation} where $s$ is an integer from $1$ to $N$. (We take only positive $k$’s because each solution contains $+k$ and $-k$; changing the sign of $k$ gives the same state all over again.) For the butadiene molecule, $N=4$, so there are four states with \begin{equation} \label{Eq:III:15:30} kb=\pi/5,\quad 2\pi/5,\quad 3\pi/5,\quad \text{and}\quad 4\pi/5. \end{equation} We can represent the energy levels using a circle diagram similar to the one for benzene. This time we use a semicircle divided into five equal parts as shown in Fig. 15–11. The point at the bottom corresponds to $s=0$, which gives no state at all. The same is true of the point at the top, which corresponds to $s=N+1$. The remaining $4$ points give us four allowed energies. There are four stationary states, which is what we expect, having started with four base states. In the circle diagram, the angular intervals are $\pi/5$ or $36$ degrees. The lowest energy comes out $(E_0-1.618A)$. (Ah, what wonders mathematics holds; the golden mean of the Greeks gives us the lowest energy state of the butadiene molecule according to this theory!) Now we can calculate the energy of the butadiene molecule when we put in four electrons. With four electrons, we fill up the lowest two levels, each with two electrons of opposite spin. The total energy is \begin{equation} \label{Eq:III:15:31} E=2(E_0-1.618A)+2(E_0-0.618A)= 4(E_0-A)-0.472A. \end{equation}
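The butadiene numbers, golden mean and all, can be reproduced in a few lines. As before this is an illustrative sketch with $E_0=0$, $A=1$, and a function name of our own choosing; the golden mean appears because $2\cos(\pi/5)=(1+\sqrt{5})/2$ exactly.

```python
import numpy as np

# Levels of a finite line of N atoms, from kb = pi*s/(N+1), s = 1..N
# (Eq. 15.29), each with energy E0 - 2A*cos(kb).  E0 = 0, A = 1.
def line_levels(N, E0=0.0, A=1.0):
    s = np.arange(1, N + 1)
    return np.sort(E0 - 2 * A * np.cos(np.pi * s / (N + 1)))

levels = line_levels(4)                  # butadiene, N = 4
# Lowest level: E0 - 1.618A, since 2*cos(pi/5) = (1 + sqrt(5))/2.
# Four electrons fill the lowest two levels in pairs, Eq. (15.31):
E_total = 2 * levels[0] + 2 * levels[1]  # = 4(E0 - A) - 0.472A
```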
This result seems reasonable. The energy is a little lower than for two simple double bonds, but the binding is not so strong as in benzene. Anyway this is the way the chemist analyzes some organic molecules. The chemist can use not only the energies but the probability amplitudes as well. Knowing the amplitudes for each state, and which states are occupied, he can tell the probability of finding an electron anywhere in the molecule. Those places where the electrons are more likely to be are apt to be reactive in chemical substitutions which require that an electron be shared with some other group of atoms. The other sites are more likely to be reactive in those substitutions which have a tendency to yield an extra electron to the system. The same ideas we have been using can give us some understanding of a molecule even as complicated as chlorophyll, one version of which is shown in Fig. 15–12. Notice that the double and single bonds we have drawn with heavy lines form a long closed ring with twenty intervals. The extra electrons of the double bonds can run around this ring. Using the independent particle method we can get a whole set of energy levels. There are strong absorption lines from transitions between these levels which lie in the visible part of the spectrum, and give this molecule its strong color. Similar complicated molecules such as the xanthophylls, which make leaves turn yellow, can be studied in the same way. There is one more idea which emerges from the application of this kind of theory in organic chemistry. It is probably the most successful or, at least in a certain sense, the most accurate. This idea has to do with the question: In what situations does one get a particularly strong chemical binding? The answer is very interesting.
Take the example, first, of benzene, and imagine the sequence of events that occurs as we start with the six-times ionized molecule and add more and more electrons. We would then be thinking of various benzene ions—negative or positive. Suppose we plot the energy of the ion (or neutral molecule) as a function of the number of electrons. If we take $E_0=0$ (since we don’t know what it is), we get the curve shown in Fig. 15–13. For the first two electrons the function is a straight line. For each successive group the slope increases, and there is a discontinuity in slope between the groups of electrons. The slope changes when one has just finished filling a set of levels which all have the same energy and must move up to the next higher set of levels for the next electron. The actual energy of the benzene ion is really quite different from the curve of Fig. 15–13 because of the interactions of the electrons and because of electrostatic energies we have been neglecting. These corrections will, however, vary with $n$ in a rather smooth way. Even if we were to make all these corrections, the resulting energy curve would still have kinks at those values of $n$ which just fill up a particular energy level. Now consider a very smooth curve that fits the points on the average like the one drawn in Fig. 15–14. We can say that the points above this curve have “higher-than-normal” energies, and the points below the curve have “lower-than-normal” energies. We would, in general, expect that those configurations with a lower-than-normal energy would have an above average stability—chemically speaking. Notice that the configurations farther below the curve always occur at the end of one of the straight line segments—namely when there are enough electrons to fill up an “energy shell,” as it is called. This is the very accurate prediction of the theory.
Molecules—or ions—are particularly stable (in comparison with other similar configurations) when the available electrons just fill up an energy shell. This theory has explained and predicted some very peculiar chemical facts. To take a very simple example, consider a ring of three. It’s almost unbelievable that the chemist can make a ring of three and have it stable, but it has been done. The energy circle for three electrons is shown in Fig. 15–15. Now if you put two electrons in the lower state, you have only two of the three electrons that you require. The third electron must be put in at a much higher level. By our argument this molecule should not be particularly stable, whereas the two-electron structure should be stable. It does turn out, in fact, that the neutral molecule of triphenyl cyclopropenyl is very hard to make, but that the positive ion shown in Fig. 15–16 is relatively easy to make. The ring of three is never really easy because there is always a large stress when the bonds in an organic molecule make an equilateral triangle. To make a stable compound at all, the structure must be stabilized in some way. Anyway if you add three benzene rings on the corners, the positive ion can be made. (The reason for this requirement of added benzene rings is not really understood.) In a similar way the five-sided ring can also be analyzed. If you draw the energy diagram, you can see in a qualitative way that the six-electron structure should be an especially stable structure, so that such a molecule should be most stable as a negative ion. Now the five-ring is well known and easy to make and always acts as a negative ion. Similarly, you can easily verify that a ring of $4$ or $8$ is not very interesting, but that a ring of $14$ or $10$—like a ring of $6$—should be especially stable as a neutral object.
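The shell-counting argument above is easy to mechanize. The sketch below (our own helper, same units as before) groups the ring levels by energy and reports the electron counts at which a shell just closes; the rings of $3$, $5$, and $6$ come out as the text says.

```python
import numpy as np

# Group the ring levels into "shells" (sets of states with the same
# energy) and record the electron counts at which a shell is completed,
# two electrons per state.  Units: E0 = 0, A = 1.
def closed_shell_counts(N, tol=1e-9):
    levels = np.sort(-2 * np.cos(2 * np.pi * np.arange(N) / N))
    counts, filled = [], 0
    for i, E in enumerate(levels):
        filled += 2                        # two spin states per level
        if i == N - 1 or levels[i + 1] - E > tol:
            counts.append(filled)          # next level is higher: shell closed
    return counts

# Ring of 3: shells close at 2 and 6 electrons, so the two-electron
# structure (the positive ion) is the stable one.  Ring of 5: 2, 6, 10,
# and six electrons means the negative ion.  Ring of 6: 2, 6, 10, 12;
# neutral benzene, with six electrons, just closes a shell.
```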
15–6 Other uses of the approximation

There are two other similar situations which we will describe only briefly. In considering the structure of an atom, we can consider that the electrons fill successive shells. The Schrödinger theory of electron motion can be worked out easily only for a single electron moving in a “central” field—one which varies only with the distance from a point. How can we then understand what goes on in an atom which has $22$ electrons?! One way is to use a kind of independent particle approximation. First you calculate what happens with one electron. You get a number of energy levels. You put an electron into the lowest energy state. You can, for a rough model, continue to ignore the electron interactions and go on filling successive shells, but there is a way to get better answers by taking into account—in an approximate way at least—the effect of the electric charge carried by the electron. Each time you add an electron you compute its amplitude to be at various places, and then use this amplitude to estimate a kind of spherically symmetric charge distribution. You use the field of this distribution—together with the field of the positive nucleus and all the previous electrons—to calculate the states available for the next electron. In this way you can get reasonably correct estimates for the energies for the neutral atom and for various ionized states. You find that there are energy shells, just as we saw for the electrons in a ring molecule. With a partially filled shell, the atom will show a preference for taking on one or more extra electrons, or for losing some electrons so as to get into the most stable state of a filled shell. This theory explains the machinery behind the fundamental chemical properties which show up in the periodic table of the elements. The inert gases are those elements in which a shell has just been completed, and it is especially difficult to make them react.
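The inert-gas counts can be recovered from the central-field states $(n,l)$, each holding $2(2l+1)$ electrons, if one adds the empirical Madelung filling order (sort by $n+l$, then by $n$). That ordering rule is an assumption layered on top of the independent particle picture, not something derived here; the function name is ours.

```python
# Shell filling for atoms.  Central-field subshells are labeled (n, l),
# l < n, with capacity 2*(2l+1).  The filling order uses the empirical
# Madelung rule: sort by n+l, break ties by n.  An inert-gas shell ends
# after 1s or after each np subshell.
def noble_gas_numbers(max_n=6):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))   # Madelung order
    total, closed = 0, []
    for n, l in subshells:
        total += 2 * (2 * l + 1)          # two spins, 2l+1 orientations
        if (n, l) == (1, 0) or l == 1:    # shell closes after 1s or an np
            closed.append(total)
    return closed

# Gives 2, 10, 18, 36, 54, 86 -- the electron counts of the inert gases.
```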
(Some of them do react of course—with fluorine and oxygen, for example; but such compounds are very weakly bound; the so-called inert gases are nearly inert.) An atom which has one electron more or one electron less than an inert gas will easily lose or gain an electron to get into the especially stable (low-energy) condition which comes from having a completely filled shell—they are the very active chemical elements of valence $+1$ or $-1$. The other situation is found in nuclear physics. In atomic nuclei the protons and neutrons interact with each other quite strongly. Even so, the independent particle model can again be used to analyze nuclear structure. It was first discovered experimentally that nuclei were especially stable if they contained certain particular numbers of neutrons—namely $2$, $8$, $20$, $28$, $50$, $82$. Nuclei containing protons in these numbers are also especially stable. Since there was initially no explanation for these numbers they were called the “magic numbers” of nuclear physics. It is well known that neutrons and protons interact strongly with each other; people were, therefore, quite surprised when it was discovered that an independent particle model predicted a shell structure which came out with the first few magic numbers. The model assumed that each nucleon (proton or neutron) moved in a central potential which was created by the average effects of all the other nucleons. This model failed, however, to give the correct values for the higher magic numbers. Then it was discovered by Maria Mayer, and independently by Jensen and his collaborators, that by taking the independent particle model and adding only a correction for what is called the “spin-orbit interaction,” one could make an improved model which gave all of the magic numbers. (The spin-orbit interaction causes the energy of a nucleon to be lower if its spin has the same direction as its orbital angular momentum from motion in the nucleus.) 
The theory gives even more—its picture of the so-called “shell structure” of the nuclei enables us to predict certain characteristics of nuclei and of nuclear reactions. The independent particle approximation has been found useful in a wide range of subjects—from solid-state physics, to chemistry, to biology, to nuclear physics. It is often only a crude approximation, but is able to give an understanding of why there are especially stable conditions—in shells. Since it omits all of the complexity of the interactions between the individual particles, we should not be surprised that it often fails completely to give correctly many important details.
16 The Dependence of Amplitudes on Position

16–1 Amplitudes on a line

We are now going to discuss how the probability amplitudes of quantum mechanics vary in space. In some of the earlier chapters you may have had a rather uncomfortable feeling that some things were being left out. For example, when we were talking about the ammonia molecule, we chose to describe it in terms of two base states. For one base state we picked the situation in which the nitrogen atom was “above” the plane of the three hydrogen atoms, and for the other base state we picked the condition in which the nitrogen atom was “below” the plane of the three hydrogen atoms. Why did we pick just these two states? Why is it not possible that the nitrogen atom could be at $2$ angstroms above the plane of the three hydrogen atoms, or at $3$ angstroms, or at $4$ angstroms above the plane? Certainly, there are many positions that the nitrogen atom could occupy. Again when we talked about the hydrogen molecular ion, in which there is one electron shared by two protons, we imagined two base states: one for the electron in the neighborhood of proton number one, and the other for the electron in the neighborhood of proton number two. Clearly we were leaving out many details. The electron is not exactly at proton number two but is only in the neighborhood. It could be somewhere above the proton, somewhere below the proton, somewhere to the left of the proton, or somewhere to the right of the proton. We intentionally avoided discussing these details. We said that we were interested in only certain features of the problem, so we were imagining that when the electron was in the vicinity of proton number one, it would take up a certain rather definite condition. In that condition the probability to find the electron would have some rather definite distribution around the proton, but we were not interested in the details. We can also put it another way.
In our discussion of a hydrogen molecular ion we chose an approximate description when we described the situation in terms of two base states. In reality there are lots and lots of these states. An electron can take up a condition around a proton in its lowest, or ground, state, but there are also many excited states. For each excited state the distribution of the electron around the proton is different. We ignored these excited states, saying that we were interested in only the conditions of low energy. But it is just these other excited states which give the possibility of various distributions of the electron around the proton. If we want to describe in detail the hydrogen molecular ion, we have to take into account also these other possible base states. We could do this in several ways, and one way is to consider in greater detail states in which the location of the electron in space is more carefully described. We are now ready to consider a more elaborate procedure which will allow us to talk in detail about the position of the electron, by giving a probability amplitude to find the electron anywhere and everywhere in a given situation. This more complete theory provides the underpinning for the approximations we have been making in our earlier discussions. In a sense, our early equations can be derived as a kind of approximation to the more complete theory. You may be wondering why we did not begin with the more complete theory and make the approximations as we went along. We have felt that it would be much easier for you to gain an understanding of the basic machinery of quantum mechanics by beginning with the two-state approximations and working gradually up to the more complete theory than to approach the subject the other way around. For this reason our approach to the subject appears to be in the reverse order to the one you will find in many books. 
As we go into the subject of this chapter you will notice that we are breaking a rule we have always followed in the past. Whenever we have taken up any subject we have always tried to give a more or less complete description of the physics—showing you as much as we could about where the ideas led to. We have tried to describe the general consequences of a theory as well as describing some specific detail so that you could see where the theory would lead. We are now going to break that rule; we are going to describe how one can talk about probability amplitudes in space and show you the differential equations which they satisfy. We will not have time to go on and discuss many of the obvious implications which come out of the theory. Indeed we will not even be able to get far enough to relate this theory to some of the approximate formulations we have used earlier—for example, to the hydrogen molecule or to the ammonia molecule. For once, we must leave our business unfinished and open-ended. We are approaching the end of our course, and we must satisfy ourselves with trying to give you an introduction to the general ideas and with indicating the connections between what we have been describing and some of the other ways of approaching the subject of quantum mechanics. We hope to give you enough of an idea that you can go off by yourself and by reading books learn about many of the implications of the equations we are going to describe. We must, after all, leave something for the future. Let’s review once more what we have found out about how an electron can move along a line of atoms. When an electron has an amplitude to jump from one atom to the next, there are definite energy states in which the probability amplitude for finding the electron is distributed along the lattice in the form of a traveling wave. For long wavelengths—for small values of the wave number $k$—the energy of the state is proportional to the square of the wave number. 
For a crystal lattice with the spacing $b$, in which the amplitude per unit time for the electron to jump from one atom to the next is $iA/\hbar$, the energy of the state is related to $k$ (for small $kb$) by \begin{equation} \label{Eq:III:16:1} E=Ak^2b^2 \end{equation} (see Section 13–2). We also saw that groups of such waves with similar energies would make up a wave packet which would behave like a classical particle with a mass $m_{\text{eff}}$ given by: \begin{equation} \label{Eq:III:16:2} m_{\text{eff}}=\frac{\hbar^2}{2Ab^2}. \end{equation} Since waves of probability amplitude in a crystal behave like a particle, one might well expect that the general quantum mechanical description of a particle would show the same kind of wave behavior we observed for the lattice. Suppose we were to think of a lattice on a line and imagine that the lattice spacing $b$ were to be made smaller and smaller. In the limit we would be thinking of a case in which the electron could be anywhere along the line. We would have gone over to a continuous distribution of probability amplitudes. We would have the amplitude to find an electron anywhere along the line. This would be one way to describe the motion of an electron in a vacuum. In other words, if we imagine that space can be labeled by an infinity of points all very close together and we can work out the equations that relate the amplitudes at one point to the amplitudes at neighboring points, we will have the quantum mechanical laws of motion of an electron in space. Let’s begin by recalling some of the general principles of quantum mechanics. Suppose we have a particle which can exist in various conditions in a quantum mechanical system. Any particular condition an electron can be found in, we call a “state,” which we label with a state vector, say $\ket{\phi}$. Some other condition would be labeled with another state vector, say $\ket{\psi}$. We then introduce the idea of base states. 
We say that there is a set of states $\ket{1}$, $\ket{2}$, $\ket{3}$, $\ket{4}$, and so on, which have the following properties. First, all of these states are quite distinct—we say they are orthogonal. By this we mean that for any two of the base states $\ket{i}$ and $\ket{j}$ the amplitude $\braket{i}{j}$ that an electron known to be in the state $\ket{i}$ is also in the state $\ket{j}$ is equal to zero—unless, of course, $\ket{i}$ and $\ket{j}$ stand for the same state. We represent this symbolically by \begin{equation} \label{Eq:III:16:3} \braket{i}{j}=\delta_{ij}. \end{equation} You will remember that $\delta_{ij}=0$ if $i$ and $j$ are different, and $\delta_{ij}=1$ if $i$ and $j$ are the same number. Second, the base states $\ket{i}$ must be a complete set, so that any state at all can be described in terms of them. That is, any state $\ket{\phi}$ at all can be described completely by giving all of the amplitudes $\braket{i}{\phi}$ that a particle in the state $\ket{\phi}$ will also be found in the state $\ket{i}$. In fact, the state vector $\ket{\phi}$ is equal to the sum of the base states each multiplied by a coefficient which is the amplitude that the state $\ket{\phi}$ is also in the state $\ket{i}$: \begin{equation} \label{Eq:III:16:4} \ket{\phi}=\sum_i\ket{i}\braket{i}{\phi}. \end{equation} Finally, if we consider any two states $\ket{\phi}$ and $\ket{\psi}$, the amplitude that the state $\ket{\psi}$ will also be in the state $\ket{\phi}$ can be found by first projecting the state $\ket{\psi}$ into the base states and then projecting from each base state into the state $\ket{\phi}$. We write that in the following way: \begin{equation} \label{Eq:III:16:5} \braket{\phi}{\psi}=\sum_i\braket{\phi}{i}\braket{i}{\psi}. \end{equation} The summation is, of course, to be carried out over the whole set of base states $\ket{i}$. 
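These three properties of a basis can be verified numerically in a small example. The sketch below (numpy, our own setup) takes the columns of a random unitary matrix as an orthonormal basis and checks Eqs. (16.3) and (16.5); note that numpy's `vdot` conjugates its first argument, which is just what a bra requires.

```python
import numpy as np

# Numerical sketch of Eqs. (16.3)-(16.5): for an orthonormal basis,
# <phi|psi> equals the sum over i of <phi|i><i|psi>.  The basis is the
# set of columns of a random unitary matrix; phi and psi are arbitrary
# complex vectors.
rng = np.random.default_rng(0)
n = 4
q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
phi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

# Orthogonality, Eq. (16.3):  <i|j> = delta_ij
assert np.allclose(q.conj().T @ q, np.eye(n))

direct = np.vdot(phi, psi)                        # <phi|psi>
via_basis = sum(np.vdot(phi, q[:, i]) * np.vdot(q[:, i], psi)
                for i in range(n))                # Eq. (16.5)
assert np.isclose(direct, via_basis)
```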
In Chapter 13 when we were working out what happens with an electron placed on a linear array of atoms, we chose a set of base states in which the electron was localized at one or other of the atoms in the line. The base state $\ket{n}$ represented the condition in which the electron was localized at atom number “$n$.” (There is, of course, no significance to the fact that we called our base states $\ket{n}$ instead of $\ket{i}$.) A little later, we found it convenient to label the base states by the coordinate $x_n$ of the atom rather than by the number of the atom in the array. The state $\ket{x_n}$ is just another way of writing the state $\ket{n}$. Then, following the general rules, any state at all, say $\ket{\psi}$ is described by giving the amplitudes that an electron in the state $\ket{\psi}$ is also in one of the states $\ket{x_n}$. For convenience we have chosen to let the symbol $C_n$ stand for these amplitudes, \begin{equation} \label{Eq:III:16:6} C_n=\braket{x_n}{\psi}. \end{equation} Since the base states are associated with a location along the line, we can think of the amplitude $C_n$ as a function of the coordinate $x$ and write it as $C(x_n)$. The amplitudes $C(x_n)$ will, in general, vary with time and are, therefore, also functions of $t$. We will not generally bother to show explicitly this dependence. In Chapter 13 we then proposed that the amplitudes $C(x_n)$ should vary with time in a way described by the Hamiltonian equation (Eq. 13.3). In our new notation this equation is \begin{equation} \label{Eq:III:16:7} i\hbar\ddp{C(x_n)}{t}\!=\!E_0C(x_n)\!-\! AC(x_n\!+\!b)\!-\!AC(x_n\!-\!b). \end{equation} The last two terms on the right-hand side represent the process in which an electron at atom $(n+1)$ or at atom $(n-1)$ can feed into atom $n$. We found that Eq. (16.7) has solutions corresponding to definite energy states, which we wrote as \begin{equation} \label{Eq:III:16:8} C(x_n)=e^{-iEt/\hbar}e^{ikx_n}. 
\end{equation} For the low-energy states the wavelengths are large ($k$ is small), and the energy is related to $k$ by \begin{equation} \label{Eq:III:16:9} E=(E_0-2A)+Ak^2b^2, \end{equation} or, choosing our zero of energy so that $(E_0-2A)=0$, the energy is given by Eq. (16.1). Let’s see what might happen if we were to let the lattice spacing $b$ go to zero, keeping the wave number $k$ fixed. If that is all that were to happen the last term in Eq. (16.9) would just go to zero and there would be no physics. But suppose $A$ and $b$ are varied together so that as $b$ goes to zero the product $Ab^2$ is kept constant—using Eq. (16.2) we will write $Ab^2$ as the constant $\hbar^2/2m_{\text{eff}}$. Under these circumstances, Eq. (16.9) would be unchanged, but what would happen to the differential equation (16.7)? First we will rewrite Eq. (16.7) as \begin{equation} \label{Eq:III:16:10} i\hbar\,\ddp{C(x_n)}{t}=(E_0-2A)C(x_n)+A[2C(x_n)- C(x_n+b)-C(x_n-b)]. \end{equation}
For our choice of $E_0$, the first term drops out. Next, we can think of a continuous function $C(x)$ that goes smoothly through the proper values $C(x_n)$ at each $x_n$. As the spacing $b$ goes to zero, the points $x_n$ get closer and closer together, and (if we keep the variation of $C(x)$ fairly smooth) the quantity in the brackets is just proportional to the second derivative of $C(x)$. We can write—as you can see by making a Taylor expansion of each term—the equality \begin{equation} \label{Eq:III:16:11} 2C(x)-C(x+b)-C(x-b)\approx-b^2\, \frac{\partial^2C(x)}{\partial x^2}. \end{equation} In the limit, then, as $b$ goes to zero, keeping $b^2A$ equal to $\hbar^2/2m_{\text{eff}}$, Eq. (16.7) goes over into \begin{equation} \label{Eq:III:16:12} i\hbar\,\ddp{C(x)}{t}=-\frac{\hbar^2}{2m_{\text{eff}}}\, \frac{\partial^2C(x)}{\partial x^2}. \end{equation} We have an equation which says that the time rate of change of $C(x)$—the amplitude to find the electron at $x$—depends on the amplitude to find the electron at nearby points in a way which is proportional to the second derivative of the amplitude with respect to position. The correct quantum mechanical equation for the motion of an electron in free space was first discovered by Schrödinger. For motion along a line it has exactly the form of Eq. (16.12) if we replace $m_{\text{eff}}$ by $m$, the free-space mass of the electron. For motion along a line in free space the Schrödinger equation is \begin{equation} \label{Eq:III:16:13} i\hbar\,\ddp{C(x)}{t}=-\frac{\hbar^2}{2m}\, \frac{\partial^2C(x)}{\partial x^2}. \end{equation} We do not intend to have you think we have derived the Schrödinger equation but only wish to show you one way of thinking about it.
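The approximation in Eq. (16.11) is easy to test numerically. The sketch below tries it on a plane wave $C(x)=e^{ikx}$, whose second derivative is known exactly; $k$, $x$, and the sample spacings are arbitrary choices of ours.

```python
import numpy as np

# Numerical check of Eq. (16.11): for a smooth C(x),
#   [2C(x) - C(x+b) - C(x-b)] / (-b**2)  ->  d2C/dx2   as b -> 0.
# Tried on the plane wave C(x) = exp(i*k*x), where d2C/dx2 = -k**2 C(x).
k, x = 1.7, 0.3
C = lambda x: np.exp(1j * k * x)
exact = -k**2 * C(x)

errors = []
for b in [0.1, 0.01]:
    fd = (2 * C(x) - C(x + b) - C(x - b)) / (-b**2)
    errors.append(abs(fd - exact))        # shrinks like b**2
```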
When Schrödinger first wrote it down, he gave a kind of derivation based on some heuristic arguments and some brilliant intuitive guesses. Some of the arguments he used were even false, but that does not matter; the only important thing is that the ultimate equation gives a correct description of nature. The purpose of our discussion is then simply to show you that the correct fundamental quantum mechanical equation (16.13) has the same form you get for the limiting case of an electron moving along a line of atoms. This means that we can think of the differential equation in (16.13) as describing the diffusion of a probability amplitude from one point to the next along the line. That is, if an electron has a certain amplitude to be at one point, it will, a little time later, have some amplitude to be at neighboring points. In fact, the equation looks something like the diffusion equations which we have used in Volume I. But there is one main difference: the imaginary coefficient in front of the time derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Eq. (16.13) are complex waves.
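One can see the complex-wave character directly by plugging a plane wave into Eq. (16.13). The sketch below checks, by central differences, that $C(x,t)=e^{i(kx-\omega t)}$ with $\hbar\omega=\hbar^2k^2/2m$ satisfies the equation; the numerical values of $\hbar$, $m$, $k$ are placeholders of ours.

```python
import numpy as np

# The solutions of Eq. (16.13) are complex waves: check that
# C(x, t) = exp(i(k*x - w*t)), with hbar*w = (hbar*k)**2 / (2m),
# satisfies  i*hbar dC/dt = -(hbar**2/2m) d2C/dx2,
# evaluating both sides by central differences.
hbar, m, k = 1.0, 1.0, 2.0
w = hbar * k**2 / (2 * m)
C = lambda x, t: np.exp(1j * (k * x - w * t))

x, t, h = 0.4, 0.2, 1e-4
lhs = 1j * hbar * (C(x, t + h) - C(x, t - h)) / (2 * h)
rhs = -(hbar**2 / (2 * m)) * (C(x + h, t) - 2 * C(x, t) + C(x - h, t)) / h**2
# Unlike ordinary diffusion, nothing decays: |C| = 1 everywhere and
# always; only the phase turns.
```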