A Briefer History Of Time
8
THE BIG BANG, BLACK HOLES, AND THE EVOLUTION OF THE UNIVERSE
IN FRIEDMANN'S FIRST MODEL of the universe, the fourth dimension, time, like space, is finite in extent. It is like a line with two ends, or boundaries. So time has an end, and it also has a beginning. In fact, all solutions to Einstein's equations in which the universe has the amount of matter we observe share one very important feature: at some time in the past (about 13.7 billion years ago), the distance between neighboring galaxies must have been zero. In other words, the entire universe was squashed into a single point with zero size, like a sphere of radius zero. At that time, the density of the universe and the curvature of space-time would have been infinite. It is the time that we call the big bang.
All our theories of cosmology are formulated on the assumption that space-time is smooth and nearly flat. That means that all our theories break down at the big bang: a space-time with infinite curvature can hardly be called nearly flat! Thus even if there were events before the big bang, we could not use them to determine what would happen afterward, because predictability would have broken down at the big bang.
Correspondingly, if, as is the case, we know only what has happened since the big bang, we cannot determine what happened beforehand. As far as we are concerned, events before the big bang can have no consequences and so should not form part of a scientific model of the universe. We should therefore cut them out of the model and say that the big bang was the beginning of time. This means that questions such as who set up the conditions for the big bang are not questions that science addresses.
Another infinity that arises if the universe has zero size is in temperature. At the big bang itself, the universe is thought to have been infinitely hot. As the universe expanded, the temperature of the radiation decreased. Since temperature is simply a measure of the average energy-or speed-of particles, this cooling of the universe would have a major effect on the matter in it. At very high temperatures, particles would be moving around so fast that they could escape any attraction toward each other resulting from nuclear or electromagnetic forces, but as they cooled off, we would expect particles that attract each other to start to clump together. Even the types of particles that exist in the universe depend on the temperature, and hence on the age, of the universe.
Aristotle did not believe that matter was made of particles. He believed that matter was continuous. That is, according to him, a piece of matter could be divided into smaller and smaller bits without any limit: there could never be a grain of matter that could not be divided further. A few Greeks, however, such as Democritus, held that matter was inherently grainy and that everything was made up of large numbers of various different kinds of atoms. (The word atom means "indivisible" in Greek.) We now know that this is true-at least in our environment, and in the present state of the universe. But the atoms of our universe did not always exist, they are not indivisible, and they represent only a small fraction of the types of particles in the universe.
Atoms are made of smaller particles: electrons, protons, and neutrons. The protons and neutrons themselves are made of yet smaller particles called quarks. In addition, corresponding to each of these subatomic particles there exists an antiparticle. Antiparticles have the same mass as their sibling particles but are opposite in their charge and other attributes. For instance, the antiparticle for an electron, called a positron, has a positive charge, the opposite of the charge of the electron. There could be whole antiworlds and antipeople made out of antiparticles. However, when an antiparticle and particle meet, they annihilate each other. So if you meet your antiself, don't shake hands-you would both vanish in a great flash of light!
Light energy comes in the form of another type of particle, a massless particle called a photon. The nearby nuclear furnace of the sun is the greatest source of photons for the earth. The sun is also a huge source of another kind of particle, the aforementioned neutrino (and antineutrino). But these extremely light particles hardly ever interact with matter, and hence they pass through us without effect, at a rate of billions each second. All told, physicists have discovered dozens of these elementary particles. Over time, as the universe has undergone a complex evolution, the makeup of this zoo of particles has also evolved. It is this evolution that has made it possible for planets such as the earth, and beings such as we, to exist.
One second after the big bang, the universe would have expanded enough to bring its temperature down to about ten billion degrees Celsius. This is about a thousand times the temperature at the center of the sun, but temperatures as high as this are reached in H-bomb explosions. At this time the universe would have contained mostly photons, electrons, and neutrinos, and their antiparticles, together with some protons and neutrons. These particles would have had so much energy that when they collided, they would have produced many different particle/antiparticle pairs. For instance, colliding photons might produce an electron and its antiparticle, the positron. Some of these newly produced particles would collide with an antiparticle sibling and be annihilated. Any time an electron meets up with a positron, both will be annihilated, but the reverse process is not so easy: in order for two massless particles such as photons to create a particle/antiparticle pair such as an electron and a positron, the colliding massless particles must have a certain minimum energy. That is because an electron and positron have mass, and this newly created mass must come from the energy of the colliding particles. As the universe continued to expand and the temperature to drop, collisions having enough energy to create electron/positron pairs would occur less often than the rate at which the pairs were being destroyed by annihilation. So eventually most of the electrons and positrons would have annihilated each other to produce more photons, leaving only relatively few electrons. The neutrinos and antineutrinos, on the other hand, interact with themselves and with other particles only very weakly, so they would not annihilate each other nearly as quickly. They should still be around today.
If we could observe them, it would provide a good test of this picture of a very hot early stage of the universe, but unfortunately, after billions of years their energies would now be too low for us to observe them directly (though we might be able to detect them indirectly).
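The "certain minimum energy" for pair creation can be made concrete with Einstein's E = mc²: two colliding photons can produce an electron/positron pair only if their combined energy at least equals the rest energy of the pair, 2 × mₑc². A minimal sketch, using standard physical constants that are not given in the text:

```python
# Minimum combined photon energy needed to create an electron/positron pair.
# The newly created mass must come from the photons' energy: E >= 2 * m_e * c^2.
M_ELECTRON = 9.109e-31   # electron rest mass, kg
C = 2.998e8              # speed of light, m/s
EV = 1.602e-19           # joules per electron-volt

def pair_threshold_mev():
    """Rest energy of an electron/positron pair, in MeV."""
    joules = 2 * M_ELECTRON * C**2
    return joules / EV / 1e6

print(round(pair_threshold_mev(), 3))  # ~1.022 MeV
```

As the universe cooled, the typical photon energy fell below this roughly 1 MeV threshold, which is why pair creation stopped keeping up with annihilation.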
Photon/Electron/Positron Equilibrium.
In the early universe, there was a balance between pairs of electrons and positrons colliding to create photons, and the reverse process. As the temperature of the universe dropped, the balance was altered to favor photon creation. Eventually most electrons and positrons in the universe annihilated each other, leaving only the relatively few electrons present today.
About one hundred seconds after the big bang, the temperature of the universe would have fallen to one billion degrees, the temperature inside the hottest stars. At this temperature, a force called the strong force would have played an important role. The strong force, which we will discuss in more detail in Chapter 11, is a short-range attractive force that can cause protons and neutrons to bind to each other, forming nuclei. At high enough temperatures, protons and neutrons have enough energy of motion (see Chapter 5) that they can emerge from their collisions still free and independent. But at one billion degrees, they would no longer have had sufficient energy to overcome the attraction of the strong force, and they would have started to combine to produce the nuclei of atoms of deuterium (heavy hydrogen), which contain one proton and one neutron. The deuterium nuclei would then have combined with more protons and neutrons to make helium nuclei, which contain two protons and two neutrons, and also small amounts of a couple of heavier elements, lithium and beryllium. One can calculate that in the hot big bang model, about a quarter of the protons and neutrons would have been converted into helium nuclei, along with a small amount of heavy hydrogen and other elements. The remaining neutrons would have decayed into protons, which are the nuclei of ordinary hydrogen atoms.
This picture of a hot early stage of the universe was first put forward by the scientist George Gamow (see page 61) in a famous paper written in 1948 with a student of his, Ralph Alpher. Gamow had quite a sense of humor-he persuaded the nuclear scientist Hans Bethe to add his name to the paper to make the list of authors Alpher, Bethe, Gamow, like the first three letters of the Greek alphabet, alpha, beta, gamma, and particularly appropriate for a paper on the beginning of the universe! In this paper they made the remarkable prediction that radiation (in the form of photons) from the very hot early stages of the universe should still be around today, but with its temperature reduced to only a few degrees above absolute zero. (Absolute zero, -273 degrees Celsius, is the temperature at which substances contain no heat energy, and is thus the lowest possible temperature.) It was this microwave radiation that Penzias and Wilson found in 1965. At the time that Alpher, Bethe, and Gamow wrote their paper, not much was known about the nuclear reactions of protons and neutrons. Predictions made for the proportions of various elements in the early universe were therefore rather inaccurate, but these calculations have been repeated in the light of better knowledge and now agree very well with what we observe. It is, moreover, very difficult to explain in any other way why about one-quarter of the mass of the universe is in the form of helium.
But there are problems with this picture. In the hot big bang model there was not enough time in the early universe for heat to have flowed from one region to another. This means that the initial state of the universe would have to have had exactly the same temperature everywhere in order to account for the fact that the microwave background has the same temperature in every direction we look. Moreover, the initial rate of expansion would have had to be chosen very precisely for the rate of expansion still to be so close to the critical rate needed to avoid collapse. It would be very difficult to explain why the universe should have begun in just this way, except as the act of a God who intended to create beings like us. In an attempt to find a model of the universe in which many different initial configurations could have evolved to something like the present universe, a scientist at the Massachusetts Institute of Technology, Alan Guth, suggested that the early universe might have gone through a period of very rapid expansion. This expansion is said to be inflationary, meaning that the universe at one time expanded at an increasing rate. According to Guth, the radius of the universe increased by a million million million million million (1 with thirty zeros after it) times in only a tiny fraction of a second. Any irregularities in the universe would have been smoothed out by this expansion, just as the wrinkles in a balloon are smoothed away when you blow it up. In this way, inflation explains how the present smooth and uniform state of the universe could have evolved from many different nonuniform initial states. So we are therefore fairly confident that we have the right picture, at least going back to about one-billion-trillion-trillionth of a second after the big bang.
After all this initial turmoil, within only a few hours of the big bang, the production of helium and some other elements such as lithium would have stopped. And after that, for the next million years or so, the universe would have just continued expanding, without anything much happening. Eventually, once the temperature had dropped to a few thousand degrees and electrons and nuclei no longer had enough energy of motion to overcome the electromagnetic attraction between them, they would have started combining to form atoms. The universe as a whole would have continued expanding and cooling, but in regions that were slightly denser than average, this expansion would have been slowed down by the extra gravitational attraction.
This attraction would eventually stop expansion in some regions and cause them to start to collapse. As they were collapsing, the gravitational pull of matter outside these regions might start them rotating slightly. As the collapsing region got smaller, it would spin faster-just as skaters spinning on ice spin faster as they draw in their arms. Eventually, when the region got small enough, it would be spinning fast enough to balance the attraction of gravity, and in this way disklike rotating galaxies were born. Other regions that did not happen to pick up a rotation would become oval objects called elliptical galaxies. In these, the region would stop collapsing because individual parts of the galaxy would be orbiting stably around its center, but the galaxy would have no overall rotation.
As time went on, the hydrogen and helium gas in the galaxies would break up into smaller clouds that would collapse under their own gravity. As these contracted and the atoms within them collided with one another, the temperature of the gas would increase, until eventually it became hot enough to start nuclear fusion reactions. These would convert the hydrogen into more helium. The heat released in this reaction, which is like a controlled hydrogen bomb explosion, is what makes a star shine. This additional heat also increases the pressure of the gas until it is sufficient to balance the gravitational attraction, and the gas stops contracting. In this manner, these clouds coalesce into stars like our sun, burning hydrogen into helium and radiating the resulting energy as heat and light. It is a bit like a balloon-there is a balance between the pressure of the air inside, which is trying to make the balloon expand, and the tension in the rubber, which is trying to make the balloon smaller.
Once clouds of hot gas coalesce into stars, the stars will remain stable for a long time, with heat from the nuclear reactions balancing the gravitational attraction. Eventually, however, the star will run out of its hydrogen and other nuclear fuels. Paradoxically, the more fuel a star starts off with, the sooner it runs out. This is because the more massive the star is, the hotter it needs to be to balance its gravitational attraction. And the hotter the star, the faster the nuclear fusion reaction and the sooner it will use up its fuel. Our sun has probably got enough fuel to last another five billion years or so, but more massive stars can use up their fuel in as little as one hundred million years, much less than the age of the universe.
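The "more fuel, shorter life" paradox can be put in rough numbers using the empirical mass-luminosity relation of stellar astrophysics (roughly L ∝ M³·⁵, an assumption not stated in the text): fuel scales with mass M, burn rate with luminosity L, so lifetime scales as M/L, about M⁻²·⁵. A minimal order-of-magnitude sketch:

```python
# Rough stellar lifetime scaling: fuel ~ M, burn rate ~ L ~ M**3.5,
# so lifetime ~ M / L ~ M**-2.5, in units of the sun's lifetime.
# The exponent 3.5 is an empirical approximation, not an exact law.
SUN_LIFETIME_YR = 1e10  # order-of-magnitude main-sequence lifetime of the sun

def lifetime_years(mass_in_suns):
    return SUN_LIFETIME_YR * mass_in_suns ** -2.5

# A star of 10 solar masses burns out roughly 300 times faster than the sun,
# consistent with the ~hundred-million-year figure in the text.
print(f"{lifetime_years(10):.1e}")
```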
When a star runs out of fuel, it starts to cool off and gravity takes over, causing it to contract. This contraction squeezes the atoms together and causes the star to become hotter again. As the star heats up further, it would start to convert helium into heavier elements such as carbon or oxygen. This, however, would not release much more energy, so a crisis would occur. What happens next is not completely clear, but it seems likely that the central regions of the star would collapse to a very dense state, such as a black hole. The term "black hole" is of very recent origin. It was coined in 1969 by the American scientist John Wheeler as a graphic description of an idea that goes back at least two hundred years, to a time when there were two theories about light: one, which Newton favored, was that it was composed of particles, and the other was that it was made of waves. We now know that actually, both theories are correct. As we will see in Chapter 9, by the wave/particle duality of quantum mechanics, light can be regarded as both a wave and a particle. The descriptors wave and particle are concepts humans created, not necessarily concepts that nature is obliged to respect by making all phenomena fall into one category or the other!
Under the theory that light is made up of waves, it was not clear how it would respond to gravity. But if we think of light as being composed of particles, we might expect those particles to be affected by gravity in the same way that cannonballs, rockets, and planets are. In particular, if you shoot a cannonball upward from the surface of the earth-or a star-like the rocket on page 58, it will eventually stop and then fall back unless the speed with which it starts upward exceeds a certain value. This minimum speed is called the escape velocity. The escape velocity of a star depends on the strength of its gravitational pull. The more massive the star, the greater its escape velocity. At first people thought that particles of light traveled infinitely fast, so gravity would not have been able to slow them down, but the discovery by Roemer that light travels at a finite speed meant that gravity might have an important effect: if the star is massive enough, the speed of light will be less than the star's escape velocity, and all light emitted by the star will fall back into it. On this assumption, in 1783 a Cambridge don, John Michell, published a paper in the Philosophical Transactions of the Royal Society of London in which he pointed out that a star that was sufficiently massive and compact would have such a strong gravitational field that light could not escape: any light emitted from the surface of the star would be dragged back by the star's gravitational attraction before it could get very far. Such objects are what we now call black holes, because that is what they are: black voids in space.
Cannonballs Above and Below Escape Velocity.
What goes up need not come down-if it is shot upward faster than the escape velocity.
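The escape velocity described above follows from a Newtonian energy balance: v = √(2GM/r), where M and r are the body's mass and radius. A minimal sketch, using standard constants and textbook values for the earth and sun that are not given in the text:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Minimum launch speed to escape a body of the given mass and radius."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Earth: ~11.2 km/s; the sun: ~618 km/s. The more massive and compact
# the body, the greater the escape velocity.
print(round(escape_velocity(5.972e24, 6.371e6) / 1000, 1))  # ~11.2
print(round(escape_velocity(1.989e30, 6.957e8) / 1000))     # ~618
```

Michell's argument amounts to asking when this formula exceeds the speed of light.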
A similar suggestion was made a few years later by a French scientist, the Marquis de Laplace, apparently independently of Michell. Interestingly, Laplace included it only in the first and second editions of his book The System of the World, leaving it out of later editions. Perhaps he decided it was a crazy idea-the particle theory of light went out of favor during the nineteenth century because it seemed that everything could be explained using the wave theory. In fact, it is not really consistent to treat light like cannonballs in Newton's theory of gravity because the speed of light is fixed. A cannonball fired upward from the earth will be slowed down by gravity and will eventually stop and fall back; a photon, however, must continue upward at a constant speed. A consistent theory of how gravity affects light did not come along until Einstein proposed general relativity in 1915, and the problem of understanding what would happen to a massive star, according to general relativity, was first solved by a young American, Robert Oppenheimer, in 1939.
The picture that we now have from Oppenheimer's work is as follows. The gravitational field of the star changes the paths of passing light rays in space-time from what they would have been had the star not been present. This is the effect that is seen in the bending of light from distant stars observed during an eclipse of the sun. The paths followed in space and time by light are bent slightly inward near the surface of the star. As the star contracts, it becomes denser, so the gravitational field at its surface gets stronger. (You can think of the gravitational field as emanating from a point at the center of the star; as the star shrinks, points on its surface get closer to the center, so they feel a stronger field.) The stronger field makes light paths near the surface bend inward more. Eventually, when the star has shrunk to a certain critical radius, the gravitational field at the surface becomes so strong that the light paths are bent inward to the point that light can no longer escape.
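The "certain critical radius" is what general relativity calls the Schwarzschild radius, rₛ = 2GM/c² (a standard formula, not derived in the text; it happens to coincide with the radius at which the Newtonian escape velocity reaches the speed of light). A minimal sketch:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius_m(mass_kg):
    """Critical radius below which light can no longer escape: 2*G*M/c**2."""
    return 2 * G * mass_kg / C**2

SUN_MASS = 1.989e30  # kg
# A star of the sun's mass becomes a black hole only if squeezed
# into a sphere roughly 3 km in radius:
print(round(schwarzschild_radius_m(SUN_MASS) / 1000, 1))  # ~3.0 km
```

The radius scales linearly with mass, which is why the million-solar-mass hole at the center of our galaxy has a correspondingly enormous horizon.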
According to the theory of relativity, nothing can travel faster than light. Thus if light cannot escape, neither can anything else; everything is dragged back by the gravitational field. The collapsed star has formed a region of space-time around it from which it is not possible to escape to reach a distant observer. This region is the black hole. The outer boundary of a black hole is called the event horizon. Today, thanks to the Hubble Space Telescope and other telescopes that focus on X-rays and gamma rays rather than visible light, we know that black holes are common phenomena-much more common than people first thought. One satellite discovered fifteen hundred black holes in just one small area of sky. We have also discovered a black hole in the center of our galaxy, with a mass more than one million times that of our sun. That supermassive black hole has a star orbiting it at about 2 percent the speed of light, faster than the average speed of an electron orbiting the nucleus in an atom!
In order to understand what you would see if you were watching a massive star collapse to form a black hole, it is necessary to remember that in the theory of relativity there is no absolute time. In other words, each observer has his own measure of time. The passage of time for someone on a star's surface will be different from that for someone at a distance, because the gravitational field is stronger on the star's surface.
Suppose an intrepid astronaut is on the surface of a collapsing star and stays on the surface as it collapses inward. At some time on his watch-say, 11:00-the star would shrink below the critical radius at which the gravitational field becomes so strong that nothing can escape. Now suppose his instructions are to send a signal every second, according to his watch, to a spaceship above, which orbits at some fixed distance from the center of the star. He begins transmitting at 10:59:58, that is, two seconds before 11:00. What will his companions on the spaceship record?
We learned from our earlier thought experiment aboard the rocket ship that gravity slows time, and the stronger the gravity, the greater the effect. The astronaut on the star is in a stronger gravitational field than his companions in orbit, so what to him is one second will be more than one second on their clocks. And as he rides the star's collapse inward, the field he experiences will grow stronger and stronger, so the interval between his signals will appear successively longer to those on the spaceship. This stretching of time would be very small before 10:59:59, so the orbiting astronauts would have to wait only very slightly more than a second between the astronaut's 10:59:58 signal and the one that he sent when his watch read 10:59:59. But they would have to wait forever for the 11:00 signal.
Everything that happens on the surface of the star between 10:59:59 and 11:00 (by the astronaut's watch) would be spread out over an infinite period of time, as seen from the spaceship. As 11:00 approached, the time interval between the arrival of successive crests and troughs of any light from the star would get successively longer, just as the interval between signals from the astronaut does. Since the frequency of light is a measure of the number of its crests and troughs per second, to those on the spaceship the frequency of the light from the star will get successively lower. Thus its light would appear redder and redder (and fainter and fainter). Eventually, the star would be so dim that it could no longer be seen from the spaceship: all that would be left would be a black hole in space. It would, however, continue to exert the same gravitational force on the spaceship, which would continue to orbit.
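A rough feel for this stretching comes from the standard formula for a clock held fixed at radius r outside a non-rotating mass: its ticks appear lengthened by 1/√(1 − rₛ/r), where rₛ is the critical radius. (This static-clock factor is textbook general relativity, not given here; the astronaut riding the collapse is a more subtle case, but the divergence near the critical radius is the same qualitative effect.)

```python
import math

def signal_interval_seen(r_over_rs):
    """Apparent length (to a distant observer) of a one-second tick emitted
    by a clock held fixed at r = r_over_rs times the critical radius.
    Simplified static-clock factor; the infalling case differs in detail."""
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

for r in (10.0, 2.0, 1.01, 1.0001):
    print(f"r = {r} r_s: 1 s appears as {signal_interval_seen(r):.2f} s")
# The interval grows without bound as the clock approaches the critical
# radius, which is why the 11:00 signal never arrives.
```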
This scenario is not entirely realistic, however, because of the following problem. Gravity gets weaker the farther you are from the star, so the gravitational force on our intrepid astronaut's feet would always be greater than the force on his head. This difference in the forces would stretch him out like spaghetti or tear him apart before the star had contracted to the critical radius at which the event horizon formed! However, we believe that there are much larger objects in the universe, such as the central regions of galaxies, which can also undergo gravitational collapse to produce black holes, like the supermassive black hole at the center of our galaxy. An astronaut on one of these would not be torn apart before the black hole formed. He would not, in fact, feel anything special as he reached the critical radius, and he could pass the point of no return without noticing it-though to those on the outside, his signals would again become further and further apart, and eventually stop. And within just a few hours (as measured by the astronaut), as the region continued to collapse, the difference in the gravitational forces on his head and his feet would become so strong that again it would tear him apart.
Tidal Forces.
Since gravity weakens with distance, the earth pulls on your head with less force than it pulls on your feet, which are a meter or two closer to the earth's center. The difference is so tiny we cannot feel it, but an astronaut near the surface of a black hole would be literally torn apart.
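The head-to-feet difference is the tidal acceleration; for a small separation d at distance r from a mass M it is approximately 2GMd/r³ (a standard Newtonian estimate, not computed in the text). A minimal sketch comparing a person on earth with one near a stellar-mass black hole:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_accel(mass_kg, r_m, separation_m=2.0):
    """Approximate head-to-feet difference in gravitational acceleration,
    for a body of height separation_m at distance r_m from mass mass_kg."""
    return 2 * G * mass_kg * separation_m / r_m**3

EARTH_MASS = 5.972e24  # kg
SUN_MASS = 1.989e30    # kg
# On the earth's surface the difference is microscopic:
print(f"{tidal_accel(EARTH_MASS, 6.371e6):.1e}")   # millionths of a m/s^2
# Thirty kilometers from a 10-solar-mass black hole it is catastrophic:
print(f"{tidal_accel(10 * SUN_MASS, 30_000):.1e}")
```

Because the tidal term falls as 1/r³ while the horizon grows with mass, the pull across a body at the horizon of a supermassive hole is far gentler, which is why the astronaut in the text crosses it unharmed.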
Sometimes, when a very massive star collapses, the outer regions of the star may get blown off in a tremendous explosion called a supernova. A supernova explosion is so huge that it can give off more light than all the other stars in its galaxy combined. One example of this is the supernova whose remnants we see as the Crab Nebula. The Chinese recorded it in 1054. Though the star that exploded was five thousand light-years away, it was visible to the naked eye for months and shone so brightly that you could see it even during the day and read by it at night. A supernova five hundred light-years away-one-tenth as far-would be one hundred times brighter and could literally turn night into day. To understand the violence of such an explosion, just consider that its light would rival that of the sun, even though it is tens of millions of times farther away. (Recall that our sun resides at the neighborly distance of eight light-minutes.) If a supernova were to occur close enough, it could leave the earth intact but still emit enough radiation to kill all living things. In fact, it was recently proposed that a die-off of marine creatures that occurred at the interface of the Pleistocene and Pliocene epochs about two million years ago was caused by cosmic ray radiation from a supernova in a nearby cluster of stars called the Scorpius-Centaurus association. Some scientists believe that advanced life is likely to evolve only in regions of galaxies in which there are not too many stars-"zones of life"-because in denser regions phenomena such as supernovas would be common enough to regularly snuff out any evolutionary beginnings. On the average, hundreds of thousands of supernovas explode somewhere in the universe each day. A supernova happens in any particular galaxy about once a century. But that's just the average. Unfortunately-for astronomers at least-the last supernova recorded in the Milky Way occurred in 1604, before the invention of the telescope.
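The "one-tenth as far, one hundred times brighter" step is just the inverse-square law: the apparent brightness of a source falls off as the square of its distance. A minimal check of the arithmetic:

```python
def relative_brightness(distance_ratio):
    """Brightness of the same source moved to distance_ratio times the
    original distance, relative to its original apparent brightness."""
    return 1.0 / distance_ratio ** 2

# The Crab supernova at 5,000 light-years vs. the same explosion at 500:
print(round(relative_brightness(500 / 5000)))  # 100 times brighter
```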
The leading candidate for the next supernova explosion in our galaxy is a star called Rho Cassiopeiae. Fortunately, it is a safe and comfortable ten thousand light-years from us. It is in a class of stars known as yellow hypergiants, one of only seven known yellow hypergiants in the Milky Way. An international team of astronomers began to study this star in 1993. In the next few years they observed it undergoing periodic temperature fluctuations of a few hundred degrees. Then in the summer of 2000, its temperature suddenly plummeted from around 7,000 degrees to 4,000 degrees Celsius. During that time, they also detected titanium oxide in the star's atmosphere, which they believe is part of an outer layer thrown off from the star by a massive shock wave.
In a supernova, some of the heavier elements produced near the end of the star's life are flung back into the galaxy and provide some of the raw material for the next generation of stars. Our own sun contains about 2 percent of these heavier elements. It is a second- or third-generation star, formed some five billion years ago out of a cloud of rotating gas containing the debris of earlier supernovas. Most of the gas in that cloud went to form the sun or got blasted away, but small amounts of the heavier elements collected together to form the bodies that now orbit the sun as planets like the earth. The gold in our jewelry and the uranium in our nuclear reactors are both remnants of the supernovas that occurred before our solar system was born!
When the earth was newly condensed, it was very hot and without an atmosphere. In the course of time, it cooled and acquired an atmosphere from the emission of gases from the rocks. This early atmosphere was not one in which we could have survived. It contained no oxygen, but it did contain a lot of other gases that are poisonous to us, such as hydrogen sulfide (the gas that gives rotten eggs their smell). There are, however, other primitive forms of life that can flourish under such conditions. It is thought that they developed in the oceans, possibly as a result of chance combinations of atoms into large structures, called macromolecules, that were capable of assembling other atoms in the ocean into similar structures. They would thus have reproduced themselves and multiplied. In some cases there would be errors in the reproduction. Mostly these errors would have been such that the new macromolecule could not reproduce itself and eventually would have been destroyed. However, a few of the errors would have produced new macromolecules that were even better at reproducing themselves. They would have therefore had an advantage and would have tended to replace the original macromolecules. In this way a process of evolution was started that led to the development of more and more complicated, self-reproducing organisms. The first primitive forms of life consumed various materials, including hydrogen sulfide, and released oxygen. This gradually changed the atmosphere to the composition that it has today, and allowed the development of higher forms of life such as fish, reptiles, mammals, and ultimately the human race.
The twentieth century saw man's view of the universe transformed: we realized the insignificance of our own planet in the vastness of the universe, and we discovered that time and space were curved and inseparable, that the universe was expanding, and that it had a beginning in time.
The picture of a universe that started off very hot and cooled as it expanded was based on Einstein's theory of gravity, general relativity. That it is in agreement with all the observational evidence that we have today is a great triumph for that theory. Yet because mathematics cannot really handle infinite numbers, by predicting that the universe began with the big bang, a time when the density of the universe and the curvature of space-time would have been infinite, the theory of general relativity predicts that there is a point in the universe where the theory itself breaks down, or fails. Such a point is an example of what mathematicians call a singularity. When a theory predicts singularities such as infinite density and curvature, it is a sign that the theory must somehow be modified. General relativity is an incomplete theory because it cannot tell us how the universe started off.
In addition to general relativity, the twentieth century also spawned another great partial theory of nature, quantum mechanics. That theory deals with phenomena that occur on very small scales. Our picture of the big bang tells us that there must have been a time in the very early universe when the universe was so small that, even when studying its large-scale structure, it was no longer possible to ignore the small-scale effects of quantum mechanics. We will see in the next chapter that our greatest hope for obtaining a complete understanding of the universe from beginning to end arises from combining these two partial theories into a single quantum theory of gravity, a theory in which the ordinary laws of science hold everywhere, including at the beginning of time, without the need for there to be any singularities.
9.
QUANTUM GRAVITY.
THE SUCCESS OF SCIENTIFIC THEORIES, particularly Newton's theory of gravity, led the Marquis de Laplace at the beginning of the nineteenth century to argue that the universe was completely deterministic. Laplace believed that there should be a set of scientific laws that would allow us-at least in principle-to predict everything that would happen in the universe. The only input these laws would need is the complete state of the universe at any one time. This is called an initial condition or a boundary condition. (A boundary can mean a boundary in space or time; a boundary condition in space is the state of the universe at its outer boundary-if it has one.) Based on a complete set of laws and the appropriate initial or boundary condition, Laplace believed, we should be able to calculate the complete state of the universe at any time.
The requirement of initial conditions is probably intuitively obvious: different states of being at present will obviously lead to different future states. The need for boundary conditions in space is a little more subtle, but the principle is the same. The equations on which physical theories are based can generally have very different solutions, and you must rely on the initial or boundary conditions to decide which solutions apply. It's a little like saying that your bank account has large amounts going in and out of it. Whether you end up bankrupt or rich depends not only on the sums paid in and out but also on the boundary or initial condition of how much was in the account to start with.
If Laplace were right, then, given the state of the universe at the present, these laws would tell us the state of the universe in both the future and the past. For example, given the positions and speeds of the sun and the planets, we can use Newton's laws to calculate the state of the solar system at any later or earlier time. Determinism seems fairly obvious in the case of the planets-after all, astronomers are very accurate in their predictions of events such as eclipses. But Laplace went further to assume that there were similar laws governing everything else, including human behavior.
Is it really possible for scientists to calculate what all our actions will be in the future? A glass of water contains more than 10²⁴ molecules (a 1 followed by twenty-four zeros). In practice we can never hope to know the state of each of these molecules, much less the complete state of the universe or even of our bodies. Yet to say that the universe is deterministic means that even if we don't have the brainpower to do the calculation, our futures are nevertheless predetermined.
This doctrine of scientific determinism was strongly resisted by many people, who felt that it infringed God's freedom to make the world run as He saw fit. But it remained the standard assumption of science until the early years of the twentieth century. One of the first indications that this belief would have to be abandoned came when the British scientists Lord Rayleigh and Sir James Jeans calculated the amount of blackbody radiation that a hot object such as a star must radiate. (As noted in Chapter 7, any material body, when heated, will give off blackbody radiation.) According to the laws we believed at the time, a hot body ought to give off electromagnetic waves equally at all frequencies. If this were true, then it would radiate an equal amount of energy in every color of the spectrum of visible light, and for all frequencies of microwaves, radio waves, X-rays, and so on. Recall that the frequency of a wave is the number of times per second that the wave oscillates up and down, that is, the number of waves per second. Mathematically, for a hot body to give off waves equally at all frequencies means that a hot body should radiate the same amount of energy in waves with frequencies between zero and one million waves per second as it does in waves with frequencies between one million and two million waves per second, two million and three million waves per second, and so forth, going on forever. Let's say that one unit of energy is radiated in waves with frequencies between zero and one million waves per second, and in waves with frequencies between one million and two million waves per second, and so on. The total amount of energy radiated in all frequencies would then be the sum 1 plus 1 plus 1 plus ... going on forever. Since the number of waves per second in a wave is unlimited, the sum of energies is an unending sum. According to this reasoning, the total energy radiated should be infinite.
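The runaway sum can be made concrete with a short numerical sketch (this code is not from the book; the energy "units" and the temperature parameter are toy assumptions chosen for illustration):

```python
import math

def classical_energy(num_bands):
    # Classical picture: one unit of energy in every frequency band,
    # so the total grows without limit as more bands are counted.
    return sum(1.0 for _ in range(num_bands))

def planck_energy(num_bands, kT_in_units=5.0):
    # Planck's picture (toy form): a band whose quantum costs n units of
    # energy is exponentially unlikely to be excited when n is much larger
    # than the thermal energy, so the sum settles to a finite value.
    total = 0.0
    for n in range(1, num_bands + 1):
        x = n / kT_in_units
        if x > 50:  # contributions beyond this point are negligibly small
            break
        total += n / (math.exp(x) - 1.0)
    return total

print(classical_energy(10**6))  # 1000000.0, and still climbing with more bands
print(planck_energy(10**6))     # a finite number, unchanged by adding bands
```

Counting a million bands instead of a thousand changes the classical total a thousandfold, but leaves the Planck-style total untouched: the high-frequency terms simply stop contributing.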
In order to avoid this obviously ridiculous result, the German scientist Max Planck suggested in 1900 that light, X-rays, and other electromagnetic waves could be given off only in certain discrete packets, which he called quanta. Today, as mentioned in Chapter 8, we call a quantum of light a photon. The higher the frequency of light, the greater its energy content. Therefore, though photons of any given color or frequency are all identical, Planck's theory states that photons of different frequencies are different in that they carry different amounts of energy. This means that in quantum theory the faintest light of any given color-the light carried by a single photon-has an energy content that depends upon its color. For example, since violet light has twice the frequency of red light, one quantum of violet light has twice the energy content of one quantum of red light. Thus the smallest possible bit of violet light energy is twice as large as the smallest possible bit of red light energy.
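Planck's relation between frequency and energy is easy to check numerically (an illustrative sketch; the red and violet frequencies below are rough round numbers assumed for the purpose, with violet taken at twice the frequency of red as in the text):

```python
# Planck's relation: the energy of one quantum (photon) is E = h * f.
h = 6.626e-34      # Planck's constant, in joule-seconds
f_red = 4.0e14     # red light: roughly 400 trillion oscillations per second
f_violet = 8.0e14  # violet light: roughly twice the frequency of red

E_red = h * f_red        # smallest possible packet of red-light energy
E_violet = h * f_violet  # smallest possible packet of violet-light energy

print(E_violet / E_red)  # 2.0: one violet quantum carries twice the energy
```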
How does this solve the blackbody problem? The smallest amount of electromagnetic energy a blackbody can emit in any given frequency is that carried by one photon of that frequency. The energy of a photon is greater at higher frequencies. Thus the smallest amount of energy a blackbody can emit is higher at higher frequencies. At high enough frequencies, the amount of energy in even a single quantum will be more than a body has available, in which case no light will be emitted, ending the previously unending sum. Thus in Planck's theory, the radiation at high frequencies would be reduced, so the rate at which the body lost energy would be finite, solving the blackbody problem.
The quantum hypothesis explained the observed rate of emission of radiation from hot bodies very well, but its implications for determinism were not realized until 1926, when another German scientist, Werner Heisenberg, formulated his famous uncertainty principle.
Faintest Possible Light.
Faint light means fewer photons. The faintest possible light of any color is the light carried by a single photon.
The uncertainty principle tells us that, contrary to Laplace's belief, nature does impose limits on our ability to predict the future using scientific law. This is because, in order to predict the future position and velocity of a particle, one has to be able to measure its initial state-that is, its present position and its velocity-accurately. The obvious way to do this is to shine light on the particle. Some of the waves of light will be scattered by the particle. These can be detected by the observer and will indicate the particle's position. However, light of a given wavelength has only limited sensitivity: you will not be able to determine the position of the particle more accurately than the distance between the wave crests of the light. Thus, in order to measure the position of the particle precisely, it is necessary to use light of a short wavelength, that is, of a high frequency. By Planck's quantum hypothesis, though, you cannot use an arbitrarily small amount of light: you have to use at least one quantum, whose energy is higher at higher frequencies. Thus, the more accurately you wish to measure the position of a particle, the more energetic the quantum of light you must shoot at it.
According to quantum theory, even one quantum of light will disturb the particle: it will change its velocity in a way that cannot be predicted. And the more energetic the quantum of light you use, the greater the likely disturbance. That means that for more precise measurements of position, when you will have to employ a more energetic quantum, the velocity of the particle will be disturbed by a larger amount. So the more accurately you try to measure the position of the particle, the less accurately you can measure its speed, and vice versa. Heisenberg showed that the uncertainty in the position of the particle times the uncertainty in its velocity times the mass of the particle can never be smaller than a certain fixed quantity. That means, for instance, if you halve the uncertainty in position, you must double the uncertainty in velocity, and vice versa. Nature forever constrains us to making this trade-off.
How bad is this trade-off? That depends on the numerical value of the "certain fixed quantity" we mentioned above. That quantity is known as Planck's constant, and it is a very tiny number. Because Planck's constant is so tiny, the effects of the trade-off, and of quantum theory in general, are, like the effects of relativity, not directly noticeable in our everyday lives. (Though quantum theory does affect our lives-as the basis of such fields as, say, modern electronics.) For example, if we pinpoint the position of a Ping-Pong ball with a mass of one gram to within one centimeter in any direction, then we can pinpoint its speed to an accuracy far greater than we would ever need to know. But if we measure the position of an electron to an accuracy of roughly the confines of an atom, then we cannot know its speed more precisely than about plus or minus one thousand kilometers per second, which is not very precise at all.
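The two examples above can be checked with a few lines of arithmetic (an illustrative sketch, not from the book; it uses the modern form of the bound, uncertainty in position times uncertainty in velocity times mass no smaller than about ħ/2):

```python
# Heisenberg's bound: (position uncertainty) x (velocity uncertainty) x (mass)
# can never be smaller than about hbar / 2.
hbar = 1.055e-34  # reduced Planck constant, in joule-seconds

def min_velocity_uncertainty(mass_kg, position_uncertainty_m):
    # Smallest possible spread in velocity once the position is pinned down.
    return hbar / (2 * mass_kg * position_uncertainty_m)

# Ping-Pong ball: one gram, pinned down to one centimeter.
dv_ball = min_velocity_uncertainty(1e-3, 1e-2)

# Electron: pinned down to roughly the size of an atom (about 1e-10 meters).
dv_electron = min_velocity_uncertainty(9.11e-31, 1e-10)

print(dv_ball)      # about 5e-30 m/s: utterly unnoticeable
print(dv_electron)  # several hundred kilometers per second
```

The same formula gives an absurdly small blur for the ball and an enormous one for the electron, which is why the trade-off matters only on atomic scales.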
The limit dictated by the uncertainty principle does not depend on the way in which you try to measure the position or velocity of the particle, or on the type of particle. Heisenberg's uncertainty principle is a fundamental, inescapable property of the world, and it has had profound implications for the way in which we view the world. Even after more than seventy years, these implications have not been fully appreciated by many philosophers and are still the subject of much controversy. The uncertainty principle signaled an end to Laplace's dream of a theory of science, a model of the universe that would be completely deterministic. We certainly cannot predict future events exactly if we cannot even measure the present state of the universe precisely!
We could still imagine that there is a set of laws that determine events completely for some supernatural being who, unlike us, could observe the present state of the universe without disturbing it. However, such models of the universe are not of much interest to us ordinary mortals. It seems better to employ the principle of economy known as Occam's razor and cut out all the features of the theory that cannot be observed. This approach led Heisenberg, Erwin Schrödinger, and Paul Dirac in the 1920s to reformulate Newton's mechanics into a new theory called quantum mechanics, based on the uncertainty principle. In this theory, particles no longer had separate, well-defined positions and velocities. Instead, they had a quantum state, which was a combination of position and velocity defined only within the limits of the uncertainty principle.
One of the revolutionary properties of quantum mechanics is that it does not predict a single definite result for an observation. Instead, it predicts a number of different possible outcomes and tells us how likely each of these is. That is to say, if you made the same measurement on a large number of similar systems, each of which started off in the same way, you would find that the result of the measurement would be A in a certain number of cases, B in a different number, and so on. You could predict the approximate number of times that the result would be A or B, but you could not predict the specific result of an individual measurement.
For instance, imagine you toss a dart toward a dartboard. According to classical theories-that is, the old, nonquantum theories-the dart will either hit the bull's-eye or it will miss it. And if you know the velocity of the dart when you toss it, the pull of gravity, and other such factors, you'll be able to calculate whether it will hit or miss. But quantum theory tells us this is wrong, that you cannot say it for certain. Instead, according to quantum theory there is a certain probability that the dart will hit the bull's-eye, and also a nonzero probability that it will land in any other given area of the board. Given an object as large as a dart, if the classical theory-in this case Newton's laws-says the dart will hit the bull's-eye, then you can be safe in assuming it will. At least, the chances that it won't (according to quantum theory) are so small that if you went on tossing the dart in exactly the same manner until the end of the universe, it is probable that you would still never observe the dart missing its target. But on the atomic scale, matters are different. A dart made of a single atom might have a 90 percent probability of hitting the bull's-eye, with a 5 percent chance of hitting elsewhere on the board, and another 5 percent chance of missing it completely. You cannot say in advance which of these it will be. All you can say is that if you repeat the experiment many times, you can expect that, on average, ninety times out of each hundred times you repeat the experiment, the dart will hit the bull's-eye.
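The statistical character of the prediction can be illustrated with a toy simulation (hypothetical code; the 90/5/5 probabilities are simply the figures quoted above for the imagined single-atom dart):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# The three possible outcomes and their quantum probabilities.
outcomes = ["bullseye", "elsewhere on board", "miss"]
probs = [0.90, 0.05, 0.05]

counts = {o: 0 for o in outcomes}
trials = 100_000
for _ in range(trials):
    # Each individual throw is unpredictable: only the odds are fixed.
    counts[random.choices(outcomes, weights=probs)[0]] += 1

# No single throw can be foretold, but the long-run frequency of
# bull's-eyes approaches the predicted 90 percent.
print(counts["bullseye"] / trials)  # close to 0.90
```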
Quantum mechanics therefore introduces an unavoidable element of unpredictability or randomness into science. Einstein objected to this very strongly, despite the important role he had played in the development of these ideas. In fact, he was awarded the Nobel Prize for his contribution to quantum theory. Nevertheless, he never accepted that the universe was governed by chance; his feelings were summed up in his famous statement "God does not play dice."
Smeared Quantum Position.
According to quantum theory, one cannot pinpoint an object's position and velocity with infinite precision, nor can one predict exactly the course of future events.
The test of a scientific theory, as we have said, is its ability to predict the results of an experiment. Quantum theory limits our abilities. Does this mean quantum theory limits science? If science is to progress, the way we carry it on must be dictated by nature. In this case, nature requires that we redefine what we mean by prediction: We may not be able to predict the outcome of an experiment exactly, but we can repeat the experiment many times and confirm that the various possible outcomes occur within the probabilities predicted by quantum theory. Despite the uncertainty principle, therefore, there is no need to give up on the belief in a world governed by physical law. In fact, in the end, most scientists were willing to accept quantum mechanics precisely because it agreed perfectly with experiment.
One of the most important implications of Heisenberg's uncertainty principle is that particles behave in some respects like waves. As we have seen, they do not have a definite position but are "smeared out" with a certain probability distribution. Equally, although light is made up of waves, Planck's quantum hypothesis also tells us that in some ways light behaves as if it were composed of particles: it can be emitted or absorbed only in packets, or quanta. In fact, the theory of quantum mechanics is based on an entirely new type of mathematics that no longer describes the real world in terms of either particles or waves. For some purposes it is helpful to think of particles as waves and for other purposes it is better to think of waves as particles, but these ways of thinking are just conveniences. This is what physicists mean when they say there is a duality between waves and particles in quantum mechanics.
An important consequence of wavelike behavior in quantum mechanics is that one can observe what is called interference between two sets of particles. Normally, interference is thought of as a phenomenon of waves; that is to say, when waves collide, the crests of one set of waves may coincide with the troughs of the other set, in which case the waves are said to be out of phase. If that happens, the two sets of waves then cancel each other out, rather than adding up to a stronger wave, as one might expect. A familiar example of interference in the case of light is the colors that are often seen in soap bubbles. These are caused by reflection of light from the two sides of the thin film of water forming the bubble. White light consists of light waves of all different wavelengths, or colors. For certain wavelengths the crests of the waves reflected from one side of the soap film coincide with the troughs reflected from the other side. The colors corresponding to these wavelengths are absent from the reflected light, which therefore appears to be colored.
In and Out of Phase.
If the crests and troughs of two waves coincide, they result in a stronger wave, but if one wave's crests coincide with another's troughs, the two waves cancel each other.
But quantum theory tells us that interference can also occur for particles, because of the duality introduced by quantum mechanics. A famous example is the so-called two-slit experiment. Imagine a partition-a thin wall-with two narrow parallel slits in it. Before we consider what happens when particles are sent through these slits, let's examine what happens when light is shined on them. On one side of the partition you place a source of light of a particular color (that is, of a particular wavelength). Most of the light will hit the partition, but a small amount will go through the slits. Now suppose you place a screen on the far side of the partition from the light. Any point on that screen will receive waves from both slits. However, in general, the distance the light has to travel from the light source to the point via one of the slits will be different than for the light traveling via the other slit. Since the distance traveled differs, the waves from the two slits will not be in phase with each other when they arrive at the point. In some places the troughs from one wave will coincide with the crests from the other, and the waves will cancel each other out; in other places the crests and troughs will coincide, and the waves will reinforce each other; and in most places the situation will be somewhere in between. The result is a characteristic pattern of light and dark.
The remarkable thing is that you get exactly the same kind of pattern if you replace the source of light by a source of particles, such as electrons, that have a definite speed. (According to quantum theory, if the electrons have a definite speed the corresponding matter waves have a definite wavelength.) Suppose you have only one slit and start firing electrons at the partition. Most of the electrons will be stopped by the partition, but some will go through the slit and make it to the screen on the other side. It might seem logical to assume that opening a second slit in the partition would simply increase the number of electrons hitting each point of the screen. But if you open the second slit, the number of electrons hitting the screen increases at some points and decreases at others, just as if the electrons were interfering as waves do, rather than acting as particles. (See illustration on page 97.)
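The light-and-dark pattern follows from nothing more than the path-length arithmetic described above, as a short sketch shows (illustrative code in arbitrary units; the wavelength, slit spacing, and distances are made-up numbers chosen only to make the pattern easy to see):

```python
import math

wavelength = 1.0     # arbitrary units
slit_gap = 5.0       # separation between the two slits
screen_dist = 100.0  # distance from the partition to the screen

def intensity(y):
    # Path lengths from each slit to the point at height y on the screen.
    d1 = math.hypot(screen_dist, y - slit_gap / 2)
    d2 = math.hypot(screen_dist, y + slit_gap / 2)
    # The extra path length puts the two waves out of step by this phase.
    phase = 2 * math.pi * (d2 - d1) / wavelength
    # Add two unit waves with that phase difference:
    # 4 = fully reinforced (bright), 0 = fully canceled (dark).
    return (1 + math.cos(phase)) ** 2 + math.sin(phase) ** 2

print(intensity(0.0))   # 4.0: equal paths, crests meet crests (bright band)
print(intensity(10.0))  # near 0: paths differ by ~half a wavelength (dark band)
```

Sweeping y along the screen makes the intensity alternate between bright and dark, which is the interference pattern the text describes.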
Path Distances and Interference.
In the two-slit experiment, the distance that waves must travel from the top and bottom slits to the screen varies with height along the screen. The result is that the waves reinforce each other at certain heights and cancel at others, forming an interference pattern.
Electron Interference.
Because of interference, the result of sending a beam of electrons through two slits does not correspond to the result of sending the electrons through each slit separately.
Now imagine sending the electrons through the slits one at a time. Is there still interference? One might expect each electron to pass through one slit or the other, doing away with the interference pattern. In reality, however, even when the electrons are sent through one at a time, the interference pattern still appears. Each electron, therefore, must be passing through both slits at the same time and interfering with itself!
The phenomenon of interference between particles has been crucial to our understanding of the structure of atoms, the basic units out of which we, and everything around us, are made. In the early twentieth century it was thought that atoms were rather like the planets orbiting the sun, with electrons (particles of negative electricity) orbiting around a central nucleus, which carried positive electricity. The attraction between the positive and negative electricity was supposed to keep the electrons in their orbits in the same way that the gravitational attraction between the sun and the planets keeps the planets in their orbits. The trouble with this was that the classical laws of mechanics and electricity, before quantum mechanics, predicted that electrons orbiting in this manner would give off radiation. This would cause them to lose energy and thus spiral inward until they collided with the nucleus. This would mean that the atom, and indeed all matter, should rapidly collapse to a state of very high density, which obviously doesn't happen!
The Danish scientist Niels Bohr found a partial solution to this problem in 1913. He suggested that perhaps the electrons were not able to orbit at just any distance from the central nucleus but rather could orbit only at certain specified distances. Supposing that only one or two electrons could orbit at any one of these specified distances would solve the problem of the collapse, because once the limited number of inner orbits was full, the electrons could not spiral in any farther. This model explained quite well the structure of the simplest atom, hydrogen, which has only one electron orbiting around the nucleus. But it was not clear how to extend this model to more complicated atoms. Moreover, the idea of a limited set of allowed orbits seemed like a mere Band-Aid. It was a trick that worked mathematically, but no one knew why nature should behave that way, or what deeper law-if any-it represented. The new theory of quantum mechanics resolved this difficulty. It revealed that an electron orbiting around the nucleus could be thought of as a wave, with a wavelength that depended on its velocity. Imagine the wave circling the nucleus at specified distances, as Bohr had postulated. For certain orbits, the circumference of the orbit would correspond to a whole number (as opposed to a fractional number) of wavelengths of the electron. For these orbits the wave crest would be in the same position each time round, so the waves would reinforce each other. These orbits would correspond to Bohr's allowed orbits. However, for orbits whose lengths were not a whole number of wavelengths, each wave crest would eventually be canceled out by a trough as the electrons went round. These orbits would not be allowed. Bohr's law of allowed and forbidden orbits now had an explanation.
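The whole-number-of-wavelengths rule can be checked numerically for hydrogen (an illustrative sketch, not from the book; it assumes the standard Bohr-model results that the allowed radii are n² times the Bohr radius and that the quantization rule is m·v·r = n·ħ):

```python
import math

# Standard physical constants (rounded values).
hbar = 1.0546e-34  # reduced Planck constant, joule-seconds
m_e = 9.109e-31    # electron mass, kilograms
q_e = 1.602e-19    # electron charge, coulombs
eps0 = 8.854e-12   # permittivity of free space

# Bohr radius: the innermost allowed orbit of hydrogen, about 5.3e-11 meters.
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * q_e**2)

for n in (1, 2, 3):
    r = n**2 * a0                                # radius of the n-th allowed orbit
    v = n * hbar / (m_e * r)                     # speed from the rule m*v*r = n*hbar
    wavelength = 2 * math.pi * hbar / (m_e * v)  # electron wavelength h/(m*v)
    # The circumference of each allowed orbit holds exactly n wavelengths:
    print(n, 2 * math.pi * r / wavelength)  # 1.0, 2.0, 3.0 (up to rounding)
```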
A nice way of visualizing the wave/particle duality is the so-called sum over histories introduced by the American scientist Richard Feynman. In this approach a particle is not supposed to have a single history or path in space-time, as it would in a classical, nonquantum theory. Instead it is supposed to go from point A to point B by every possible path. With each path between A and B, Feynman associated a couple of numbers. One represents the amplitude, or size, of a wave. The other represents the phase, or position in the cycle (that is, whether it is at a crest or a trough or somewhere in between). The probability of a particle going from A to B is found by adding up the waves for all the paths connecting A and B. In general, if one compares a set of neighboring paths, the phases or positions in the cycle will differ greatly. This means that the waves associated with these paths will almost exactly cancel each other out. However, for some sets of neighboring paths the phase will not vary much between paths, and the waves for these paths will not cancel out. Such paths correspond to Bohr's allowed orbits.
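The cancellation of out-of-step paths can be illustrated with a toy computation (purely schematic, not from the book; the two phase lists simply stand in for two families of neighboring paths):

```python
import cmath

def add_waves(phases):
    # Add one unit wave per path, keeping track only of its phase,
    # and return the size of the combined wave.
    return abs(sum(cmath.exp(1j * p) for p in phases))

# Phases that differ greatly from one path to the next: near-total cancellation.
scattered = [2.5 * n for n in range(1000)]
# Phases that hardly vary from one path to the next: strong reinforcement.
stationary = [0.001 * n for n in range(1000)]

print(add_waves(scattered))   # a small number: the waves nearly cancel out
print(add_waves(stationary))  # large, nearly 1000: the waves pile up together
```

A thousand scattered-phase waves add up to almost nothing, while a thousand nearly-in-step waves add up to almost a thousand; only the in-step families of paths survive the sum.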
Waves in Atomic Orbits.
Niels Bohr imagined the atom as consisting of electron waves endlessly circling atomic nuclei. In his picture, only orbits with circumferences corresponding to a whole number of electron wavelengths could survive without destructive interference.

With these ideas in concrete mathematical form, it was relatively straightforward to calculate the allowed orbits in more complicated atoms and even in molecules, which are made up of a number of atoms held together by electrons in orbits that go around more than one nucleus. Since the structure of molecules and their reactions with each other underlie all of chemistry and biology, quantum mechanics allows us in principle to predict nearly everything we see around us, within the limits set by the uncertainty principle. (In practice, however, we cannot solve the equations for any atom besides the simplest one, hydrogen, which has only one electron, and we use approximations and computers to analyze more complicated atoms and molecules.)
Many Electron Paths.
In Richard Feynman's formulation of quantum theory, a particle, such as this one moving from source to screen, takes every possible path.
Quantum theory has been an outstandingly successful theory and underlies nearly all of modern science and technology. It governs the behavior of transistors and integrated circuits, which are the essential components of electronic devices such as televisions and computers, and it is also the basis of modern chemistry and biology. The only areas of physical science into which quantum mechanics has not yet been properly incorporated are gravity and the large-scale structure of the universe: Einstein's general theory of relativity, as noted earlier, does not take account of the uncertainty principle of quantum mechanics, as it should for consistency with other theories.
As we saw in the last chapter, we already know that general relativity must be altered. By predicting points of infinite density-singularities-classical (that is, nonquantum) general relativity predicts its own downfall, just as classical mechanics predicted its downfall by suggesting that blackbodies should radiate infinite energy or that atoms should collapse to infinite density. And as with classical mechanics, we hope to eliminate these unacceptable singularities by making classical general relativity into a quantum theory-that is, by creating a quantum theory of gravity.
If general relativity is wrong, why have all experiments thus far supported it? The reason that we haven't yet noticed any discrepancy with observation is that all the gravitational fields that we normally experience are very weak. But as we have seen, the gravitational field should get very strong when all the matter and energy in the universe are squeezed into a small volume in the early universe. In the presence of such strong fields, the effects of quantum theory should be important.
Although we do not yet possess a quantum theory of gravity, we do know a number of features we believe it should have. One is that it should incorporate Feynman's proposal to formulate quantum theory in terms of a sum over histories. A second feature that we believe must be part of any ultimate theory is Einstein's idea that the gravitational field is represented by curved space-time: particles try to follow the nearest thing to a straight path in a curved space, but because space-time is not flat, their paths appear to be bent, as if by a gravitational field. When we apply Feynman's sum over histories to Einstein's view of gravity, the analogue of the history of a particle is now a complete curved space-time that represents the history of the whole universe.
In the classical theory of gravity, there are only two possible ways the universe can behave: either it has existed for an infinite time, or else it had a beginning at a singularity at some finite time in the past. For reasons we discussed earlier, we believe that the universe has not existed forever. Yet if it had a beginning, according to classical general relativity, in order to know which solution of Einstein's equations describes our universe, we must know its initial state-that is, exactly how the universe began. God may have originally decreed the laws of nature, but it appears that He has since left the universe to evolve according to them and does not now intervene in it. How did He choose the initial state or configuration of the universe? What were the boundary conditions at the beginning of time? In classical general relativity this is a problem, because classical general relativity breaks down at the beginning of the universe.
In the quantum theory of gravity, on the other hand, a new possibility arises that, if true, would remedy this problem. In the quantum theory, it is possible for space-time to be finite in extent and yet to have no singularities that formed a boundary or edge. Space-time would be like the surface of the earth, only with two more dimensions. As was pointed out before, if you keep traveling in a certain direction on the surface of the earth, you never come up against an impassable barrier or fall over the edge, but eventually come back to where you started, without running into a singularity. So if this turns out to be the case, then the quantum theory of gravity has opened up a new possibility in which there would be no singularities at which the laws of science broke down.