The Singularity Is Near: When Humans Transcend Biology, Part 21

MOLLY 2004: I realize the word and the idea exist. But does it refer to anything that you believe in?

RAY: People mean lots of things by it.

MOLLY 2004: Do you believe in those things?

RAY: It's not possible to believe all these things: God is an all-powerful conscious person looking over us, making deals, and getting angry quite a bit. Or He-It-is a pervasive life force underlying all beauty and creativity. Or God created everything and then stepped back....

MOLLY 2004: I understand, but do you believe in any of them?



RAY: I believe that the universe exists.

MOLLY 2004: Now wait a minute, that's not a belief, that's a scientific fact.

RAY: Actually, I don't know for sure that anything exists other than my own thoughts.

MOLLY 2004: Okay, I understand that this is the philosophy chapter, but you can read scientific papers-thousands of them-that corroborate the existence of stars and galaxies. So, all those galaxies-we call that the universe.

RAY: Yes, I've heard of that, and I do recall reading some of these papers, but I don't know that those papers really exist, or that the things they refer to really exist, other than in my thoughts.

MOLLY 2004: So you don't acknowledge the existence of the universe?

RAY: No, I just said that I do believe that it exists, but I'm pointing out that it's a belief. That's my personal leap of faith.

MOLLY 2004: All right, but I asked whether you believed in God.

RAY: Again, "God" is a word by which people mean different things. For the sake of your question, we can consider God to be the universe, and I said that I believe in the existence of the universe.

MOLLY 2004: God is just the universe?

RAY: Just? It's a pretty big thing to apply the word "just" to. If we are to believe what science tells us-and I said that I do-it's about as big a phenomenon as we could imagine.

MOLLY 2004: Actually, many physicists now consider our universe to be just one bubble among a vast number of other universes. But I meant that people usually mean something more by the word "God" than "just" the material world. Some people do associate God with everything that exists, but they still consider God to be conscious. So you believe in a God that's not conscious?

RAY: The universe is not conscious-yet. But it will be. Strictly speaking, we should say that very little of it is conscious today. But that will change, and soon. I expect that the universe will become sublimely intelligent and will wake up in Epoch Six. The only belief I am positing here is that the universe exists. If we make that leap of faith, the expectation that it will wake up is not so much a belief as an informed understanding, based on the same science that says there is a universe.

MOLLY 2004: Interesting. You know, that's essentially the opposite of the view that there was a conscious creator who got everything started and then kind of bowed out. You're basically saying that a conscious universe will "bow in" during Epoch Six.

RAY: Yes, that's the essence of Epoch Six.

CHAPTER EIGHT.

The Deeply Intertwined Promise and Peril of GNR

We are being propelled into this new century with no plan, no control, no brakes.... The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.-BILL JOY, "WHY THE FUTURE DOESN'T NEED US"

Environmentalists must now grapple squarely with the idea of a world that has enough wealth and enough technological capability, and should not pursue more.-BILL MCKIBBEN, ENVIRONMENTALIST WHO FIRST WROTE ABOUT GLOBAL WARMING1

Progress might have been all right once, but it has gone on too long.-OGDEN NASH (1902-1971)

In the late 1960s I was transformed into a radical environmental activist. A rag-tag group of activists and I sailed a leaky old halibut boat across the North Pacific to block the last hydrogen bomb tests under President Nixon. In the process I co-founded Greenpeace.... Environmentalists were often able to produce arguments that sounded reasonable, while doing good deeds like saving whales and making the air and water cleaner. But now the chickens have come home to roost. The environmentalists' campaign against biotechnology in general, and genetic engineering in particular, has clearly exposed their intellectual and moral bankruptcy. By adopting a zero tolerance policy toward a technology with so many potential benefits for humankind and the environment, they ... have alienated themselves from scientists, intellectuals, and internationalists. It seems inevitable that the media and the public will, in time, see the insanity of their position.-PATRICK MOORE

I think that ... flight from and hatred of technology is self-defeating. The Buddha rests quite as comfortably in the circuits of a digital computer and the gears of a cycle transmission as he does at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha-which is to demean oneself.-ROBERT M. PIRSIG, ZEN AND THE ART OF MOTORCYCLE MAINTENANCE

Consider these articles we'd rather not see available on the Web:

Impress Your Enemies: How to Build Your Own Atomic Bomb from Readily Available Materials2
How to Modify the Influenza Virus in Your College Laboratory to Release Snake Venom
Ten Easy Modifications to the E. coli Virus
How to Modify Smallpox to Counteract the Smallpox Vaccine
Build Your Own Chemical Weapons from Materials Available on the Internet
How to Build a Pilotless, Self-Guiding, Low-Flying Airplane Using a Low-Cost Aircraft, GPS, and a Notebook Computer

Or, how about the following:

The Genomes of Ten Leading Pathogens
The Floor Plans of Leading Skyscrapers
The Layout of U.S. Nuclear Reactors
The Hundred Top Vulnerabilities of Modern Society
The Top Ten Vulnerabilities of the Internet
Personal Health Information on One Hundred Million Americans
The Customer Lists of Top Pornography Sites

Anyone posting the first item above is almost certain to get a quick visit from the FBI, as did Nate Ciccolo, a fifteen-year-old high school student, in March 2000. For a school science project he built a papier-mache model of an atomic bomb that turned out to be disturbingly accurate. In the ensuing media storm Ciccolo told ABC News, "Someone just sort of mentioned, you know, you can go on the Internet now and get information. And I, sort of, wasn't exactly up to date on things. Try it. I went on there and a couple of clicks and I was right there."3 Of course Ciccolo didn't possess the key ingredient, plutonium, nor did he have any intention of acquiring it, but the report created shock waves in the media, not to mention among the authorities who worry about nuclear proliferation.

Ciccolo had reported finding 563 Web pages on atomic-bomb designs, and the publicity resulted in an urgent effort to remove them. Unfortunately, trying to get rid of information on the Internet is akin to trying to sweep back the ocean with a broom. Some of the sites continue to be easily accessible today. I won't provide any URLs in this book, but they are not hard to find.

Although the article titles above are fictitious, one can find extensive information on the Internet about all of these topics.4 The Web is an extraordinary research tool. In my own experience, research that used to require a half day at the library can now be accomplished typically in a couple of minutes or less.

This has enormous and obvious benefits for advancing beneficial technologies, but it can also empower those whose values are inimical to the mainstream of society. So are we in danger? The answer is clearly yes. How much danger, and what to do about it, are the subjects of this chapter.

My urgent concern with this issue dates back at least a couple of decades. When I wrote The Age of Intelligent Machines in the mid-1980s, I was deeply concerned with the ability of then-emerging genetic engineering to enable those skilled in the art and with access to fairly widely available equipment to modify bacterial and viral pathogens to create new diseases.5 In destructive or merely careless hands these engineered pathogens could potentially combine a high degree of communicability, stealthiness, and destructiveness.

Such efforts were not easy to carry out in the 1980s but were nonetheless feasible. We now know that bioweapons programs in the Soviet Union and elsewhere were doing exactly this.6 At the time I made a conscious decision to not talk about this specter in my book, feeling that I did not want to give the wrong people any destructive ideas. I didn't want to turn on the radio one day and hear about a disaster, with the perpetrators saying that they got the idea from Ray Kurzweil.

Partly as a result of this decision I faced some reasonable criticism that the book emphasized the benefits of future technology while ignoring its pitfalls. When I wrote The Age of Spiritual Machines in 1997-1998, therefore, I attempted to account for both promise and peril.7 There had been sufficient public attention by that time (for example, the 1995 movie Outbreak, which portrays the terror and panic from the release of a new viral pathogen) that I felt comfortable to begin to address the issue publicly.

In September 1998, having just completed the manuscript, I ran into Bill Joy, an esteemed and longtime colleague in the high-technology world, in a bar in Lake Tahoe. Although I had long admired Joy for his work in pioneering the leading software language for interactive Web systems (Java) and having cofounded Sun Microsystems, my focus at this brief get-together was not on Joy but rather on the third person sitting in our small booth, John Searle. Searle, the eminent philosopher from the University of California at Berkeley, had built a career of defending the deep mysteries of human consciousness from apparent attack by materialists such as Ray Kurzweil (a characterization I reject in the next chapter).

Searle and I had just finished debating the issue of whether a machine could be conscious during the closing session of George Gilder's Telecosm conference.

The session was entitled "Spiritual Machines" and was devoted to a discussion of the philosophical implications of my upcoming book. I had given Joy a preliminary manuscript and tried to bring him up to speed on the debate about consciousness that Searle and I were having.

As it turned out, Joy was interested in a completely different issue, specifically the impending dangers to human civilization from three emerging technologies I had presented in the book: genetics, nanotechnology, and robotics (GNR, as discussed earlier). My discussion of the downsides of future technology alarmed Joy, as he would later relate in his now-famous cover story for Wired, "Why the Future Doesn't Need Us."8 In the article Joy describes how he asked his friends in the scientific and technology community whether the projections I was making were credible and was dismayed to discover how close these capabilities were to realization.

Joy's article focused entirely on the downside scenarios and created a firestorm. Here was one of the technology world's leading figures addressing new and dire emerging dangers from future technology. It was reminiscent of the attention that George Soros, the currency arbitrageur and archcapitalist, received when he made vaguely critical comments about the excesses of unrestrained capitalism, although the Joy controversy became far more intense. The New York Times reported there were about ten thousand articles commenting on and discussing Joy's article, more than any other in the history of commentary on technology issues. My attempt to relax in a Lake Tahoe lounge thus ended up fostering two long-term debates, as my dialogue with John Searle has also continued to this day.

Despite my being the origin of Joy's concern, my reputation as a "technology optimist" has remained intact, and Joy and I have been invited to a variety of forums to debate the peril and promise, respectively, of future technologies. Although I am expected to take up the "promise" side of the debate, I often end up spending most of my time defending his position on the feasibility of these dangers.

Many people have interpreted Joy's article as an advocacy of broad relinquishment, not of all technological developments, but of the "dangerous ones" like nanotechnology. Joy, who is now working as a venture capitalist with the legendary Silicon Valley firm of Kleiner, Perkins, Caufield & Byers, investing in technologies such as nanotechnology applied to renewable energy and other natural resources, says that broad relinquishment is a misinterpretation of his position and was never his intent. In a recent private e-mail communication, he says the emphasis should be on his call to "limit development of the technologies that are too dangerous" (see the epigraph at the beginning of this chapter), not on complete prohibition. He suggests, for example, a prohibition against self-replicating nanotechnology, which is similar to the guidelines advocated by the Foresight Institute, founded by nanotechnology pioneer Eric Drexler and Christine Peterson. Overall, this is a reasonable guideline, although I believe there will need to be two exceptions, which I discuss below (see p. 411).

As another example, Joy advocates not publishing the gene sequences of pathogens on the Internet, which I also agree with. He would like to see scientists adopt regulations along these lines voluntarily and internationally, and he points out that "if we wait until after a catastrophe, we may end up with more severe and damaging regulations." He says he hopes that "we will do such regulation lightly, so that we can get most of the benefits."

Others, such as Bill McKibben, the environmentalist who was one of the first to warn against global warming, have advocated relinquishment of broad areas such as biotechnology and nanotechnology, or even of all technology. As I discuss in greater detail below (see p. 410), relinquishing broad fields would be impossible to achieve without essentially relinquishing all technical development. That in turn would require a Brave New World style of totalitarian government, banning all technology development. Not only would such a solution be inconsistent with our democratic values, but it would actually make the dangers worse by driving the technology underground, where only the least responsible practitioners (for example, rogue states) would have most of the expertise.

Intertwined Benefits . . .

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way.-CHARLES DICKENS, A TALE OF TWO CITIES

It's like arguing in favor of the plough. You know some people are going to argue against it, but you also know it's going to exist.-JAMES HUGHES, SECRETARY OF THE TRANSHUMANIST ASSOCIATION AND SOCIOLOGIST AT TRINITY COLLEGE, IN A DEBATE, "SHOULD HUMANS WELCOME OR RESIST BECOMING POSTHUMAN?"

Technology has always been a mixed blessing, bringing us benefits such as longer and healthier lifespans, freedom from physical and mental drudgery, and many novel creative possibilities on the one hand, while introducing new dangers. Technology empowers both our creative and destructive natures.

Substantial portions of our species have already experienced alleviation of the poverty, disease, hard labor, and misfortune that have characterized much of human history. Many of us now have the opportunity to gain satisfaction and meaning from our work, rather than merely toiling to survive. We have ever more powerful tools to express ourselves. With the Web now reaching deeply into less developed regions of the world, we will see major strides in the availability of high-quality education and medical knowledge. We can share culture, art, and humankind's exponentially expanding knowledge base worldwide. I mentioned the World Bank's report on the worldwide reduction in poverty in chapter 2 and discuss that further in the next chapter.

We've gone from about twenty democracies in the world after World War II to more than one hundred today largely through the influence of decentralized electronic communication. The biggest wave of democratization, including the fall of the Iron Curtain, occurred during the 1990s with the growth of the Internet and related technologies. There is, of course, a great deal more to accomplish in each of these areas.

Bioengineering is in the early stages of making enormous strides in reversing disease and aging processes. Ubiquitous N and R are two to three decades away and will continue an exponential expansion of these benefits. As I reviewed in earlier chapters, these technologies will create extraordinary wealth, thereby overcoming poverty and enabling us to provide for all of our material needs by transforming inexpensive raw materials and information into any type of product.

We will spend increasing portions of our time in virtual environments and will be able to have any type of desired experience with anyone, real or simulated, in virtual reality. Nanotechnology will bring a similar ability to morph the physical world to our needs and desires. Lingering problems from our waning industrial age will be overcome. We will be able to reverse remaining environmental destruction. Nanoengineered fuel cells and solar cells will provide clean energy. Nanobots in our physical bodies will destroy pathogens, remove debris such as misformed proteins and protofibrils, repair DNA, and reverse aging. We will be able to redesign all of the systems in our bodies and brains to be far more capable and durable.

Most significant will be the merger of biological and nonbiological intelligence, although nonbiological intelligence will quickly come to predominate. There will be a vast expansion of the concept of what it means to be human. We will greatly enhance our ability to create and appreciate all forms of knowledge from science to the arts, while extending our ability to relate to our environment and one another.

On the other hand ...

. . . and Dangers

"Plants" with "leaves" no more efficient than today's solar cells could outcompete real plants, crowding the biosphere with inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop-at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.-ERIC DREXLER

As well as its many remarkable accomplishments, the twentieth century saw technology's awesome ability to amplify our destructive nature, from Stalin's tanks to Hitler's trains. The tragic event of September 11, 2001, is another example of technologies (jets and buildings) taken over by people with agendas of destruction. We still live today with a sufficient number of nuclear weapons (not all of which are accounted for) to end all mammalian life on the planet.

Since the 1980s the means and knowledge have existed in a routine college bioengineering lab to create unfriendly pathogens potentially more dangerous than nuclear weapons.9 In a war-game simulation conducted at Johns Hopkins University called "Dark Winter," it was estimated that an intentional introduction of conventional smallpox in three U.S. cities could result in one million deaths. If the virus were bioengineered to defeat the existing smallpox vaccine, the results could be far worse.10 The reality of this specter was made clear by a 2001 experiment in Australia in which the mousepox virus was inadvertently modified with genes that altered the immune-system response. The mousepox vaccine was powerless to stop this altered virus.11 These dangers resonate in our historical memories. Bubonic plague killed one third of the European population. More recently the 1918 flu killed twenty million people worldwide.12

Will such threats prevent the ongoing acceleration of the power, efficiency, and intelligence of complex systems (such as humans and our technology)? The past record of complexity increase on this planet has shown a smooth acceleration, even through a long history of catastrophes, both internally generated and externally imposed. This is true of both biological evolution (which faced calamities such as encounters with large asteroids and meteors) and human history (which has been punctuated by an ongoing series of major wars).

However, I believe we can take some encouragement from the effectiveness of the world's response to the SARS (severe acute respiratory syndrome) virus. Although the possibility of an even more virulent return of SARS remains uncertain as of the writing of this book, it appears that containment measures have been relatively successful and have prevented this tragic outbreak from becoming a true catastrophe. Part of the response involved ancient, low-tech tools such as quarantine and face masks.

However, this approach would not have worked without advanced tools that have only recently become available. Researchers were able to sequence the DNA of the SARS virus within thirty-one days of the outbreak-compared to fifteen years for HIV. That enabled the rapid development of an effective test so that carriers could quickly be identified. Moreover, instantaneous global communication facilitated a coordinated response worldwide, a feat not possible when viruses ravaged the world in ancient times.

As technology accelerates toward the full realization of GNR, we will see the same intertwined potentials: a feast of creativity resulting from human intelligence expanded manyfold, combined with many grave new dangers. A quintessential concern that has received considerable attention is unrestrained nanobot replication. Nanobot technology requires trillions of such intelligently designed devices to be useful. To scale up to such levels it will be necessary to enable them to self-replicate, essentially the same approach used in the biological world (that's how one fertilized egg cell becomes the trillions of cells in a human). And in the same way that biological self-replication gone awry (that is, cancer) results in biological destruction, a defect in the mechanism curtailing nanobot self-replication-the so-called gray-goo scenario-would endanger all physical entities, biological or otherwise.

Living creatures-including humans-would be the primary victims of an exponentially spreading nanobot attack. The principal designs for nanobot construction use carbon as a primary building block. Because of carbon's unique ability to form four-way bonds, it is an ideal building block for molecular assemblies. Carbon molecules can form straight chains, zigzags, rings, nanotubes (hexagonal arrays formed in tubes), sheets, buckyballs (arrays of hexagons and pentagons formed into spheres), and a variety of other shapes. Because biology has made the same use of carbon, pathological nanobots would find the Earth's biomass an ideal source of this primary ingredient. Biological entities can also provide stored energy in the form of glucose and ATP.13 Useful trace elements such as oxygen, sulfur, iron, calcium, and others are also available in the biomass.

How long would it take an out-of-control replicating nanobot to destroy the Earth's biomass? The biomass has on the order of 10^45 carbon atoms.14 A reasonable estimate of the number of carbon atoms in a single replicating nanobot is about 10^6. (Note that this analysis is not very sensitive to the accuracy of these figures, only to the approximate order of magnitude.) This malevolent nanobot would need to create on the order of 10^39 copies of itself to replace the biomass, which could be accomplished with 130 replications (each of which would potentially double the destroyed biomass). Rob Freitas has estimated a minimum replication time of approximately one hundred seconds, so 130 replication cycles would require about three and a half hours.15 However, the actual rate of destruction would be slower because biomass is not "efficiently" laid out. The limiting factor would be the actual movement of the front of destruction. Nanobots cannot travel very quickly because of their small size. It's likely to take weeks for such a destructive process to circle the globe.
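The arithmetic in this estimate is straightforward to verify: the number of doublings needed is the base-2 logarithm of the number of copies required, and the total time is just doublings times the per-cycle replication time. A minimal sketch (the constants are the order-of-magnitude figures quoted above, not measured values):

```python
import math

BIOMASS_CARBON_ATOMS = 1e45   # order-of-magnitude estimate for Earth's biomass
ATOMS_PER_NANOBOT = 1e6       # assumed carbon atoms in one replicating nanobot
REPLICATION_TIME_S = 100      # Freitas's estimated minimum replication time (seconds)

# Copies needed to consume the biomass, and the doublings to produce them
copies_needed = BIOMASS_CARBON_ATOMS / ATOMS_PER_NANOBOT   # ~1e39
doublings = math.ceil(math.log2(copies_needed))            # population doubles each cycle

total_hours = doublings * REPLICATION_TIME_S / 3600

print(doublings)           # 130
print(round(total_hours, 1))  # 3.6 -- "about three and a half hours"
```

As the text notes, this is only the replication-limited bound; the physical movement of the destruction front stretches the real timescale to weeks.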

Based on this observation we can envision a more insidious possibility. In a two-phased attack, the nanobots take several weeks to spread throughout the biomass but use up an insignificant portion of the carbon atoms, say one out of every thousand trillion (10^15). At this extremely low level of concentration the nanobots would be as stealthy as possible. Then, at an "optimal" point, the second phase would begin with the seed nanobots expanding rapidly in place to destroy the biomass. For each seed nanobot to multiply itself a thousand trillionfold would require only about fifty binary replications, or about ninety minutes. With the nanobots having already spread out in position throughout the biomass, movement of the destructive wave front would no longer be a limiting factor.
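The second-phase figure follows from the same doubling logic: a thousand trillionfold (10^15) expansion takes log2(10^15) ≈ 50 doublings, which at the assumed one-hundred-second cycle time comes to roughly eighty-five minutes (the text rounds this to "about ninety"). A quick check:

```python
import math

EXPANSION_FACTOR = 1e15   # each stealth seed must multiply a thousand trillionfold
REPLICATION_TIME_S = 100  # Freitas's assumed minimum replication time (seconds)

# Binary replications needed for the in-place expansion
replications = math.ceil(math.log2(EXPANSION_FACTOR))
minutes = replications * REPLICATION_TIME_S / 60

print(replications)   # 50
print(round(minutes)) # 83 -- "about ninety minutes" after rounding
```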

The point is that without defenses, the available biomass could be destroyed by gray goo very rapidly. As I discuss below (see p. 417), we will clearly need a nanotechnology immune system in place before these scenarios become a possibility. This immune system would have to be capable of contending not just with obvious destruction but with any potentially dangerous (stealthy) replication, even at very low concentrations.

Mike Treder and Chris Phoenix-executive director and director of research of the Center for Responsible Nanotechnology, respectively-Eric Drexler, Robert Freitas, Ralph Merkle, and others have pointed out that future MNT manufacturing devices can be created with safeguards that would prevent the creation of self-replicating nanodevices.[16] I discuss some of these strategies below. However, this observation, although important, does not eliminate the specter of gray goo. There are other reasons (beyond manufacturing) that self-replicating nanobots will need to be created. The nanotechnology immune system mentioned above, for example, will ultimately require self-replication; otherwise it would be unable to defend us. Self-replication will also be necessary for nanobots to rapidly expand intelligence beyond the Earth, as I discussed in chapter 6. It is also likely to find extensive military applications. Moreover, safeguards against unwanted self-replication, such as the broadcast architecture described below (see p. 412), can be defeated by a determined adversary or terrorist.

Freitas has identified a number of other disastrous nanobot scenarios.[17] In what he calls the "gray plankton" scenario, malicious nanobots would use underwater carbon stored as CH4 (methane) as well as CO2 dissolved in seawater. These ocean-based sources can provide about ten times as much carbon as Earth's biomass. In his "gray dust" scenario, replicating nanobots use basic elements available in airborne dust and sunlight for power. The "gray lichens" scenario involves using carbon and other elements on rocks.

A Panoply of Existential Risks

If a little knowledge is dangerous, where is a person who has so much as to be out of danger?-THOMAS HENRY HUXLEY

I discuss below (see the section "A Program for GNR Defense," p. 422) steps we can take to address these grave risks, but we cannot have complete assurance in any strategy that we devise today. These risks are what Nick Bostrom calls "existential risks," which he defines as the dangers in the upper-right quadrant of the following table:[18]

Biological life on Earth encountered a human-made existential risk for the first time in the middle of the twentieth century with the advent of the hydrogen bomb and the subsequent cold-war buildup of thermonuclear forces. President Kennedy reportedly estimated that the likelihood of an all-out nuclear war during the Cuban missile crisis was between 33 and 50 percent.[19] The legendary information theorist John von Neumann, who became the chairman of the Air Force Strategic Missiles Evaluation Committee and a government adviser on nuclear strategies, estimated the likelihood of nuclear Armageddon (prior to the Cuban missile crisis) at close to 100 percent.[20] Given the perspective of the 1960s, what informed observer of those times would have predicted that the world would go through the next forty years without another nontest nuclear explosion?

Despite the apparent chaos of international affairs we can be grateful for the successful avoidance thus far of the employment of nuclear weapons in war. But we clearly cannot rest easily, since enough hydrogen bombs still exist to destroy all human life many times over.[21] Although attracting relatively little public discussion, the massive opposing ICBM arsenals of the United States and Russia remain in place, despite the apparent thawing of relations.

Nuclear proliferation and the widespread availability of nuclear materials and know-how constitute another grave concern, although not an existential one for our civilization. (That is, only an all-out thermonuclear war involving the ICBM arsenals poses a risk to the survival of all humans.) Nuclear proliferation and nuclear terrorism belong to the "profound-local" category of risk, along with genocide. However, the concern is certainly severe because the logic of mutual assured destruction does not work in the context of suicide terrorists.

Debatably we've now added another existential risk, which is the possibility of a bioengineered virus that spreads easily, has a long incubation period, and delivers an ultimately deadly payload. Some viruses are easily communicable, such as the flu and common cold. Others are deadly, such as HIV. It is rare for a virus to combine both attributes. Humans living today are descendants of those who developed natural immunities to most of the highly communicable viruses. The ability of the species to survive viral outbreaks is one advantage of sexual reproduction, which tends to ensure genetic diversity in the population, so that the response to specific viral agents is highly variable. Although catastrophic, bubonic plague did not kill everyone in Europe. Other viruses, such as smallpox, have both negative characteristics-they are easily contagious and deadly-but have been around long enough that there has been time for society to create a technological protection in the form of a vaccine. Gene engineering, however, has the potential to bypass these evolutionary protections by suddenly introducing new pathogens for which we have no protection, natural or technological.

The prospect of adding genes for deadly toxins to easily transmitted, common viruses such as the common cold and flu introduced another possible existential-risk scenario. It was this prospect that led to the Asilomar conference to consider how to deal with such a threat and the subsequent drafting of a set of safety and ethics guidelines. Although these guidelines have worked thus far, the underlying technologies for genetic manipulation are growing rapidly in sophistication.

In 2003 the world struggled, successfully, with the SARS virus. The emergence of SARS resulted from a combination of an ancient practice (the virus is suspected of having jumped from exotic animals, possibly civet cats, to humans living in close proximity) and a modern practice (the infection spread rapidly across the world by air travel). SARS provided us with a dry run of a virus new to human civilization that combined easy transmission, the ability to survive for extended periods of time outside the human body, and a high degree of mortality, with death rates estimated at 14 to 20 percent. Again, the response combined ancient and modern techniques.

Our experience with SARS shows that most viruses, even if relatively easily transmitted and reasonably deadly, represent grave but not necessarily existential risks. SARS, however, does not appear to have been engineered. SARS spreads easily through externally transmitted bodily fluids but is not easily spread through airborne particles. Its incubation period is estimated to range from one day to two weeks, whereas a longer incubation period would allow a virus to spread through several exponentially growing generations before carriers are identified.[22] SARS is deadly, but the majority of its victims do survive. It continues to be feasible for a virus to be malevolently engineered so that it spreads more easily than SARS, has an extended incubation period, and is deadly to essentially all victims. Smallpox is close to having these characteristics. Although we have a vaccine (albeit a crude one), the vaccine would not be effective against genetically modified versions of the virus.
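The role of incubation period can be illustrated with a toy model. Here the reproduction number and the generation counts are illustrative assumptions, not epidemiological estimates for SARS or any real virus; the point is only that undetected generations compound exponentially:

```python
# Toy model of undetected spread: each carrier infects r0 new carriers per
# generation, so g undetected generations yield roughly r0**g new carriers.
# The values of r0 and g below are illustrative assumptions only.
def carriers_after(r0, generations):
    return r0 ** generations

r0 = 3
# A short incubation period might permit ~2 generations before carriers
# are identified; a long incubation period might permit ~8.
short_incubation = carriers_after(r0, 2)   # 9
long_incubation = carriers_after(r0, 8)    # 6561
print(short_incubation, long_incubation)
```

Even with these modest assumed numbers, extending the undetected window from two generations to eight multiplies the carrier population several hundredfold, which is why a long incubation period is such a dangerous attribute.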

As I describe below, the window of malicious opportunity for bioengineered viruses, existential or otherwise, will close in the 2020s when we have fully effective antiviral technologies based on nanobots.[23] However, because nanotechnology will be thousands of times stronger, faster, and more intelligent than biological entities, self-replicating nanobots will present a greater danger and yet another existential risk. The window for malevolent nanobots will ultimately be closed by strong artificial intelligence, but, not surprisingly, "unfriendly" AI will itself present an even more compelling existential risk, which I discuss below (see p. 420).

The Precautionary Principle. As Bostrom, Freitas, and other observers including myself have pointed out, we cannot rely on trial-and-error approaches to deal with existential risks. There are competing interpretations of what has become known as the "precautionary principle." (If the consequences of an action are unknown but judged by some scientists to have even a small risk of being profoundly negative, it's better not to carry out the action than to risk negative consequences.) But it's clear that we need to achieve the highest possible level of confidence in our strategies to combat such risks. This is one reason we're hearing increasingly strident voices demanding that we shut down the advance of technology, as a primary strategy to eliminate new existential risks before they occur. Relinquishment, however, is not the appropriate response and will only interfere with the profound benefits of these emerging technologies while actually increasing the likelihood of a disastrous outcome. Max More articulates the limitations of the precautionary principle and advocates replacing it with what he calls the "proactionary principle," which involves balancing the risks of action and inaction.[24]

Before discussing how to respond to the new challenge of existential risks, it's worth reviewing a few more that have been postulated by Bostrom and others.


It's great if you read and follow any novel on our website. We promise you that we'll bring you the latest, hottest novel everyday and FREE.

BestLightNovel.com is a most smartest website for reading manga online, it can automatic resize images to fit your pc screen, even on your mobile. Experience now by using your smartphone and access to BestLightNovel.com