How To Create A Mind Part 10

The highest bandwidth (speed) of the Internet backbone.7

Being an engineer, about three decades ago I started to gather data on measures of technology in different areas. When I began this effort, I did not expect that it would present a clear picture, but I did hope that it would provide some guidance and enable me to make educated guesses. My goal was-and still is-to time my own technology efforts so that they will be appropriate for the world that exists when I complete a project-which I realized would be very different from the world that existed when I started.

Consider how much and how quickly the world has changed only recently. Just a few years ago, people did not use social networks (Facebook, for example, was founded in 2004 and had 901 million monthly active users at the end of March 2012),8 wikis, blogs, or tweets. In the 1990s most people did not use search engines or cell phones. Imagine the world without them. That seems like ancient history but was not so long ago. The world will change even more dramatically in the near future.

In the course of my investigation, I made a startling discovery: If a technology is an information technology, the basic measures of price/performance and capacity (per unit of time or cost, or other resource) follow amazingly precise exponential trajectories.

These trajectories outrun the specific paradigms they are based on (such as Moore's law). But when one paradigm runs out of steam (for example, when engineers were no longer able to reduce the size and cost of vacuum tubes in the 1950s), it creates research pressure to create the next paradigm, and so another S-curve of progress begins.



The exponential portion of that next S-curve for the new paradigm then continues the ongoing exponential of the information technology measure. Thus vacuum tube-based computing in the 1950s gave way to transistors in the 1960s, and then to integrated circuits and Moore's law in the late 1960s, and beyond. Moore's law, in turn, will give way to three-dimensional computing, the early examples of which are already in place. The reason why information technologies are able to consistently transcend the limitations of any particular paradigm is that the resources required to compute or remember or transmit a bit of information are vanishingly small.
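
To make the paradigm-succession idea concrete, here is a minimal illustrative sketch in Python (not from the book): a few logistic S-curves stand in for hypothetical paradigms, each maturing later but with a roughly hundredfold higher ceiling, and the best available capability at any time tracks an approximately exponential envelope. All parameter values are invented for illustration.

```python
import math

def logistic(t, ceiling, midpoint, steepness=1.0):
    """A single paradigm's S-curve: slow start, rapid growth, saturation."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Hypothetical paradigms: each matures later but reaches a ~100x higher ceiling.
paradigms = [
    {"ceiling": 1e3, "midpoint": 10},   # e.g., electromechanical
    {"ceiling": 1e5, "midpoint": 20},   # e.g., vacuum tubes
    {"ceiling": 1e7, "midpoint": 30},   # e.g., transistors
    {"ceiling": 1e9, "midpoint": 40},   # e.g., integrated circuits
]

for t in range(0, 51, 5):
    # Capability available at time t is the best any paradigm delivers.
    capability = max(logistic(t, p["ceiling"], p["midpoint"]) for p in paradigms)
    print(f"t={t:2d}  capability={capability:12.3g}  log10={math.log10(capability):6.2f}")
# The log10 column climbs roughly linearly (i.e., the envelope is roughly
# exponential), pausing briefly as each paradigm saturates and resuming when
# the next takes over; it plateaus at the end only because no fifth paradigm
# is modeled here.
```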

We might wonder, are there fundamental limits to our ability to compute and transmit information, regardless of paradigm? The answer is yes, based on our current understanding of the physics of computation. Those limits, however, are not very limiting. Ultimately we can expand our intelligence trillions-fold based on molecular computing. By my calculations, we will reach these limits late in this century.

It is important to point out that not every exponential phenomenon is an example of the law of accelerating returns. Some observers misconstrue the LOAR by citing exponential trends that are not information-based: For example, they point out, men's shavers have gone from one blade to two to four, and then ask, where are the eight-blade shavers? Shavers are not (yet) an information technology.

In The Singularity Is Near, I provide a theoretical examination, including (in the appendix to that book) a mathematical treatment of why the LOAR is so remarkably predictable. Essentially, we always use the latest technology to create the next. Technologies build on themselves in an exponential manner, and this phenomenon is readily measurable if it involves an information technology. In 1990 we used the computers and other tools of that era to create the computers of 1991; in 2012 we are using current information tools to create the machines of 2013 and 2014. More broadly speaking, this acceleration and exponential growth applies to any process in which patterns of information evolve. So we see acceleration in the pace of biological evolution, and similar (but much faster) acceleration in technological evolution, which is itself an outgrowth of biological evolution.
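
The feedback loop described here, each generation of tools built with the previous generation, can be captured in a one-line recurrence. This is a hedged toy model, not the formal derivation in the appendix of The Singularity Is Near, and the 5 percent improvement rate is an arbitrary placeholder.

```python
# Toy model of the law of accelerating returns: capability W grows in
# proportion to the capability already available to build the next
# generation, i.e. W(t+1) = W(t) * (1 + r), so compound growth follows.
W, r = 1.0, 0.05          # arbitrary starting capability and improvement rate
for year in range(0, 101, 20):
    print(f"year {year:3d}: capability = {W:10.2f}x the starting level")
    W *= (1 + r) ** 20     # advance 20 years at a time
```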

I now have a public track record of more than a quarter of a century of predictions based on the law of accelerating returns, starting with those presented in The Age of Intelligent Machines, which I wrote in the mid-1980s. Examples of accurate predictions from that book include: the emergence in the mid- to late 1990s of a vast worldwide web of communications tying together people around the world to one another and to all human knowledge; a great wave of democratization emerging from this decentralized communication network, sweeping away the Soviet Union; the defeat of the world chess champion by 1998; and many others.

I described the law of accelerating returns, as it is applied to computation, extensively in The Age of Spiritual Machines, where I provided a century of data showing the doubly exponential progression of the price/performance of computation through 1998. It is updated through 2009 below.

I recently wrote a 146-page review of the predictions I made in The Age of Intelligent Machines, The Age of Spiritual Machines, and The Singularity Is Near. (You can read the essay by going to the link in this endnote.)9 The Age of Spiritual Machines included hundreds of predictions for specific decades (2009, 2019, 2029, and 2099). For example, I made 147 predictions for 2009 in The Age of Spiritual Machines, which I wrote in the 1990s. Of these, 115 (78 percent) are entirely correct as of the end of 2009; the predictions that were concerned with basic measurements of the capacity and price/performance of information technologies were particularly accurate. Another 12 (8 percent) are "essentially correct." A total of 127 predictions (86 percent) are correct or essentially correct. (Since the predictions were made specific to a given decade, a prediction for 2009 was considered "essentially correct" if it came true in 2010 or 2011.) Another 17 (12 percent) are partially correct, and 3 (2 percent) are wrong.
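
As a quick arithmetic check on the percentages cited above (a verification of the tally, not additional data):

```python
total = 147          # predictions made for 2009 in The Age of Spiritual Machines
correct = 115        # "entirely correct"
essentially = 12     # "essentially correct"
partially = 17       # "partially correct"
wrong = 3            # "wrong"

assert correct + essentially + partially + wrong == total
for label, n in [("entirely correct", correct),
                 ("essentially correct", essentially),
                 ("correct or essentially correct", correct + essentially),
                 ("partially correct", partially),
                 ("wrong", wrong)]:
    print(f"{label:32s} {n:3d}  ({100 * n / total:.0f}%)")
# Prints 78%, 8%, 86%, 12%, and 2%, matching the figures in the text.
```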

Calculations per second per (constant) thousand dollars of different computing devices.10

Floating-point operations per second of different supercomputers.11

Transistors per chip for different Intel processors.12

Bits per dollar for dynamic random access memory chips.13

Bits per dollar for random access memory chips.14

The average price per transistor in dollars.15

The total number of bits of random access memory shipped each year.16

Bits per dollar (in constant 2000 dollars) for magnetic data storage.17

Even the predictions that were "wrong" were not all wrong. For example, I judged my prediction that we would have self-driving cars to be wrong, even though Google has demonstrated self-driving cars, and even though in October 2010 four driverless electric vans successfully concluded a 13,000-kilometer test drive from Italy to China.18 Experts in the field currently predict that these technologies will be routinely available to consumers by the end of this decade.

Exponentially expanding computational and communication technologies all contribute to the project to understand and re-create the methods of the human brain. This effort is not a single organized project but rather the result of a great many diverse projects, including detailed modeling of constituents of the brain ranging from individual neurons to the entire neocortex, the mapping of the "connectome" (the neural connections in the brain), simulations of brain regions, and many others. All of these have been scaling up exponentially. Much of the evidence presented in this book has only become available recently-for example, the 2012 Wedeen study discussed in chapter 4 that showed the very orderly and "simple" (to quote the researchers) gridlike pattern of the connections in the neocortex. The researchers in that study acknowledge that their insight (and images) only became feasible as the result of new high-resolution imaging technology.

Brain scanning technologies are improving in resolution, spatial and temporal, at an exponential rate. Different types of brain scanning methods being pursued range from completely noninvasive methods that can be used with humans to more invasive or destructive methods on animals.

MRI (magnetic resonance imaging), a noninvasive imaging technique with relatively high temporal resolution, has steadily improved at an exponential pace, to the point that spatial resolutions are now close to 100 microns (millionths of a meter).

A Venn diagram of brain imaging methods.19

Tools for imaging the brain.20

MRI spatial resolution in microns.21

Spatial resolution of destructive imaging techniques.22

Spatial resolution of nondestructive imaging techniques in animals.23

Destructive imaging, which is performed to collect the connectome (map of all interneuronal connections) in animal brains, has also improved at an exponential pace. Current maximum resolution is around four nanometers, which is sufficient to see individual connections.

Artificial intelligence technologies such as natural-language-understanding systems are not necessarily designed to emulate theorized principles of brain function, but rather for maximum effectiveness. Given this, it is notable that the techniques that have won out are consistent with the principles I have outlined in this book: self-organizing, hierarchical recognizers of invariant self-associative patterns with redundancy and up-and-down predictions. These systems are also scaling up exponentially, as Watson has demonstrated.

A primary purpose of understanding the brain is to expand our toolkit of techniques to create intelligent systems. Although many AI researchers may not fully appreciate this, they have already been deeply influenced by our knowledge of the principles of the operation of the brain. Understanding the brain also helps us to reverse brain dysfunctions of various kinds. There is, of course, another key goal of the project to reverse-engineer the brain: understanding who we are.

CHAPTER 11

OBJECTIONS

If a machine can prove indistinguishable from a human, we should award it the respect we would to a human-we should accept that it has a mind.-Stevan Harnad

The most significant source of objection to my thesis on the law of accelerating returns and its application to the amplification of human intelligence stems from the linear nature of human intuition. As I described earlier, each of the several hundred million pattern recognizers in the neocortex processes information sequentially. One of the implications of this organization is that we have linear expectations about the future, so critics apply their linear intuition to information phenomena that are fundamentally exponential.
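
A brief numeric illustration of why linear intuition misleads (a sketch with arbitrary units, assuming a doubling every year as a stand-in for an information-technology trend):

```python
# Linear intuition extrapolates recent absolute gains; exponential trends compound.
start, yearly_gain, doubling_time = 1.0, 1.0, 1.0   # arbitrary units and rates
for years_ahead in (1, 5, 10, 20, 30):
    linear_forecast = start + yearly_gain * years_ahead
    exponential_forecast = start * 2 ** (years_ahead / doubling_time)
    print(f"{years_ahead:2d} years: linear predicts {linear_forecast:6.0f}, "
          f"exponential delivers {exponential_forecast:14,.0f}")
# At 30 years the linear forecast is 31; the exponential trend is over a billion.
```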

I call objections along these lines "criticism from incredulity," in that exponential projections seem incredible given our linear predilection, and they take a variety of forms. Microsoft cofounder Paul Allen (born in 1953) and his colleague Mark Greaves recently articulated several of them in an essay titled "The Singularity Isn't Near" published in Technology Review magazine.1 While my response here is to Allen's particular critiques, they represent a typical range of objections to the arguments I've made, especially with regard to the brain. Although Allen references The Singularity Is Near in the title of his essay, his only citation in the piece is to an essay I wrote in 2001 ("The Law of Accelerating Returns"). Moreover, his article does not acknowledge or respond to arguments I actually make in the book. Unfortunately, I find this often to be the case with critics of my work.

When The Age of Spiritual Machines was published in 1999, augmented later by the 2001 essay, it generated several lines of criticism, such as: Moore's law will come to an end; hardware capability may be expanding exponentially but software is stuck in the mud; the brain is too complicated; there are capabilities in the brain that inherently cannot be replicated in software; and several others. One of the reasons I wrote The Singularity Is Near was to respond to those critiques.

I cannot say that Allen and similar critics would necessarily have been convinced by the arguments I made in that book, but at least he and others could have responded to what I actually wrote. Allen argues that "the Law of Accelerating Returns (LOAR)...is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a lower level. A classic example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, it models each particle as following a random walk, so by definition we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are quite predictable to a high degree of precision, according to the laws of thermodynamics. So it is with the law of accelerating returns: Each technology project and contributor is unpredictable, yet the overall trajectory, as quantified by basic measures of price/performance and capacity, nonetheless follows a remarkably predictable path.
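
The thermodynamics analogy can be made concrete with a small simulation (an illustrative sketch only, not a claim about actual technology-project dynamics): each simulated "particle" follows an unpredictable random walk, yet the ensemble statistics come out tightly predictable.

```python
import random, statistics

random.seed(0)
num_particles, num_steps = 10_000, 100

# Each particle takes unpredictable +1/-1 steps; no individual path can be foretold.
final_positions = []
for _ in range(num_particles):
    position = sum(random.choice((-1, 1)) for _ in range(num_steps))
    final_positions.append(position)

# Yet the aggregate is highly predictable: mean ~0, variance ~num_steps.
print("mean position:", statistics.mean(final_positions))
print("variance:     ", statistics.pvariance(final_positions), "(theory:", num_steps, ")")
```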

If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it's the product of a sufficiently dynamic system of compet.i.tive projects that a basic measure of its price/performance, such as calculations per second per constant dollar, follows a very smooth exponential path, dating back to the 1890 American census as I noted in the previous chapter previous chapter. While the theoretical basis for the LOAR is presented extensively in The Singularity Is Near The Singularity Is Near, the strongest case for it is made by the extensive empirical evidence that I and others present.

Allen writes that "these 'laws' work until they don't." Here he is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining, for example, the trend of creating ever smaller vacuum tubes-the paradigm for improving computation in the 1950s-it's true that it continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price/performance of computation going, and that led to the fifth paradigm (Moore's law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore's law will come to an end. The semiconductor industry's "International Technology Roadmap for Semiconductors" projects seven-nanometer features by the early 2020s.2 At that point key features will be the width of thirty-five carbon atoms, and it will be difficult to continue shrinking them any farther. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, computing in three dimensions, to continue exponential improvement in price/performance. Intel projects that three-dimensional chips will be mainstream by the teen years; three-dimensional transistors and 3-D memory chips have already been introduced. This sixth paradigm will keep the LOAR going with regard to computer price/performance to a time later in this century when a thousand dollars' worth of computation will be trillions of times more powerful than the human brain. At that point key features will be the width of thirty-five carbon atoms, and it will be difficult to continue shrinking them any farther. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, computing in three dimensions, to continue exponential improvement in price/performance. Intel projects that three-dimensional chips will be mainstream by the teen years; three-dimensional transistors and 3-D memory chips have already been introduced. This sixth paradigm will keep the LOAR going with regard to computer price/performance to a time later in this century when a thousand dollars' worth of computation will be trillions of times more powerful than the human brain.3 (It appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.) (It appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.)4 Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near The Singularity Is Near I addressed this issue at length, citing different methods of measuring complexity and capability in software that do demonstrate a similar exponential growth. 
Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near I addressed this issue at length, citing different methods of measuring complexity and capability in software that do demonstrate a similar exponential growth.5 One recent study ("Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology," by the President's Council of Advisors on Science and Technology) states the following:

Even more remarkable-and even less widely understood-is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade.... Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later-in 2003-this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.
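
The factors in the quoted example are easy to verify (a sanity check on the arithmetic, not new data):

```python
# 82 years expressed in minutes, versus roughly 1 minute in 2003.
minutes_in_82_years = 82 * 365.25 * 24 * 60
print(f"total speedup: {minutes_in_82_years:,.0f}x")        # ~43 million
print(f"hardware x algorithms: {1_000 * 43_000:,}x")        # 43,000,000
```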

Note that the linear programming that Grötschel cites above as having benefited from an improvement in performance of 43 million to 1 is the mathematical technique that is used to optimally assign resources in a hierarchical memory system such as HHMM that I discussed earlier. I cite many other similar examples like this in The Singularity Is Near.6

Regarding AI, Allen is quick to dismiss IBM's Watson, an opinion shared by many other critics. Many of these detractors don't know anything about Watson other than the fact that it is software running on a computer (albeit a parallel one with 720 processor cores). Allen writes that systems such as Watson "remain brittle, their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific areas."

First of all, we could make a similar observation about humans. I would also point out that Watson's "specific areas" include all of Wikipedia plus many other knowledge bases, which hardly constitute a narrow focus. Watson deals with a vast range of human knowledge and is capable of dealing with subtle forms of language, including puns, similes, and metaphors in virtually all fields of human endeavor. It's not perfect, but neither are humans, and it was good enough to be victorious on Jeopardy! over the best human players.

Allen argues that Watson was assembled by the scientists themselves, building each link of narrow knowledge in specific areas. This is simply not true. Although a few areas of Watson's data were programmed directly, Watson acquired the significant majority of its knowledge on its own by reading natural-language documents such as Wikipedia. That represents its key strength, as does its ability to understand the convoluted language in Jeopardy! queries (answers in search of a question).

As I mentioned earlier, much of the criticism of Watson is that it works through statistical probabilities rather than "true" understanding. Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. The term "statistical information" in the case of Watson actually refers to distributed coefficients and symbolic connections in self-organizing methods such as hierarchical hidden Markov models. One could just as easily dismiss the distributed neurotransmitter concentrations and redundant connection patterns in the human cortex as "statistical information." Indeed we resolve ambiguities in much the same way that Watson does-by considering the likelihood of different interpretations of a phrase.
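
As an illustration of what "statistical" means in this context, here is a minimal hidden-Markov-style calculation in Python; it is a toy sketch with invented probabilities, not Watson's code or an actual HHMM. Two candidate interpretations (hidden-state sequences) of the same observed words are scored by likelihood, and the more probable reading wins, which is the essence of resolving ambiguity by likelihood.

```python
# Toy disambiguation: score two hidden interpretations of an ambiguous phrase
# by multiplying transition and emission probabilities (all values invented).

transition = {                      # P(next_state | state)
    ("START", "RIVER"): 0.3, ("START", "FINANCE"): 0.7,
    ("RIVER", "RIVER"): 0.8, ("FINANCE", "FINANCE"): 0.8,
    ("RIVER", "FINANCE"): 0.2, ("FINANCE", "RIVER"): 0.2,
}
emission = {                        # P(word | state)
    ("RIVER", "bank"): 0.5,    ("FINANCE", "bank"): 0.5,
    ("RIVER", "deposit"): 0.1, ("FINANCE", "deposit"): 0.6,
}

def sequence_likelihood(states, words):
    """Probability of emitting `words` along the hidden-state path `states`."""
    prob, prev = 1.0, "START"
    for state, word in zip(states, words):
        prob *= transition[(prev, state)] * emission[(state, word)]
        prev = state
    return prob

words = ["bank", "deposit"]
for interpretation in (["RIVER", "RIVER"], ["FINANCE", "FINANCE"]):
    print(interpretation, sequence_likelihood(interpretation, words))
# The FINANCE reading scores higher, so the ambiguity resolves toward it.
```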

Allen continues, "Every structure [in the brain] has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors."

This contention that every structure and neural circuit in the brain is unique and there by design is simply impossible, for it would mean that the blueprint of the brain would require hundreds of trillions of bytes of information. The brain's structural plan (like that of the rest of the body) is contained in the genome, and the brain itself cannot contain more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information contained in the brain, but the same can be said of AI systems like Watson. I show in The Singularity Is Near that, after lossless compression (due to massive redundancy in the genome), the amount of design information in the genome is about 50 million bytes, roughly half of which (that is, about 25 million bytes) pertains to the brain.7 That's not simple, but it is a level of complexity we can deal with and represents less complexity than many software systems in the modern world. Moreover much of the brain's 25 million bytes of genetic design information pertain to the biological requirements of neurons, not to their information-processing algorithms.

How do we arrive at on the order of 100 to 1,000 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through massive redundancy. Dharmendra Modha, manager of Cognitive Computing for IBM Research, writes that "neuroanatomists have not found a hopelessly tangled, arbitrarily connected network, completely idiosyncratic to the brain of each individual, but instead a great deal of repeating structure within an individual brain and a great deal of homology across species.... The astonishing natural reconfigurability gives hope that the core algorithms of neurocomputation are independent of the specific sensory or motor modalities and that much of the observed variation in cortical structure across areas represents a refinement of a canonical circuit; it is indeed this canonical circuit we wish to reverse engineer."8

Allen argues in favor of an inherent "complexity brake that would necessarily limit progress in understanding the human brain and replicating its capabilities," based on his notion that each of the approximately 100 to 1,000 trillion connections in the human brain is there by explicit design. His "complexity brake" confuses the forest with the trees. If you want to understand, model, simulate, and re-create a pancreas, you don't need to re-create or simulate every organelle in every pancreatic islet cell. You would want instead to understand one islet cell, then abstract its basic functionality as it pertains to insulin control, and then extend that to a large group of such cells. This algorithm is well understood with regard to islet cells. There are now artificial pancreases that utilize this functional model being tested. Although there is certainly far more intricacy and variation in the brain than in the massively repeated islet cells of the pancreas, there is nonetheless massive repetition of functions, as I have described repeatedly in this book.
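
To get a rough sense of the scale of redundancy implied by the figures above (simple arithmetic on the numbers already cited, not a precise model):

```python
connections_low, connections_high = 100e12, 1000e12   # interneuronal connections
brain_design_bytes = 25e6                             # compressed genomic design info for the brain
print(f"{connections_low / brain_design_bytes:,.0f} to "
      f"{connections_high / brain_design_bytes:,.0f} connections per byte of design information")
# Roughly 4 million to 40 million connections per byte, which is only possible
# if the wiring is massively repetitive rather than individually specified.
```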

Critiques along the lines of Allen's also articulate what I call the "scientist's pessimism." Researchers working on the next generation of a technology or of modeling a scientific area are invariably struggling with that immediate set of challenges, so if someone describes what the technology will look like in ten generations, their eyes glaze over. One of the pioneers of integrated circuits was recalling for me recently the struggles to go from 10-micron (10,000 nanometers) feature sizes to 5-micron (5,000 nanometers) features over thirty years ago. The scientists were cautiously confident of reaching this goal, but when people predicted that someday we would actually have circuitry with feature sizes under 1 micron (1,000 nanometers), most of them, focused on their own goal, thought that was too wild to contemplate. Objections were made regarding the fragility of circuitry at that level of precision, thermal effects, and so on. Today Intel is starting to use chips with 22-nanometer gate lengths.

We witnessed the same sort of pessimism with respect to the Human Genome Project. Halfway through the fifteen-year effort, only 1 percent of the genome had been collected, and critics were proposing basic limits on how quickly it could be sequenced without destroying the delicate genetic structures. But thanks to the exponential growth in both capacity and price/performance, the project was finished seven years later. The project to reverse-engineer the human brain is making similar progress. It is only recently, for example, that we have reached a threshold with noninvasive scanning techniques so that we can see individual interneuronal connections forming and firing in real time. Much of the evidence I have presented in this book was dependent on such developments and has only recently been available.
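
The genome-project anecdote is itself a small exercise in exponential arithmetic: if the fraction sequenced doubles every year, 1 percent at the halfway mark is right on schedule (an illustrative calculation, assuming annual doubling).

```python
fraction = 0.01          # 1 percent of the genome collected at the halfway point
years = 0
while fraction < 1.0:    # doubling each year: 1% -> 2% -> 4% -> ... -> 100%
    fraction *= 2
    years += 1
print(f"complete after roughly {years} more years of annual doubling")   # ~7
```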

Allen describes my proposal about reverse-engineering the human brain as simply scanning the brain to understand its fine structure and then simulating an entire brain "bottom up" without comprehending its information-processing methods. This is not my proposition. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. From my own work in speech recognition, I know that our work was greatly accelerated when we gained insights as to how the brain prepares and transforms auditory information.

The way that the ma.s.sively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does in fact enable systems to also learn from their own experience. The Google self-driving cars learn from their own driving experience as well as from data from Google cars driven by human drivers; Watson learned most of its knowledge by reading on its own. It is interesting to note that the methods deployed today in AI have evolved to be mathematically very similar to the mechanisms in the neocortex.

Another objection to the feasibility of "strong AI" (artificial intelligence at human levels and beyond) that is often raised is that the human brain makes extensive use of analog computing, whereas digital methods inherently cannot replicate the gradations of value that analog representations can embody. It is true that one bit is either on or off, but multiple-bit words easily represent multiple gradations and can do so to any desired degree of accuracy. This is, of course, done all the time in digital computers. As it is, the accuracy of analog information in the brain (synaptic strength, for example) is only about one level within 256 levels that can be represented by eight bits.
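
A short sketch of the digital-versus-analog point: an "analog" quantity can be represented to any desired precision simply by using more bits (illustrative code, not a model of synaptic biology).

```python
def quantize(value, bits):
    """Represent a value in [0, 1) with a fixed number of bits and reconstruct it."""
    levels = 2 ** bits
    code = int(value * levels)          # the digital representation
    return code / levels                # the reconstructed 'analog' value

analog_value = 0.7071067811865476       # an arbitrary analog quantity
for bits in (4, 8, 16, 32):
    approx = quantize(analog_value, bits)
    print(f"{bits:2d} bits -> {approx:.10f}  (error {abs(analog_value - approx):.1e})")
# Eight bits already gives 256 levels (error below 1/256); more bits shrink the
# error as far as desired.
```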

In chapter 9 I cited Roger Penrose and Stuart Hameroff's objection, which concerned microtubules and quantum computing. Recall that they claim that the microtubule structures in neurons are doing quantum computing, and since it is not possible to achieve that in computers, the human brain is fundamentally different and presumably better. As I argued earlier, there is no evidence that neuronal microtubules are carrying out quantum computation. Humans in fact do a very poor job of solving the kinds of problems that a quantum computer would excel at (such as factoring large numbers). And if any of this proved to be true, there would be nothing barring quantum computing from also being used in our computers.

John Searle is famous for introducing a thought experiment he calls "the Chinese room," an argument I discuss in detail in The Singularity Is Near.9 In short, it involves a man who takes in written questions in Chinese and then answers them. In order to do this, he uses an elaborate rulebook. Searle claims that the man has no true understanding of Chinese and is not "conscious" of the language (as he does not understand the questions or the answers) despite his apparent ability to answer questions in Chinese. Searle compares this to a computer and concludes that a computer that could answer questions in Chinese (essentially passing a Chinese Turing test) would, like the man in the Chinese room, have no real understanding of the language and no consciousness of what it was doing.

There are a few philosophical sleights of hand in Searle's argument. For one thing, the man in this thought experiment is comparable only to the central processing unit (CPU) of a computer. One could say that a CPU has no true understanding of what it is doing, but the CPU is only part of the structure. In Searle's Chinese room, it is the man with his rulebook that constitutes the whole system. That system does have an understanding of Chinese; otherwise it would not be capable of convincingly answering questions in Chinese, which would violate Searle's assumption for this thought experiment.

The attractiveness of Searle's argument stems from the fact that it is difficult today to infer true understanding and consciousness in a computer program. The problem with his argument, however, is that you can apply his own line of reasoning to the human brain itself. Each neocortical pattern recognizer-indeed, each neuron and each neuronal component-is following an algorithm. (After all, these are molecular mechanisms that follow natural law.) If we conclude that following an algorithm is inconsistent with true understanding and consciousness, then we would have to also conclude that the human brain does not exhibit these qualities either. You can take John Searle's Chinese room argument and simply substitute "manipulating interneuronal connections and synaptic strengths" for his words "manipulating symbols" and you will have a convincing argument to the effect that human brains cannot truly understand anything.

Another line of argument comes from the nature of nature, which has become a new sacred ground for many observers. For example, New Zealand biologist Michael Denton (born in 1943) sees a profound difference between the design principles of machines and those of biology. Denton writes that natural entities are "self-organizing,...self-referential,...self-replicating,...reciprocal,...self-formative, and...holistic."10 He claims that such biological forms can only be created through biological processes and that these forms are thereby "immutable,...impenetrable, and...fundamental" realities of existence, and are therefore basically a different philosophical category from machines.

The reality, as we have seen, is that machines can be designed using these same principles. Learning the specific design paradigms of nature's most intelligent entity-the human brain-is precisely the purpose of the brain reverse-engineering project. It is also not true that biological systems are completely "holistic," as Denton puts it, nor, conversely, do machines need to be completely modular. We have clearly identified hierarchies of units of functionality in natural systems, especially the brain, and AI systems are using comparable methods.

It appears to me that many critics will not be satisfied until computers routinely pass the Turing test, but even that threshold will not be clear-cut. Undoubtedly, there will be controversy as to whether claimed Turing tests that have been administered are valid. Indeed, I will probably be among those critics disparaging early claims along these lines. By the time the arguments about the validity of a computer passing the Turing test do settle down, computers will have long since surpassed unenhanced human intelligence.

My emphasis here is on the word "unenhanced," because enhancement is precisely the reason that we are creating these "mind children," as Hans Moravec calls them.11 Combining human-level pattern recognition with the inherent speed and accuracy of computers will result in very powerful abilities. But this is not an alien invasion of intelligent machines from Mars-we are creating these tools to make ourselves smarter. I believe that most observers will agree with me that this is what is unique about the human species: We build these tools to extend our own reach.

EPILOGUE

The picture's pretty bleak, gentlemen...The world's climates are changing, the mammals are taking over, and we all have a brain about the size of a walnut.-Dinosaurs talking, in The Far Side by Gary Larson

Intelligence may be defined as the ability to solve problems with limited resources, in which a key such resource is time. Thus the ability to more quickly solve a problem like finding food or avoiding a predator reflects greater power of intellect. Intelligence evolved because it was useful for survival-a fact that may seem obvious, but one with which not everyone agrees. As practiced by our species, it has enabled us not only to dominate the planet but to steadily improve the quality of our lives. This latter point, too, is not apparent to everyone, given that there is a widespread perception today that life is only getting worse. For example, a Gallup poll released on May 4, 2011, revealed that only "44 percent of Americans believed that today's youth will have a better life than their parents."1 If we look at the broad trends, not only has human life expectancy quadrupled over the last millennium (and more than doubled in the last two centuries),2 but per capita GDP (in constant current dollars) has gone from hundreds of dollars in 1800 to thousands of dollars today, with even more pronounced trends in the developed world.3 Only a handful of democracies existed a century ago, whereas they are the norm today. For a historical perspective on how far we have advanced, I suggest people read Thomas Hobbes's Leviathan (1651), in which he describes the "life of man" as "solitary, poor, nasty, brutish, and short." For a modern perspective, the recent book Abundance (2012), by X-Prize Foundation founder (and cofounder with me of Singularity University) Peter Diamandis and science writer Steven Kotler, documents the extraordinary ways in which life today has steadily improved in every dimension. Steven Pinker's recent The Better Angels of Our Nature: Why Violence Has Declined (2011) painstakingly documents the steady rise of peaceful relations between people and peoples.
American lawyer, entrepreneur, and author Martine Rothblatt (born in 1954) documents the steady improvement in civil rights, noting, for example, how in a couple of decades same-sex marriage went from being legally recognized nowhere in the world to being legally accepted in a rapidly growing number of jurisdictions.4

A primary reason that people believe that life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.

The advancement we have made as a species due to our intelligence is reflected in the evolution of our knowledge, which includes our technology and our culture. Our various technologies are increasingly becoming information technologies, which inherently continue to progress in an exponential manner. It is through such technologies that we are able to address the grand challenges of humanity, such as maintaining a healthy environment, providing the resources for a growing population (including energy, food, and water), overcoming disease, vastly extending human longevity, and eliminating poverty. It is only by extending ourselves with intelligent technology that we can deal with the scale of complexity needed to address these challenges.

These technologies are not the vanguard of an intelligent invasion that will compete with and ultimately displace us. Ever since we picked up a stick to reach a higher branch, we have used our tools to extend our reach, both physically and mentally. That we can take a device out of our pocket today and access much of human knowledge with a few keystrokes extends us beyond anything imaginable by most observers only a few decades ago. The "cell phone" (the term is placed in quotes because it is vastly more than a phone) in my pocket is a million times less expensive yet thousands of times more powerful than the computer all the students and professors at MIT shared when I was an undergraduate there. That's a several billion-fold increase in price/performance over the last forty years, an escalation we will see again in the next twenty-five years, when what used to fit in a building, and now fits in your pocket, will fit inside a blood cell.
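
The "several billion-fold" figure follows directly from the two factors mentioned (a quick arithmetic check, with the implied doubling time computed over the stated forty-year span):

```python
import math

cost_reduction = 1e6          # "a million times less expensive"
power_increase = 1e3          # "thousands of times more powerful"
years = 40

improvement = cost_reduction * power_increase                # ~1e9, i.e. billion-fold
doublings = math.log2(improvement)                           # ~30 doublings
print(f"{improvement:.0e} overall, about {doublings:.0f} doublings, "
      f"one every {12 * years / doublings:.0f} months")
```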

In this way we will merge with the intelligent technology we are creating. Intelligent nanobots in our bloodstream will keep our biological bodies healthy at the cellular and molecular levels. They will go into our brains noninvasively through the capillaries and interact with our biological neurons, directly extending our intelligence. This is not as futuristic as it may sound. There are already blood cell-sized devices that can cure type I diabetes in animals or detect and destroy cancer cells in the bloodstream. Based on the law of accelerating returns, these technologies will be a billion times more powerful within three decades than they are today.

I already consider the devices I use and the cloud of computing resources to which they are virtually connected as extensions of myself, and feel less than complete if I am cut off from these brain extenders. That is why the one-day strike by Google, Wikipedia, and thousands of other Web sites against the SOPA (Stop Online Piracy Act) on January 18, 2012, was so remarkable: I felt as if part of my brain were going on strike (although I and others did find ways to access these online resources). It was also an impressive demonstration of the political power of these sites as the bill-which looked as if it was headed for ratification-was instantly killed. But more important, it showed how thoroughly we have already outsourced parts of our thinking to the cloud of computing. It is already part of who we are. Once we routinely have intelligent nonbiological intelligence in our brains, this augmentation-and the cloud it is connected to-will continue to grow in capability exponentially.

The intelligence we will create from the reverse-engineering of the brain will have access to its own source code and will be able to rapidly improve itself in an accelerating iterative design cycle. Although there is considerable plasticity in the biological human brain, as we have seen, it does have a relatively fixed architecture, which cannot be significantly modified, as well as a limited capacity. We are unable to increase its 300 million pattern recognizers to, say, 400 million unless we do so nonbiologically. Once we can achieve that, there will be no reason to stop at a particular level of capability. We can go on to make it a billion pattern recognizers, or a trillion.

From quantitative improvement comes qualitative advance. The most important evolutionary advance in Homo sapiens was quantitative: the development of a larger forehead to accommodate more neocortex. Greater neocortical capacity enabled this new species to create and contemplate thoughts at higher conceptual levels, resulting in the establishment of all the varied fields of art and science. As we add more neocortex in a nonbiological form, we can expect ever higher qualitative levels of abstraction.

British mathematician Irving J. Good, a colleague of Alan Turing's, wrote in 1965 that "the first ultraintelligent machine is the last invention that man need ever make." He defined such a machine as one that could surpass the "intellectual activities of any man however clever" and concluded that "since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion.'"

The last invention that biological evolution needed to make-the neocortex-is inevitably leading to the last invention that humanity needs to make-truly intelligent machines-and the design of one is inspiring the other. Biological evolution is continuing but technological evolution is moving a million times faster than the former. According to the law of accelerating returns, by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics as applied to computation.5 We call matter and energy organized in this way "computronium," which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. Then, to keep the law of accelerating returns going, we will need to spread out to the rest of the galaxy and universe.

If the speed of light indeed remains an inexorable limit, then colonizing the universe will take a long time, given that the nearest star system to Earth is four light-years away. If there are even subtle means to circumvent this limit, our intelligence and technology will be sufficiently powerful to exploit them. This is one reason why the recent suggestion that the muon neutrinos that traversed the 730 kilometers from the CERN accelerator on the Swiss-French border to the Gran Sasso Laboratory in central Italy appeared to be moving faster than the speed of light was such potentially significant news. This particular observation appears to be a false alarm, but there are other possibilities to get around this limit. We do not even need to exceed the speed of light if we can find shortcuts to other apparently faraway places through spatial dimensions beyond the three with which we are familiar. Whether we are able to surpass or otherwise get around the speed of light as a limit will be the key strategic issue for the human-machine civilization at the beginning of the twenty-second century.
