The Singularity Is Near: When Humans Transcend Biology, Part 7


An obvious tactic is to make the nanobot small enough to glide through the BBB, but this is the least practical approach, at least with nanotechnology as we envision it today. To do this, the nanobot would have to be twenty nanometers or less in diameter, which is about the size of one hundred carbon atoms. Limiting a nanobot to these dimensions would severely limit its functionality.

An intermediate strategy would be to keep the nanobot in the bloodstream but to have it project a robotic arm through the BBB and into the extracellular fluid that lines the neural cells. This would allow the nanobot to remain large enough to have sufficient computational and navigational resources. Since almost all neurons lie within two or three cell-widths of a capillary, the arm would need to reach only about fifty microns. Analyses conducted by Rob Freitas and others show that it is quite feasible to restrict the width of such a manipulator to under twenty nanometers.

Another approach is to keep the nanobots in the capillaries and use noninvasive scanning. For example, the scanning system being designed by Finkel and his associates can scan at very high resolution (sufficient to see individual interconnections) to a depth of 150 microns, which is several times greater than we need. Obviously this type of optical-imaging system would have to be significantly miniaturized (compared to contemporary designs), but it uses charge-coupled device sensors, which are amenable to such size reduction.

Another type of noninvasive scanning would involve one set of nanobots emitting focused signals similar to those of a two-photon scanner and another set of nanobots receiving the transmission. 
The topology of the intervening tissue could be determined by analyzing the impact on the received signal.

Another type of strategy, suggested by Robert Freitas, would be for the nanobot literally to barge its way past the BBB by breaking a hole in it, exit the blood vessel, and then repair the damage. Since the nanobot can be constructed using carbon in a diamondoid configuration, it would be far stronger than biological tissues. Freitas writes, "To pass between cells in cell-rich tissue, it is necessary for an advancing nanorobot to disrupt some minimum number of cell-to-cell adhesive contacts that lie ahead in its path. After that, and with the objective of minimizing biointrusiveness, the nanorobot must reseal those adhesive contacts in its wake, crudely analogous to a burrowing mole."46

Yet another approach is suggested by contemporary cancer studies. Cancer researchers are keenly interested in selectively disrupting the BBB to transport cancer-destroying substances to tumors. Recent studies of the BBB show that it opens up in response to a variety of factors, which include certain proteins, as mentioned above; localized hypertension; high concentrations of certain substances; microwaves and other forms of radiation; infection; and inflammation. There are also specialized processes that ferry out needed substances such as glucose. It has also been found that the sugar mannitol causes a temporary shrinking of the tightly packed endothelial cells to provide a temporary breach of the BBB. By exploiting these mechanisms, several research groups are developing compounds that open the BBB.47 Although this research is aimed at cancer therapies, similar approaches can be used to open the gateways for nanobots that will scan the brain as well as enhance our mental functioning. 
We could bypass the bloodstream and the BBB altogether by injecting the nanobots into areas of the brain that have direct access to neural tissue. As I mention below, new neurons migrate from the ventricles to other parts of the brain. Nanobots could follow the same migration path.

Rob Freitas has described several techniques for nanobots to monitor sensory signals.48 These will be important both for reverse engineering the inputs to the brain and for creating full-immersion virtual reality from within the nervous system.

To scan and monitor auditory signals, Freitas proposes "mobile nanodevices ... [that] swim into the spiral artery of the ear and down through its bifurcations to reach the cochlear canal, then position themselves as neural monitors in the vicinity of the spiral nerve fibers and the nerves entering the epithelium of the organ of Corti [cochlear or auditory nerves] within the spiral ganglion. These monitors can detect, record, or rebroadcast to other nanodevices in the communications network all auditory neural traffic perceived by the human ear." For the body's "sensations of gravity, rotation, and acceleration," he envisions "nanomonitors positioned at the afferent nerve endings emanating from hair cells located in the ... semicircular canals." For "kinesthetic sensory management ... motor neurons can be monitored to keep track of limb motions and positions, or specific muscle activities, and even to exert control." "Olfactory and gustatory sensory neural traffic may be eavesdropped [on] by nanosensory instruments." 
"Olfactory and gustatory sensory neural traffic may be eavesdropped [on] by nanosensory instruments." "Pain signals may be recorded or modified as required, as can mechanical and temperature nerve impulses from ... receptors located in the skin." "Pain signals may be recorded or modified as required, as can mechanical and temperature nerve impulses from ... receptors located in the skin." Freitas points out that the retina is rich with small blood vessels, "permitting ready access to both photoreceptor (rod, cone, bipolar and ganglion) and integrator ... neurons." The signals from the optic nerve represent more than one hundred million levels per second, but this level of signal processing is already manageable. As MIT's Tomaso Poggio and others have indicated, we do not yet understand the coding of the optic nerve's signals. Once we have the ability to monitor the signals for each discrete fiber in the optic nerve, our ability to interpret these signals will be greatly facilitated. This is currently an area of intense research. Freitas points out that the retina is rich with small blood vessels, "permitting ready access to both photoreceptor (rod, cone, bipolar and ganglion) and integrator ... neurons." The signals from the optic nerve represent more than one hundred million levels per second, but this level of signal processing is already manageable. As MIT's Tomaso Poggio and others have indicated, we do not yet understand the coding of the optic nerve's signals. Once we have the ability to monitor the signals for each discrete fiber in the optic nerve, our ability to interpret these signals will be greatly facilitated. This is currently an area of intense research.

As I discuss below, the raw signals from the body go through multiple levels of processing before being aggregated in a compact dynamic representation in two small organs called the right and left insula, located deep in the cerebral cortex. For full-immersion virtual reality, it may be more effective to tap into the already-interpreted signals in the insula rather than the unprocessed signals throughout the body.

Scanning the brain for the purpose of reverse engineering its principles of operation is an easier task than scanning it for the purpose of "uploading" a particular personality, which I discuss further below (see the "Uploading the Human Brain" section, p. 198). In order to reverse engineer the brain, we only need to scan the connections in a region sufficiently to understand their basic pattern. We do not need to capture every single connection.

Once we understand the neural wiring patterns within a region, we can combine that knowledge with a detailed understanding of how each type of neuron in that region operates. Although a particular region of the brain may have billions of neurons, it will contain only a limited number of neuron types. We have already made significant progress in deriving the mechanisms underlying specific varieties of neurons and synaptic connections by studying these cells in vitro (in a test dish), as well as in vivo using such methods as two-photon scanning.



The scenarios above involve capabilities that exist at least in an early stage today. We already have technology capable of producing very high-resolution scans for viewing the precise shape of every connection in a particular brain area, if the scanner is physically proximate to the neural features. With regard to nanobots, there are already four major conferences dedicated to developing blood cell-size devices for diagnostic and therapeutic purposes.49 As discussed in chapter 2, we can project the exponentially declining cost of computation and the rapidly declining size and increasing effectiveness of both electronic and mechanical technologies. Based on these projections, we can conservatively anticipate the requisite nanobot technology to implement these types of scenarios during the 2020s. Once nanobot-based scanning becomes a reality, we will finally be in the same position that circuit designers are in today: we will be able to place highly sensitive and very high-resolution sensors (in the form of nanobots) at millions or even billions of locations in the brain and thus witness in breathtaking detail living brains in action.

Building Models of the Brain

If we were magically shrunk and put into someone's brain while she was thinking, we would see all the pumps, pistons, gears and levers working away, and we would be able to describe their workings completely, in mechanical terms, thereby completely describing the thought processes of the brain. But that description would nowhere contain any mention of thought! It would contain nothing but descriptions of pumps, pistons, levers!-G. W. LEIBNIZ (1646-1716)

How do ... fields express their principles? Physicists use terms like photons, electrons, quarks, quantum wave function, relativity, and energy conservation. Astronomers use terms like planets, stars, galaxies, Hubble shift, and black holes. Thermodynamicists use terms like entropy, first law, second law, and Carnot cycle. Biologists use terms like phylogeny, ontogeny, DNA, and enzymes. Each of these terms is actually the title of a story! The principles of a field are actually a set of interwoven stories about the structure and behavior of field elements.-PETER J. DENNING, PAST PRESIDENT OF THE ASSOCIATION FOR COMPUTING MACHINERY, IN "GREAT PRINCIPLES OF COMPUTING"

It is important that we build models of the brain at the right level. This is, of course, true for all of our scientific models. Although chemistry is theoretically based on physics and could be derived entirely from physics, this would be unwieldy and infeasible in practice. So chemistry uses its own rules and models. We should likewise, in theory, be able to deduce the laws of thermodynamics from physics, but this is a far-from-straightforward process. Once we have a sufficient number of particles to call something a gas rather than a bunch of particles, solving equations for each particle interaction becomes impractical, whereas the laws of thermodynamics work extremely well. The interactions of a single molecule within the gas are hopelessly complex and unpredictable, but the gas itself, comprising trillions of molecules, has many predictable properties.
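
The gas analogy can be made concrete with a toy simulation: any single particle's speed is effectively unpredictable, yet the average over an ensemble is sharply stable. The distribution and units below are illustrative, not physically calibrated:

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

def particle_speed():
    # One particle: a random draw, individually unpredictable.
    return abs(random.gauss(500, 150))  # illustrative units

one = particle_speed()                      # tells us almost nothing
many = [particle_speed() for _ in range(100_000)]
mean = sum(many) / len(many)                # sharply predictable aggregate

# A single particle varies wildly; the ensemble mean sits close to 500.
print(abs(mean - 500) < 5)
```

This is exactly the trade the text describes: we give up tracking each particle (or neuron) and gain reliable laws at the aggregate level.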

Similarly, biology, which is rooted in chemistry, uses its own models. It is often unnecessary to express higher-level results using the intricacies of the dynamics of the lower-level systems, although one has to thoroughly understand the lower level before moving to the higher one. For example, we can control certain genetic features of an animal by manipulating its fetal DNA without necessarily understanding all of the biochemical mechanisms of DNA, let alone the interactions of the atoms in the DNA molecule.

Often, the lower level is more complex. A pancreatic islet cell, for example, is enormously complicated, in terms of all its biochemical functions (most of which apply to all human cells, some to all biological cells). Yet modeling what a pancreas does-with its millions of cells-in terms of regulating levels of insulin and digestive enzymes, although not simple, is considerably less difficult than formulating a detailed model of a single islet cell.

The same issue applies to the levels of modeling and understanding in the brain, from the physics of synaptic reactions up to the transformations of information by neural clusters. In those brain regions for which we have succeeded in developing detailed models, we find a phenomenon similar to that involving pancreatic cells. The models are complex but remain simpler than the mathematical descriptions of a single cell or even a single synapse. As we discussed earlier, these region-specific models also require significantly less computation than is theoretically implied by the computational capacity of all of the synapses and cells.

Gilles Laurent of the California Institute of Technology observes, "In most cases, a system's collective behavior is very difficult to deduce from knowledge of its components.... [N]euroscience is ... a science of systems in which first-order and local explanatory schemata are needed but not sufficient." Brain reverse-engineering will proceed by iterative refinement of both top-to-bottom and bottom-to-top models and simulations, as we refine each level of description and modeling.

Until very recently neuroscience was characterized by overly simplistic models limited by the crudeness of our sensing and scanning tools. This led many observers to doubt whether our thinking processes were inherently capable of understanding themselves. Peter D. Kramer writes, "If the mind were simple enough for us to understand, we would be too simple to understand it."50 Earlier, I quoted Douglas Hofstadter's comparison of our brain to that of a giraffe, the structure of which is not that different from a human brain but which clearly does not have the capability of understanding its own methods. However, recent success in developing highly detailed models at various levels-from neural components such as synapses to large neural regions such as the cerebellum-demonstrates that building precise mathematical models of our brains and then simulating these models with computation is a challenging but viable task once the data capabilities become available. Although models have a long history in neuroscience, it is only recently that they have become sufficiently comprehensive and detailed to allow simulations based on them to perform like actual brain experiments.

Subneural Models: Synapses and Spines

In an address to the annual meeting of the American Psychological Association in 2002, psychologist and neuroscientist Joseph LeDoux of New York University said,

If who we are is shaped by what we remember, and if memory is a function of the brain, then synapses-the interfaces through which neurons communicate with each other and the physical structures in which memories are encoded-are the fundamental units of the self....Synapses are pretty low on the totem pole of how the brain is organized, but I think they're pretty important....The self is the sum of the brain's individual subsystems, each with its own form of "memory," together with the complex interactions among the subsystems. Without synaptic plasticity-the ability of synapses to alter the ease with which they transmit signals from one neuron to another-the changes in those systems that are required for learning would be impossible.51

Although early modeling treated the neuron as the primary unit of transforming information, the tide has turned toward emphasizing its subcellular components. Computational neuroscientist Anthony J. Bell, for example, argues:

Molecular and biophysical processes control the sensitivity of neurons to incoming spikes (both synaptic efficiency and post-synaptic responsivity), the excitability of the neuron to produce spikes, the patterns of spikes it can produce and the likelihood of new synapses forming (dynamic rewiring), to list only four of the most obvious interferences from the subneural level. Furthermore, transneural volume effects such as local electric fields and the transmembrane diffusion of nitric oxide have been seen to influence, responsively, coherent neural firing, and the delivery of energy (blood flow) to cells, the latter of which directly correlates with neural activity. The list could go on. I believe that anyone who seriously studies neuromodulators, ion channels, or synaptic mechanism and is honest, would have to reject the neuron level as a separate computing level, even while finding it to be a useful descriptive level.52

Indeed, an actual brain synapse is far more complex than is described in the classic McCulloch-Pitts neural-net model. The synaptic response is influenced by a range of factors, including the action of multiple channels controlled by a variety of ionic potentials (voltages) and multiple neurotransmitters and neuromodulators. Considerable progress has been made in the past twenty years, however, in developing the mathematical formulas underlying the behavior of neurons, dendrites, synapses, and the representation of information in the spike trains (pulses by neurons that have been activated). Peter Dayan and Larry Abbott have recently written a summary of the existing nonlinear differential equations that describe a wide range of knowledge derived from thousands of experimental studies.53 Well-substantiated models exist for the biophysics of neuron bodies, synapses, and the action of feedforward networks of neurons, such as those found in the retina and optic nerves, and many other classes of neurons.
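
The flavor of the differential-equation models in this class can be conveyed by the simplest of them, the leaky integrate-and-fire neuron: the membrane voltage decays toward a resting level, integrates input current, and emits a spike on crossing a threshold. The sketch below uses illustrative parameter values, not figures from any particular study:

```python
def simulate_lif(current, dt=0.0001, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.054, v_reset=-0.080, resistance=1e7):
    """Leaky integrate-and-fire neuron, integrated by simple Euler steps.

    current: input current (amperes) at each time step of width dt.
    Returns the times (seconds) at which the neuron spiked.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(current):
        # Membrane voltage leaks toward rest and integrates the input.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_thresh:          # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset            # then reset the membrane voltage
    return spike_times

# A steady 2 nA input drives regular spiking; zero input produces none.
spikes = simulate_lif([2e-9] * 5000)   # 0.5 s of simulated time
silent = simulate_lif([0.0] * 5000)
print(len(spikes) > 0, len(silent) == 0)
```

Real published models add the channel dynamics, neuromodulators, and dendritic structure the text mentions; this is only the skeleton they share.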

Attention to how the synapse works has its roots in Hebb's pioneering work. Hebb addressed the question, How does short-term (also called working) memory function? The brain region associated with short-term memory is the prefrontal cortex, although we now realize that different forms of short-term information retention have been identified in most other neural circuits that have been closely studied.

Most of Hebb's work focused on changes in the state of synapses to strengthen or inhibit received signals and on the more controversial reverberatory circuit, in which neurons fire in a continuous loop.54 Another theory proposed by Hebb is a change in state of a neuron itself-that is, a memory function in the cell soma (body). The experimental evidence supports the possibility of all of these models. Classical Hebbian synaptic memory and reverberatory memory require a time delay before the recorded information can be used. In vivo experiments show that in at least some regions of the brain there is a neural response that is too fast to be accounted for by such standard learning models and therefore could only be accomplished by learning-induced changes in the soma.55 Another possibility not directly anticipated by Hebb is real-time changes in the neuron connections themselves. Recent scanning results show rapid growth of dendrite spines and new synapses, so this must be considered an important mechanism. Experiments have also demonstrated a rich array of learning behaviors on the synaptic level that go beyond simple Hebbian models. 
Synapses can change their state rapidly, but they then begin to decay slowly with continued stimulation, or in some cases a lack of stimulation, or many other variations.56 Although contemporary models are far more complex than the simple synapse models devised by Hebb, his intuitions have largely proved correct. In addition to Hebbian synaptic plasticity, current models include global processes that provide a regulatory function. For example, synaptic scaling keeps synaptic potentials from becoming zero (and thus being unable to be increased through multiplicative approaches) or becoming excessively high and thereby dominating a network. In vitro experiments have found synaptic scaling in cultured networks of neocortical, hippocampal, and spinal-cord neurons.57 Other mechanisms are sensitive to overall spike timing and the distribution of potential across many synapses. Simulations have demonstrated the ability of these recently discovered mechanisms to improve learning and network stability.
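
The interplay described here-Hebbian strengthening held in check by multiplicative synaptic scaling-can be sketched in a few lines. The learning rate and target total strength below are illustrative assumptions, not values from the cited experiments:

```python
import random

random.seed(0)

def hebbian_update(weights, pre, post, lr=0.1):
    # Hebbian rule: a synapse strengthens when its presynaptic input
    # and the postsynaptic activity are high together.
    return [w + lr * x * post for w, x in zip(weights, pre)]

def synaptic_scaling(weights, target_total=1.0):
    # Multiplicative scaling: renormalize total synaptic strength so no
    # synapse collapses to zero or grows to dominate the network.
    total = sum(weights)
    return [w * target_total / total for w in weights]

weights = [0.25, 0.25, 0.25, 0.25]
for _ in range(100):
    pre = [random.random() for _ in weights]               # input activity
    post = sum(w * x for w, x in zip(weights, pre))        # unit's response
    weights = synaptic_scaling(hebbian_update(weights, pre, post))

# Relative strengths have shifted with experience, but the total stays
# regulated: unchecked Hebbian growth alone would diverge.
print(abs(sum(weights) - 1.0) < 1e-9)
```

Without the scaling step, the Hebbian loop above is a positive feedback and the weights grow without bound, which is precisely the instability the regulatory mechanism prevents.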

The most exciting new development in our understanding of the synapse is that the topology of the synapses and the connections they form are continually changing. Our first glimpse into the rapid changes in synaptic connections was revealed by an innovative scanning system that requires a genetically modified animal whose neurons have been engineered to emit a fluorescent green light. The system can image living neural tissue and has a sufficiently high resolution to capture not only the dendrites (interneuronal connections) but the spines: tiny projections that sprout from the dendrites and initiate potential synapses.

Neurobiologist Karel Svoboda and his colleagues at Cold Spring Harbor Laboratory on Long Island used the scanning system on mice to investigate networks of neurons that analyze information from the whiskers, a study that provided a fascinating look at neural learning. The dendrites continually grew new spines. Most of these lasted only a day or two, but on occasion a spine would remain stable. "We believe that the high turnover that we see might play an important role in neural plasticity, in that the sprouting spines reach out to probe different presynaptic partners on neighboring neurons," said Svoboda. "If a given connection is favorable, that is, reflecting a desirable kind of brain rewiring, then these synapses are stabilized and become more permanent. But most of these synapses are not going in the right direction, and they are retracted."58

Another consistent phenomenon that has been observed is that neural responses decrease over time if a particular stimulus is repeated. This adaptation gives greatest priority to new patterns of stimuli.

Similar work by neurobiologist Wen-Biao Gan at New York University's School of Medicine on neuronal spines in the visual cortex of adult mice shows that this spine mechanism can hold long-term memories: "Say a 10-year-old kid uses 1,000 connections to store a piece of information. When he is 80, one-quarter of the connections will still be there, no matter how things change. That's why you can still remember your childhood experiences." Gan also explains, "Our idea was that you actually don't need to make many new synapses and get rid of old ones when you learn, memorize. You just need to modify the strength of the preexisting synapses for short-term learning and memory. However, it's likely that a few synapses are made or eliminated to achieve long-term memory."59

The reason memories can remain intact even if three quarters of the connections have disappeared is that the coding method used appears to have properties similar to those of a hologram. In a hologram, information is stored in a diffuse pattern throughout an extensive region. If you destroy three quarters of the hologram, the entire image remains intact, although with only one quarter of the resolution. Research by Pentti Kanerva, a neuroscientist at Redwood Neuroscience Institute, supports the idea that memories are dynamically distributed throughout a region of neurons. This explains why older memories persist but nonetheless appear to "fade," because their resolution has diminished.
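
The holographic property-graceful degradation when most of a distributed code is lost-is easy to demonstrate. The toy model below (a deliberate simplification, not Kanerva's actual sparse distributed memory architecture) stores a pattern redundantly across many noisy units; discarding three quarters of them still recovers the pattern, only with more noise:

```python
import random

random.seed(0)

def store_distributed(pattern, copies=400, noise=0.5):
    # Store the pattern diffusely: each unit holds an independently
    # noisy copy, so no single unit is essential.
    return [[x + random.gauss(0, noise) for x in pattern]
            for _ in range(copies)]

def recall(units):
    # Recall by averaging across the surviving units; noise averages out.
    n = len(units)
    return [sum(u[i] for u in units) / n for i in range(len(units[0]))]

pattern = [1.0, -1.0, 1.0, 1.0, -1.0]
memory = store_distributed(pattern)

full = recall(memory)
degraded = recall(memory[:len(memory) // 4])  # lose 75% of the units

# Every component's sign survives in both cases; the degraded recall is
# simply noisier-the "faded" but intact memory described in the text.
print([x > 0 for x in full] == [x > 0 for x in pattern])
print([x > 0 for x in degraded] == [x > 0 for x in pattern])
```

Averaging over a quarter of the units doubles the noise in the recalled pattern (standard error scales as one over the square root of the unit count), which mirrors the hologram losing resolution rather than content.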

Neuron Models

Researchers are also discovering that specific neurons perform special recognition tasks. An experiment with chickens identified brain-stem neurons that detect particular delays as sounds arrive at the two ears.60 Different neurons respond to different amounts of delay. Although there are many complex irregularities in how these neurons (and the networks they rely on) work, what they are actually accomplishing is easy to describe and would be simple to replicate. According to University of California at San Diego neuroscientist Scott Makeig, "Recent neurobiological results suggest an important role of precisely synchronized neural inputs in learning and memory."61
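
What those delay-tuned neurons accomplish can be replicated as coincidence detection along a bank of delays (the scheme usually credited to Jeffress). A minimal sketch, with hypothetical spike trains standing in for auditory nerve activity:

```python
def best_delay(left_spikes, right_spikes, candidate_delays):
    # Each detector is tuned to one candidate delay: it "fires" when a
    # left-ear spike, shifted by that delay, coincides with a right-ear
    # spike. The most active detector signals the interaural delay.
    right = set(right_spikes)
    counts = {d: sum((t + d) in right for t in left_spikes)
              for d in candidate_delays}
    return max(counts, key=counts.get)

# A sound from one side reaches the right ear 3 time-steps late.
left = [10, 25, 40, 55, 70]
right = [t + 3 for t in left]
print(best_delay(left, right, candidate_delays=range(6)))  # → 3
```

The biological circuit adds the irregularities the text notes, but the underlying computation, reading direction off a population of delay detectors, really is this simple to describe.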

Electronic Neurons. A recent experiment at the University of California at San Diego's Institute for Nonlinear Science demonstrates the potential for electronic neurons to precisely emulate biological ones. Neurons (biological or otherwise) are a prime example of what is often called chaotic computing. Each neuron acts in an essentially unpredictable fashion. When an entire network of neurons receives input (from the outside world or from other networks of neurons), the signaling among them appears at first to be frenzied and random. Over time, typically a fraction of a second or so, the chaotic interplay of the neurons dies down and a stable pattern of firing emerges. This pattern represents the "decision" of the neural network. If the neural network is performing a pattern-recognition task (and such tasks constitute the bulk of the activity in the human brain), the emergent pattern represents the appropriate recognition.

So the question addressed by the San Diego researchers was: could electronic neurons engage in this chaotic dance alongside biological ones? They connected artificial neurons with real neurons from spiny lobsters in a single network, and their hybrid biological-nonbiological network performed in the same way (that is, chaotic interplay followed by a stable emergent pattern) and with the same type of results as an all-biological net of neurons. Essentially, the biological neurons accepted their electronic peers. This indicates that the chaotic mathematical model of these neurons was reasonably accurate.
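
The settle-to-a-stable-pattern dynamic is the signature of attractor networks. A Hopfield network, used here as a simplified stand-in for the spiny-lobster circuit rather than the researchers' actual model, shows the same behavior: a noisy initial firing state relaxes to a stored pattern, which constitutes the recognition:

```python
def train_hopfield(patterns):
    # Hebbian weight matrix: each stored pattern becomes a stable
    # attractor of the network's dynamics.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def settle(w, state, steps=20):
    # Repeatedly update all units until activity stops changing: the
    # initially disordered signaling relaxes to a stable firing pattern.
    n = len(state)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state

stored = [1, 1, -1, -1, 1, -1, 1, -1]   # the pattern to be recognized
w = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1, 1, 1]    # corrupted input: two units flipped
print(settle(w, noisy) == stored)        # relaxes to the stored pattern
```

The hybrid experiment's deeper point survives the simplification: what matters for joining the network is reproducing the dynamics, not the substrate, which is why the biological neurons "accepted their electronic peers."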

Brain Plasticity

In 1861 French neurosurgeon Paul Broca correlated injured or surgically affected regions of the brain with certain lost skills, such as fine motor skills or language ability. For more than a century scientists believed these regions were hardwired for specific tasks. Although certain brain areas do tend to be used for particular types of skills, we now understand that such assignments can be changed in response to brain injury such as a stroke. In a classic 1965 study, D. H. Hubel and T. N. Wiesel showed that extensive and far-reaching reorganization of the brain could take place after damage to the nervous system, such as from a stroke.62 Moreover, the detailed arrangement of connections and synapses in a given region is a direct product of how extensively that region is used. As brain scanning has attained sufficiently high resolution to detect dendritic-spine growth and the formation of new synapses, we can see our brain grow and adapt to literally follow our thoughts. This gives new shades of meaning to Descartes' dictum "I think, therefore I am."

In one experiment conducted by Michael Merzenich and his colleagues at the University of California at San Francisco, monkeys' food was placed in such a position that the animals had to dexterously manipulate one finger to obtain it. Brain scans before and after revealed dramatic growth in the interneuronal connections and synapses in the region of the brain responsible for controlling that finger.

Edward Taub at the University of Alabama studied the region of the cortex responsible for evaluating the tactile input from the fingers. Comparing nonmusicians to experienced players of stringed instruments, he found no difference in the brain regions devoted to the fingers of the right hand but a huge difference for the fingers of the left hand. If we drew a picture of the hands based on the amount of brain tissue devoted to analyzing touch, the musicians' fingers on their left hand (which are used to control the strings) would be huge. Although the difference was greater for those musicians who began musical training with a stringed instrument as children, "even if you take up the violin at 40," Taub commented, "you still get brain reorganization."63

A similar finding comes from an evaluation of a software program, developed by Paula Tallal and Steve Miller at Rutgers University, called Fast ForWord, that assists dyslexic students. The program reads text to children, slowing down staccato phonemes such as "b" and "p," based on the observation that many dyslexic students are unable to perceive these sounds when spoken quickly. Being read to with this modified form of speech has been shown to help such children learn to read. Using fMRI scanning John Gabrieli of Stanford University found that the left prefrontal region of the brain, an area associated with language processing, had indeed grown and showed greater activity in dyslexic students using the program. Says Tallal, "You create your brain from the input you get."

It is not even necessary to express one's thoughts in physical action to provoke the brain to rewire itself. Dr. Alvaro Pascual-Leone at Harvard University scanned the brains of volunteers before and after they practiced a simple piano exercise. The brain motor cortex of the volunteers changed as a direct result of their practice. He then had a second group just think about doing the piano exercise but without actually moving any muscles. This produced an equally pronounced change in the motor-cortex network.64

Recent fMRI studies of learning visual-spatial relationships found that interneuronal connections are able to change rapidly during the course of a single learning session. Researchers found changes in the connections between posterior parietal-cortex cells in what is called the "dorsal" pathway (which contains information about location and spatial properties of visual stimuli) and posterior inferior-temporal cortex cells in the "ventral" pathway (which contains recognized invariant features of varying levels of abstraction);65 significantly, that rate of change was directly proportional to the rate of learning.66

Researchers at the University of California at San Diego reported a key insight into the difference in the formation of short-term and long-term memories. Using a high-resolution scanning method, the scientists were able to see chemical changes within synapses in the hippocampus, the brain region associated with the formation of long-term memories.67 They discovered that when a cell was first stimulated, actin, a neurochemical, moved toward the neurons to which the synapse was connected. This also stimulated the actin in neighboring cells to move away from the activated cell. These changes lasted only a few minutes, however. If the stimulations were sufficiently repeated, then a more significant and permanent change took place.

"The short-term changes are just part of the normal way the nerve cells talk to each other," lead author Michael A. Colicos said.

The long-term changes in the neurons occur only after the neurons are stimulated four times over the course of an hour. The synapse will actually split and new synapses will form, producing a permanent change that will presumably last for the rest of your life. The analogy to human memory is that when you see or hear something once, it might stick in your mind for a few minutes. If it's not important, it fades away and you forget it 10 minutes later. But if you see or hear it again and this keeps happening over the next hour, you are going to remember it for a much longer time. And things that are repeated many times can be remembered for an entire lifetime. Once you take an axon and form two new connections, those connections are very stable and there's no reason to believe that they'll go away. That's the kind of change one would envision lasting a whole lifetime.

"It's like a piano lesson," says coauthor and professor of biology Yukiko Goda. "If you play a musical score over and over again, it becomes ingrained in your memory." Similarly, in an article in Science, neuroscientists S. Lowel and W. Singer report having found evidence for rapid dynamic formation of new interneuronal connections in the visual cortex, which they described with Donald Hebb's phrase "What fires together wires together."68

Another insight into memory formation is reported in a study published in Cell. Researchers found that the CPEB protein actually changes its shape in synapses to record memories.69 The surprise was that CPEB performs this memory function while in a prion state.
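Hebb's principle can be written as a one-line learning rule. The sketch below is a generic rate-based Hebbian update with a decay term; the learning rate, decay constant, and activity vectors are illustrative choices, not values from any of the studies cited.

```python
import numpy as np

# Minimal Hebbian rule ("what fires together wires together"): a weight
# grows in proportion to correlated pre- and postsynaptic activity, and
# slowly decays otherwise.
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    return w + lr * np.outer(post, pre) - decay * w

w = np.zeros((2, 3))                 # 3 input units -> 2 output units
pre = np.array([1.0, 0.0, 1.0])      # input units 0 and 2 fire together...
post = np.array([1.0, 0.0])          # ...with output unit 0
for _ in range(50):                  # repeated co-activation
    w = hebbian_update(w, pre, post)
print(w.round(2))                    # strong weights only where firing coincided
```

After repeated pairings, only the connections between co-active cells have grown; connections involving silent units remain at zero.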

"For a while we've known quite a bit about how memory works, but we've had no clear concept of what the key storage device is," said coauthor and Whitehead Institute for Biomedical Research director Susan Lindquist. "This study suggests what the storage device might be-but it's such a surprising suggestion to find that a prion-like activity may be involved....It ... indicates that prions aren't just oddballs of nature but might participate in fundamental processes." As I reported in chapter 3, human engineers are also finding prions to be a powerful means of building electronic memories.

Brain-scanning studies are also revealing mechanisms to inhibit unneeded and undesirable memories, a finding that would gratify Sigmund Freud.70 Using fMRI, Stanford University scientists asked study subjects to attempt to forget information that they had earlier memorized. During this activity, regions in the frontal cortex that have been associated with memory repression showed a high level of activity, while the hippocampus, the region normally associated with remembering, was relatively inactive. These findings "confirm the existence of an active forgetting process and establish a neurobiological model for guiding inquiry into motivated forgetting," wrote Stanford psychology professor John Gabrieli and his colleagues. Gabrieli also commented, "The big news is that we've shown how the human brain blocks an unwanted memory, that there is such a mechanism, and it has a biological basis. It gets you past the possibility that there's nothing in the brain that would suppress a memory-that it was all a misunderstood fiction."

In addition to generating new connections between neurons, the brain also makes new neurons from neural stem cells, which replicate to maintain a reservoir of themselves. In the course of reproducing, some of the neural stem cells become "neural precursor" cells, which in turn mature into two types of support cells called astrocytes and oligodendrocytes, as well as neurons. The cells further evolve into specific types of neurons. However, this differentiation cannot take place unless the neural stem cells move away from their original source in the brain's ventricles. Only about half of the neural cells successfully make the journey, which is similar to the process during gestation and early childhood in which only a portion of the early brain's developing neurons survive. Scientists hope to bypass this neural migration process by injecting neural stem cells directly into target regions, as well as to create drugs that promote this process of neurogenesis (creating new neurons) to repair brain damage from injury or disease.71

An experiment by genetics researchers Fred Gage, G. Kempermann, and Henriette van Praag at the Salk Institute for Biological Studies showed that neurogenesis is actually stimulated by our experience. Moving mice from a sterile, uninteresting cage to a stimulating one approximately doubled the number of dividing cells in their hippocampus regions.72

Modeling Regions of the Brain

Most probably the human brain is, in the main, composed of large numbers of relatively small distributed systems, arranged by embryology into a complex society that is controlled in part (but only in part) by serial, symbolic systems that are added later. But the subsymbolic systems that do most of the work from underneath must, by their very character, block all the other parts of the brain from knowing much about how they work. And this, itself, could help explain how people do so many things yet have such incomplete ideas on how those things are actually done.-MARVIN MINSKY AND SEYMOUR PAPERT73

Common sense is not a simple thing. Instead, it is an immense society of hard-earned practical ideas-of multitudes of life-learned rules and exceptions, dispositions and tendencies, balances and checks.-MARVIN MINSKY

In addition to new insights into the plasticity of organization of each brain region, researchers are rapidly creating detailed models of particular regions of the brain. These neuromorphic models and simulations lag only slightly behind the availability of the information on which they are based. The rapid success of turning the detailed data from studies of neurons and the interconnection data from neural scanning into effective models and working simulations belies often-stated skepticism about our inherent capability of understanding our own brains.

Modeling human-brain functionality on a nonlinearity-by-nonlinearity and synapse-by-synapse basis is generally not necessary. Simulations of regions that store memories and skills in individual neurons and connections (for example, the cerebellum) do make use of detailed cellular models. Even for these regions, however, simulations require far less computation than is implied by all of the neural components. This is true of the cerebellum simulation described below.

Although there is a great deal of detailed complexity and nonlinearity in the subneural parts of each neuron, as well as a chaotic, semirandom wiring pattern underlying the trillions of connections in the brain, significant progress has been made over the past twenty years in the mathematics of modeling such adaptive nonlinear systems. Preserving the exact shape of every dendrite and the precise "squiggle" of every interneuronal connection is generally not necessary. We can understand the principles of operation of extensive regions of the brain by examining their dynamics at the appropriate level of analysis.

We have already had significant success in creating models and simulations of extensive brain regions. Applying tests to these simulations and comparing the data to that obtained from psychophysical experiments on actual human brains have produced impressive results. Given the relative crudeness of our scanning and sensing tools to date, this success demonstrates the ability to extract the right insights from the mass of data being gathered. The following are only a few examples of successful models of brain regions, all works in progress.

A Neuromorphic Model: The Cerebellum

A question I examined in The Age of Spiritual Machines is: how does a ten-year-old manage to catch a fly ball?74 All that a child can see is the ball's trajectory from his position in the outfield. To actually infer the path of the ball in three-dimensional space would require solving difficult simultaneous differential equations. Additional equations would need to be solved to predict the future course of the ball, and more equations to translate these results into what was required of the player's own movements. How does a young outfielder accomplish all of this in a few seconds with no computer and no training in differential equations? Clearly, he is not solving equations consciously, but how does his brain solve the problem?

Since ASM was published, we have advanced considerably in understanding this basic process of skill formation. As I had hypothesized, the problem is not solved by building a mental model of three-dimensional motion. Rather, the problem is collapsed by directly translating the observed movements of the ball into the appropriate movement of the player and changes in the configuration of his arms and legs. Alexandre Pouget of the University of Rochester and Lawrence H. Snyder of Washington University have described mathematical "basis functions" that can represent this direct transformation of perceived movement in the visual field to required movements of the muscles.75 Furthermore, analysis of recently developed models of the functioning of the cerebellum demonstrates that our cerebellar neural circuits are indeed capable of learning and then applying the requisite basis functions to implement these sensorimotor transformations. When we engage in the trial-and-error process of learning to perform a sensorimotor task, such as catching a fly ball, we are training the synaptic potentials of the cerebellar synapses to learn the appropriate basis functions. The cerebellum performs two types of transformations with these basis functions: going from a desired result to an action (called "inverse internal models") and going from a possible set of actions to an anticipated result ("forward internal models"). Tomaso Poggio has pointed out that the idea of basis functions may describe learning processes in the brain that go beyond motor control.76

The gray and white, baseball-sized, bean-shaped brain region called the cerebellum sits on the brain stem and comprises more than half of the brain's neurons. It provides a wide range of critical functions, including sensorimotor coordination, balance, control of movement tasks, and the ability to anticipate the results of actions (our own as well as those of other objects and persons).77 Despite its diversity of functions and tasks, its synaptic and cell organization is extremely consistent, involving only several types of neurons. There appears to be a specific type of computation that it accomplishes.78 Despite the uniformity of the cerebellum's information processing, the broad range of its functions can be understood in terms of the variety of inputs it receives from the cerebral cortex (via the brain-stem nuclei and then through the cerebellum's mossy fiber cells) and from other regions (particularly the "inferior olive" region of the brain via the cerebellum's climbing fiber cells). The cerebellum is responsible for our understanding of the timing and sequencing of sensory inputs as well as controlling our physical movements.
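The idea of learning basis functions by trial and error can be sketched with a small radial-basis-function network trained by an error-correcting rule. The one-dimensional task, target function, tuning-curve widths, and learning rate below are invented for illustration; they are not drawn from Pouget and Snyder's models.

```python
import numpy as np

# Sketch of a radial-basis-function network learning a direct
# perception-to-action mapping: each "neuron" has a bell-shaped tuning
# curve, and trial-and-error practice tunes the output weights.
rng = np.random.default_rng(0)
centers = np.linspace(0, 1, 20)                 # tuning-curve centers

def basis(x, width=0.08):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

target = lambda x: np.sin(2 * np.pi * x)        # hypothetical "correct" motor command
w = np.zeros(len(centers))
for _ in range(5000):                           # trial-and-error practice
    x = rng.random()                            # a perceived stimulus
    phi = basis(x)
    err = target(x) - w @ phi                   # motor error on this trial
    w += 0.05 * err * phi                       # adjust "synaptic" weights

xs = np.linspace(0.1, 0.9, 9)
mse = np.mean([(target(x) - w @ basis(x)) ** 2 for x in xs])
print(mse)                                      # small after practice
```

After enough practice trials, the weighted sum of tuning curves approximates the sensorimotor mapping without the network ever representing the underlying equations explicitly.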

The cerebellum is also an example of how the brain's considerable capacity greatly exceeds its compact genome. Most of the genome that is devoted to the brain describes the detailed structure of each type of neural cell (including its dendrites, spines, and synapses) and how these structures respond to stimulation and change. Relatively little genomic code is responsible for the actual "wiring." In the cerebellum, the basic wiring method is repeated billions of times. It is clear that the genome does not provide specific information about each repetition of this cerebellar structure but rather specifies certain constraints as to how this structure is repeated (just as the genome does not specify the exact location of cells in other organs).

Some of the outputs of the cerebellum go to about two hundred thousand alpha motor neurons, which determine the final signals to the body's approximately six hundred muscles. Inputs to the alpha motor neurons do not directly specify the movements of each of these muscles but are coded in a more compact, as yet poorly understood, fashion. The final signals to the muscles are determined at lower levels of the nervous system, specifically in the brain stem and spinal cord.79 Interestingly, this organization is taken to an extreme in the octopus, the central nervous system of which apparently sends very high-level commands to each of its arms (such as "grasp that object and bring it closer"), leaving it up to an independent peripheral nervous system in each arm to carry out the mission.80

A great deal has been learned in recent years about the role of the cerebellum's three principal nerve types. Neurons called "climbing fibers" appear to provide signals to train the cerebellum. Most of the output of the cerebellum comes from the large Purkinje cells (named for Johannes Purkinje, who identified the cell in 1837), each of which receives about two hundred thousand inputs (synapses), compared to the average of about one thousand for a typical neuron. The inputs come largely from the granule cells, which are the smallest neurons, packed about six million per square millimeter.
Studies of the role of the cerebellum during the learning of handwriting movements by children show that the Purkinje cells actually sample the sequence of movements, with each one sensitive to a specific sample.81 Obviously, the cerebellum requires continual perceptual guidance from the visual cortex. The researchers were able to link the structure of cerebellum cells to the observation that there is an inverse relationship between curvature and speed when doing handwriting; that is, you can write faster by drawing straight lines instead of detailed curves for each letter.

Detailed cell studies and animal studies have provided us with impressive mathematical descriptions of the physiology and organization of the synapses of the cerebellum,82 as well as of the coding of information in its inputs and outputs, and of the transformations performed.83 Gathering data from multiple studies, Javier F. Medina, Michael D. Mauk, and their colleagues at the University of Texas Medical School devised a detailed bottom-up simulation of the cerebellum. It features more than ten thousand simulated neurons and three hundred thousand synapses, and it includes all of the principal types of cerebellum cells.84 The connections of the cells and synapses are determined by a computer, which "wires" the simulated cerebellar region by following constraints and rules, similar to the stochastic (random within restrictions) method used to wire the actual human brain from its genetic code.85 It would not be difficult to expand the University of Texas cerebellar simulation to a larger number of synapses and cells.
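The "random within restrictions" wiring strategy can be sketched as follows: the program fixes the rules (which cell populations may connect, and each target cell's fan-in) but leaves the particular connections to chance. The cell counts here are toy values, far below the actual simulation's scale.

```python
import numpy as np

# Stochastic wiring within constraints: like the genome, the rules below
# specify the wiring pattern (granule -> Purkinje, fixed fan-in, no
# duplicate synapses) without specifying any individual connection.
rng = np.random.default_rng(1)
n_granule, n_purkinje = 1000, 10
fan_in = 200                        # each Purkinje cell receives 200 inputs

wiring = {
    pc: rng.choice(n_granule, size=fan_in, replace=False)
    for pc in range(n_purkinje)
}

# Every run obeys the same constraints but differs in its details:
print(all(len(set(srcs)) == fan_in for srcs in wiring.values()))
print(sorted(wiring[0])[:5])        # one particular (random) realization
```

Rerunning with a different random seed yields a different but equally valid network, which is exactly the property the text ascribes to genome-constrained brain wiring.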

The Texas researchers applied a classical learning experiment to their simulation and compared the results to many similar experiments on actual human conditioning. In the human studies, the task involved associating an auditory tone with a puff of air applied to the eyelid, which causes the eyelid to close. If the puff of air and the tone are presented together for one hundred to two hundred trials, the subject will learn the association and close the eye upon merely hearing the tone. If the tone is then presented many times without the air puff, the subject ultimately learns to disassociate the two stimuli (to "extinguish" the response), so the learning is bidirectional. After tuning a variety of parameters, the simulation provided a reasonable match to experimental results on human and animal cerebellar conditioning. Interestingly, the researchers found that if they created simulated cerebellar lesions (by removing portions of the simulated cerebellar network), they got results similar to those obtained in experiments on rabbits that had received actual cerebellar lesions.86 On account of the uniformity of this large region of the brain and the relative simplicity of its interneuronal wiring, its input-output transformations are relatively well understood, compared to those of other brain regions. Although the relevant equations still require refinement, this bottom-up simulation has proved quite impressive.
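The acquisition and extinction seen in such conditioning experiments can be mimicked by a minimal prediction-error learning rule (in the style of Rescorla-Wagner). This is a behavioral-level caricature, not the Texas group's cellular simulation; the learning rate and trial counts are arbitrary.

```python
# Minimal associative-conditioning model: the strength of the
# tone -> eyeblink association tracks a running prediction error.
def run_trials(v, n, reinforced, lr=0.05):
    for _ in range(n):
        outcome = 1.0 if reinforced else 0.0   # did the air puff occur?
        v += lr * (outcome - v)                # prediction-error update
    return v

v = 0.0
v = run_trials(v, 150, reinforced=True)    # tone + puff: acquisition
acquired = v                               # near 1.0: tone alone now triggers blink
v = run_trials(v, 150, reinforced=False)   # tone alone: extinction
print(round(acquired, 2), round(v, 2))
```

The association rises toward its maximum during paired trials and decays back toward zero during tone-alone trials, reproducing the bidirectional learning described above.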

Another Example: Watts's Model of the Auditory Regions

I believe that the way to create a brain-like intelligence is to build a real-time working model system, accurate in sufficient detail to express the essence of each computation that is being performed, and verify its correct operation against measurements of the real system. The model must run in real-time so that we will be forced to deal with inconvenient and complex real-world inputs that we might not otherwise think to present to it. The model must operate at sufficient resolution to be comparable to the real system, so that we build the right intuitions about what information is represented at each stage. Following Mead,87 the model development necessarily begins at the boundaries of the system (i.e., the sensors) where the real system is well-understood, and then can advance into the less-understood regions....In this way, the model can contribute fundamentally to our advancing understanding of the system, rather than simply mirroring the existing understanding. In the context of such great complexity, it is possible that the only practical way to understand the real system is to build a working model, from the sensors inward, building on our newly enabled ability to visualize the complexity of the system as we advance into it. Such an approach could be called reverse-engineering of the brain....Note that I am not advocating a blind copying of structures whose purpose we don't understand, like the legendary Icarus who naively attempted to build wings out of feathers and wax. Rather, I am advocating that we respect the complexity and richness that is already well-understood at low levels, before proceeding to higher levels.-LLOYD WATTS88

A major example of neuromorphic modeling of a region of the brain is the comprehensive replica of a significant portion of the human auditory-processing system developed by Lloyd Watts and his colleagues.89 It is based on neurobiological studies of specific neuron types as well as on information regarding interneuronal connection. The model, which has many of the same properties as human hearing and can locate and identify sounds, has five parallel paths of processing auditory information and includes the actual intermediate representations of this information at each stage of neural processing. Watts has implemented his model as real-time computer software which, though a work in progress, illustrates the feasibility of converting neurobiological models and brain connection data into working simulations. The software is not based on reproducing each individual neuron and connection, as is the cerebellum model described above, but rather the transformations performed by each region.

Watts's software is capable of matching the intricacies that have been revealed in subtle experiments on human hearing and auditory discrimination. Watts has used his model as a preprocessor (front end) in speech-recognition systems and has demonstrated its ability to pick out one speaker from background sounds (the "cocktail party effect"). This is an impressive feat of which humans are capable but which until now had not been feasible in automated speech-recognition systems.90 Like human hearing, Watts's cochlea model is endowed with spectral sensitivity (we hear better at certain frequencies), temporal responses (we are sensitive to the timing of sounds, which creates the sensation of their spatial locations), masking, nonlinear frequency-dependent amplitude compression (which allows for greater dynamic range-the ability to hear both loud and quiet sounds), gain control (amplification), and other subtle features. The results it obtains are directly verifiable by biological and psychophysical data.
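One of the listed features, nonlinear amplitude compression, can be illustrated with a simple power-law nonlinearity. The exponent below is a common textbook choice for cochlear-style compression, not a parameter taken from Watts's model.

```python
import numpy as np

# Cochlear-style amplitude compression: a power-law nonlinearity squeezes
# a wide input dynamic range into a much narrower output range, keeping
# both quiet and loud sounds representable.
def compress(amplitude, exponent=0.3):
    return np.power(amplitude, exponent)

quiet, loud = 1e-4, 1.0              # an 80 dB span of input amplitudes
out_q, out_l = compress(quiet), compress(loud)
ratio_in = loud / quiet              # 10,000 : 1 before compression
ratio_out = out_l / out_q            # roughly 16 : 1 after compression
print(ratio_in, round(ratio_out, 1))
```

A four-orders-of-magnitude input range shrinks to little more than one order of magnitude at the output, which is the essence of the "greater dynamic range" the text describes.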

The next segment of the model is the cochlear nucleus, which Yale University professor of neuroscience and neurobiology Gordon M. Shepherd91 has described as "one of the best understood regions of the brain."92 Watts's simulation of the cochlear nucleus is based on work by E. Young that describes in detail "the essential cell types responsible for detecting spectral energy, broadband transients, fine tuning in spectral channels, enhancing sensitivity to temporary envelope in spectral channels, and spectral edges and notches, all while adjusting gain for optimum sensitivity within the limited dynamic range of the spiking neural code."93 The Watts model captures many other details, such as the interaural time difference (ITD) computed by the medial superior olive cells.94 It also represents the interaural-level difference (ILD) computed by the lateral superior olive cells and normalizations and adjustments made by the inferior colliculus cells.95

The Visual System

We've made enough progress in understanding the coding of visual information that experimental retina implants have been developed and surgically installed in patients.97 However, because of the relative complexity of the visual system, our understanding of the processing of visual information lags behind our knowledge of the auditory regions. We have preliminary models of the transformations performed by two visual areas (called V1 and MT), although not at the individual neuron level. There are thirty-six other visual areas, and we will need to be able to scan these deeper regions at very high resolution or place precise sensors to ascertain their functions.

The Singularity Is Near: When Humans Transcend Biology, Part 7. Author: Ray Kurzweil.
