Human. Part 9
This is sounding suspiciously familiar. I submit that it is the left-brain interpreter that is coming up with the theory, the narrative, and the self-image, taking the information from various inputs, from the "neuronal workspace," and from the knowledge structures, and gluing it together, thus creating the self, the autobiography, out of the chaos of input.
Do these knowledge structures about self differ from other knowledge structures? Some neuropsychologists think not much. James Gilligan and Martha Farah at the University of Pennsylvania think that most structures are probably not distinct from processes involving persons in general.49 This actually makes a lot of sense in terms of brain economy. I propose that the left-brain interpreter is uniquely human. It can take information from a wide variety of sources, the same sources that are available to other animals, but it integrates that information in a unique way to create our self-conscious self. There has been a phase shift. The degree to which humans are self-aware is unique.
However, there may be some specialized knowledge structures that we will consider that give our interpreter an edge. First we are going to learn a bit about memory, and then we are going back to patients with lesions that affect the sense of self, to see if we can learn anything more. Remember that the interpreter can use only information that it has available.
Consider the trip to the Côte d'Azur. In proposing such a trip, you are using information that you know about yourself that indicates that you will enjoy the trip. Where is this information coming from? How about your travel partner? Is the same information available about another person, and is it stored as memory in the same place? One fascinating aspect of memory that was noticed several years ago was that if you asked a person if a certain word was self-descriptive, that word would later be remembered better than if you asked about the word in a more general sense. For instance, a person would remember the word kind better if he had been asked, "Are you kind?" than if he had been asked, "What does kind mean?"50 This led researchers to believe that self-knowledge might be stored in a different manner than other information.
Memory stores two basic types of information: procedural and declarative.51 Procedural memory allows one to retain perceptual, motor, and cognitive skills and express them nonconsciously, such as driving a car, riding a bicycle, tying a shoelace, braiding one's hair, and, eventually, playing the piano. Declarative memory is made up of facts and beliefs about the world, such as that the desert is hot in the summer and orange blossoms are fragrant. Neuroscientist Endel Tulving, professor emeritus at the University of Toronto, proposes that there are two types of declarative memory: semantic and episodic.51, 52, 53 Semantic memory is generic: "Just the facts, ma'am, just the facts," not necessarily associated with the source or where or when they were learned. Cairo is the capital of Egypt, 12 squared is 144, and most wine is made from grapes. Semantic memory makes no subjective reference to the self, although it can have facts about the self: "I have green eyes. I was born in Timbuctoo." Semantic memory provides knowledge from the point of view of an observer of the world rather than that of a participant. Episodic memory retains events that were experienced by the self at a particular place and time. "I had a great time at the party last night, and the food was delicious!"
Tulving is continually sculpting the definition of episodic memory as more is known about it. Because he considers episodic memory uniquely human, and since it will be important in our discussion of animal consciousness later, I will quote his most recent sculpting.
Episodic memory is a recently evolved, late-developing, and early-deteriorating brain/mind (neurocognitive) memory system. It is oriented to the past, more vulnerable than other memory systems to neuronal dysfunction, and probably unique to humans. It makes possible mental time travel through subjective time: past, present, and future. This mental time travel allows one, as an "owner" of episodic memory ("self"), through the medium of autonoetic awareness,* to remember one's own previous "thought-about" experiences, as well as to "think about" one's own possible future experiences. The operations of episodic memory require, but go beyond, the semantic memory system. Retrieving information from episodic memory ("remembering") requires the establishment and maintenance of a special mental set, dubbed episodic "retrieval mode." The neural components of episodic memory comprise a widely distributed network of cortical and subcortical brain regions that overlap with and extend beyond the networks subserving other memory systems. The essence of episodic memory lies in the conjunction of three concepts: self, autonoetic awareness, and subjective time.54

By definition, episodic memory always includes the self as the agent or recipient of some action. When a person (let's call her Sarah) remembers an event, she reexperiences it with the awareness that it happened to her: "I remember seeing the Stones last year. They were great!" The major distinction between episodic and semantic memory is not the type of information they encode, but the subjective experience that accompanies the operations of the systems at encoding and retrieval. Sarah could say, "I saw the Stones last year," as a fact, even if she was too drunk to actually remember having done so. Episodic memory is rooted in autonoetic awareness and in the belief that the self having the experience now is the same self that had it originally.
Semantic memory requires only noetic awareness, which is experienced when one thinks objectively about something that one knows. Tulving emphasizes that it is "possible to be noetically aware of one's self, including body position in space, traits, and characteristics, and even autobiographical facts that are not accompanied by a feeling of re-experiencing or reliving the past."
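One way to caricature Tulving's distinction is as two different record types. This is a toy sketch of my own, not a claim about how the brain stores anything: a semantic fact is a bare proposition, while an episodic record binds what, where, and when to a self, together with the autonoetic sense of re-experiencing. All names here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class SemanticFact:
    # "Just the facts, ma'am": no source, place, or time required.
    content: str  # e.g., "Cairo is the capital of Egypt"

@dataclass
class EpisodicRecord:
    # Binds an event to a self at a particular place and time,
    # with the autonoetic sense of re-experiencing it.
    what: str
    where: str
    when: str
    agent: str = "self"
    autonoetic: bool = True  # "I remember," not merely "I know"

# A fact about the self can still be semantic ("I was born in Timbuctoo"),
# because nothing about it is re-experienced:
fact = SemanticFact("I was born in Timbuctoo")

# Sarah's concert memory, by contrast, is episodic:
episode = EpisodicRecord(what="saw the Stones", where="the stadium",
                         when="last year")
```

The point of the sketch is that the two records can hold the same propositional content; what separates them is the extra self/time/autonoetic bindings on the episodic side.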
It is looking as though semantic memory appears earlier in development than episodic memory. Although very young children appear to be able to remember facts and can think about things that are not physically present (that is, they have semantic memory), it is difficult to determine whether they can consciously recollect the past in a way that engages a developed episodic system. Babies who are two years old have been able to demonstrate recall of things that they had witnessed at age thirteen months.55 However, several pieces of evidence support the idea that it isn't until children are at least eighteen months old that they actually include themselves as part of the memory, although this ability tends to be more reliably present in three- to four-year-olds.56, 57 In fact, it appears that children less than four years old have no knowledge of time scales,58, 59 which is why it is never a good idea to tell them that you will be going to Disneyland in two weeks. This later-developing episodic memory explains why there is scant autobiographical memory from our very early years.
Evolutionary psychology theory, however, is not going to be happy with only episodic memory doing all the autobiographical work. It would take way too long when you need "quick and dirty" answers. If our ancestor was presented with the question of whether to chase prey, he needed a fast answer about his capabilities. He couldn't wait around while he remembered every gazelle and warthog that he had ever run after, checked whether his speed and endurance matched theirs, and calculated the probabilities; he needed precomputed and stored answers: "I am fast, strong, and have endurance. Go for it!" or "I am slow, wimpy, and tire easily, and besides that, warthogs are gross. I'll just tell Cronos where it is."
Well, guess what? The semantic system, that "Just the facts, ma'am" system, appears to have a subsystem for personality trait summaries. Stan Klein and Judith Loftus did some tests to tease out whether personality trait summaries were stored separately from episodic memory. Subjects were given pairs of tasks, the first serving as a prime for the second. The first task varied among answering if a trait was self-descriptive ("Are you generous?"), doing a filler task ("Define the word table"), or a control task (which was either looking at a blank screen or defining a trait word: "What does selfish mean?"). Next, if the first task had been answering whether a trait was self-descriptive, the second task was to remember an episode in which the subject had displayed that trait. The experimenters measured the amount of time it took to come up with the remembered episode. If the subjects had seen only a blank screen, they were presented with a new trait and asked to come up with an episode in which they had displayed that trait. The researchers reasoned that if subjects had used episodic memory to come up with an answer about whether a trait was self-descriptive (yes, I'm generous), then they should be faster at describing an episode when they displayed that trait, because they would already have thought of it to answer the first question. However, this isn't what happened. It took subjects just as long to remember an episode of a trait that they had already been asked about as it took to remember an episode of a different trait that had not previously been mentioned. The experimenters concluded that people can answer questions about their personality traits by accessing trait summaries without invoking memories of specific episodes.60 Other research by Klein and Loftus has shown that episodic memory is called in only when there is no trait summary available—for instance, when experience is extremely limited in regard to a specific trait.
This also holds true when making judgments of other people: episodic memory is called upon only when no trait summary exists.61 One patient with total amnesia, who could not remember a single thing he had done or experienced in his life, has been extensively studied. Not only does he have no episodic memory, but his semantic memory has also been partially lost. Although he could not accurately describe the personality of his daughter, he could accurately describe his own personality. He knew some facts about his life, but was missing others. He knew some well-known facts about history, but not others. This patient's pattern of deficits strongly suggests that there is a specific memory architecture for the storage and retrieval of one's own personality traits.
The general trend from studies that have been done on self-referential traits points to left-hemisphere involvement.62 How about the autobiographical episodic memories? Can they be located? The answer to this question has been elusive; some evidence points to one side, some to the other. The picture that is emerging is that aspects of self-knowledge are distributed throughout the cortex, a little here, a little there. There is some evidence that the frontal regions of the left hemisphere play a pivotal role in setting the goal for retrieval and reconstruction of autobiographical knowledge.63, 64, 65

Do split-brain patients help us out at all in locating self processing? Severing the corpus callosum in humans has raised a fundamental question about the nature of the self: Does each disconnected half brain have its own sense of self? Could it be that each hemisphere has its own point of view, its own self-referential system that is truly separate and different from that of the other hemisphere?66 Early observations of split-brain patients indicated that this could be the case.67 There were moments when one hemisphere seemed to be belligerent while the other was calm. There were times when the left hand (controlled by the right hemisphere) behaved playfully with an object that was held out of view while the left hemisphere seemed perplexed about why. However, of the dozens of instances recorded over the years, none allowed for a clear-cut claim that each hemisphere has a full sense of self. Although it has been difficult to study the self per se, there have been intriguing observations about perceptual and cognitive processing relating to the self.
Research has revealed much about the processes and brain structures that support the recognition of familiar others (for example, friends, family members, and movie stars). Both functional imaging and patient studies show that face recognition is typically reliant on structures in the right cerebral hemisphere. For example, we have shown that split-brain patients perform significantly better on tests of face recognition when familiar faces are presented to the right hemisphere rather than the left hemisphere.68 Similarly, damage to specific cortical areas in the right hemisphere impairs the ability to recognize others.69, 70, 71, 72, 73 But is the right hemisphere similarly specialized for self-recognition? Although some support has been garnered for this idea,74, 75, 76 the available evidence is inconclusive. Neuroimaging studies have revealed that highly self-relevant material (for example, autobiographical memories) activates a range of cortical networks in the left hemisphere that could, potentially, support self-recognition and a host of related cognitive functions.77, 78, 79 Therefore, whereas the recognition of familiar others relies primarily on structures in the right hemisphere, self-recognition might be supported by additional left-lateralized cognitive processes. To investigate this possibility, David Turk and colleagues assessed face recognition of self versus a familiar other in a split-brain patient.80 Patient J.W. viewed a series of facial photographs that ranged from 0 percent to 100 percent self-images. A photograph of me (M.G.), a longtime associate of J.W. (that is, a highly familiar other), was used to represent 0 percent self, and a photograph of J.W. was used to represent 100 percent self. Nine additional images were generated using computer-morphing software, each image representing a 10 percent incremental shift from M.G. to J.W. In one condition (self-recognition), J.W. was asked to indicate whether the presented image was he; in the other condition (familiar-other recognition), he was asked to indicate whether the image was M.G. The only difference across the two conditions was the judgment that was required (Is it me? versus Is it Mike?).
The results revealed a double dissociation in J.W.'s face-recognition performance. His left hemisphere showed a bias toward recognizing morphed faces as self, whereas his right hemisphere showed the opposite pattern, a bias in favor of the familiar other. In short, the left hemisphere is quick to detect a partial self-image, even one that is only slightly reminiscent of the self, whereas the right brain needs an essentially full and complete picture of the self before it recognizes the image as such. In the left hemisphere there was, essentially, a linear relationship between the amount of self in the image and the probability of detecting self. The right hemisphere, on the other hand, did not recognize the image as self until the image contained more than 80 percent self. The finding that the left hemisphere requires less self in the image for self-recognition might reflect a key role of the left hemisphere in the retrieval of self-knowledge, or might depend on the left-brain interpreter taking whatever information is available and making a judgment call on the basis of that information. This also fits with the right brain's being more accurate and maximizing information rather than forming a hypothesis ("Wait a minute, that is not me. That nose is not quite right"), while the left brain will frequency-match and hypothesize, "Yep, that's me!"
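The two response patterns can be caricatured in a few lines of code. This is purely an illustrative sketch of the shapes reported (roughly linear for the left hemisphere, near-categorical above about 80 percent for the right); the function names, the exact linear form, and the threshold value are my simplifications, not data from the study.

```python
def left_hemisphere_p_self(percent_self):
    """Roughly linear: P(judged 'self') rises with the amount of self
    in the morphed image."""
    return percent_self / 100.0

def right_hemisphere_p_self(percent_self, threshold=80):
    """Near-categorical: the image is judged 'self' only once it contains
    more than ~80 percent self-content."""
    return 1.0 if percent_self > threshold else 0.0

# The eleven morphs, in 10 percent increments from M.G. (0%) to J.W. (100%)
for m in range(0, 101, 10):
    print(m, left_hemisphere_p_self(m), right_hemisphere_p_self(m))
```

The sketch makes the "interpreter" reading concrete: a hypothesis-forming left hemisphere grades its answer with partial evidence, while a veridical right hemisphere withholds the "self" judgment until the evidence is nearly complete.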
Overall, the data indicate that a sense of self arises out of distributed networks in both hemispheres.80, 81 It is likely that both hemispheres have processing specializations that contribute to a sense of self, and that sense of self is constructed by the left-hemisphere interpreter on the basis of the input from these distributed networks.
ANIMALS AND CONSCIOUSNESS: TO WHAT DEGREE?
This is the question that intrigues many animal researchers. The answer has been elusive. If only they could talk, they would be so much easier to study. To paraphrase Steve Martin,* "Boy, those animals! They don't have a different word for anything!" As I mentioned earlier, there are many levels of consciousness, defined differently by different researchers. It is well accepted that mammals are conscious of the here and now, but the debate begins with the degree of extended consciousness that they possess. The problem is, how can one design an experiment that could demonstrate degrees of consciousness in a nonverbal animal? Come up with the answer to that problem and you have yourself a big fat PhD dissertation.
In order to determine the degree of extended consciousness an animal possesses, one needs to know what counts as extended consciousness. The basic step into extended consciousness is becoming self-aware to some degree. Self-awareness means being the object of one's own attention. Various scientists describe this as ranging from merely being aware of the products of self-perception or environmental stimuli ("I hear a noise," "I feel a thorn") to the ability to conceptualize information about the self, which must be determined abstractly ("I am hip").82 This has led animal researchers to concentrate on two areas: animal self-awareness and animal metacognition (thinking about thinking).
Animal Self-Awareness.
In discussing animal self-awareness, Marc Hauser makes the point that, in evolutionary terms, it pays to treat some members of your own species differently from others only when the discrimination leads to fitness payoffs. Thus it may pay to be able to recognize the opposite sex, or the age of another individual (if they are sexually mature...no use wasting time courting an immature individual), or your own mother, or kin versus non-kin, or other members of your own pack or hive. He tells us, "All social, sexually reproducing organisms seem to be equipped with neural machinery for discriminating males from females, juveniles from adults, and relatives from nonrelatives."83 Many different systems have evolved to help distinguish kin from non-kin. One system that many birds use is imprinting: the first individual they see is Ma. This usually works, but glitches in this system have been the basis of many cartoons. Sweat bees and paper wasps recognize their colony by odor, ground squirrels also use odor for recognition,84 and Mexican free-tail bats recognize their own pup out of thousands through vocal and olfactory communication. These recognition systems use some sensory perception to cue recognition, a match to a specified neural template, but they do not require any self-awareness, any "knowing of self," to work.
Trying to design a test to demonstrate self-awareness in animals has proven difficult. In the past it has been approached from two angles: mirror self-recognition and imitation. Gordon Gallup approached the problem by developing a mirror test, in which he anesthetized chimpanzees, put a red mark on one ear and eyebrow, and then, after they had recovered from the anesthesia, presented them with a full-length mirror. Prior to exposure to the mirror, the chimps didn't touch the red marks, but once the mirror was presented, they did. After being left with the mirror for a while, they began to look at visibly inaccessible parts of their bodies.85 Not all chimps exhibit mirror self-recognition (MSR), however.86 Later experiments have shown that MSR develops in some, but not all, chimps around puberty, is present to a lesser degree in older chimps,87 and in fact may deteriorate over time.88 Orangutans also show MSR, but only a rare gorilla possesses it.89, 90 Two dolphins91 (with a few questions still to be addressed concerning differences in testing procedures92) and one out of the five Asian elephants that have been tested in two different studies have also passed the mark test.93, 94 That's it, folks.
No other animal species has yet been found that exhibits MSR. This is why your dog isn't all that interested when you try to get him to look in the mirror. Children have MSR and pass the mark test by age two.95 Gallup has suggested that mirror self-recognition implies the presence of a self-concept and self-awareness.96 This sounds like a reasonable test until Robert Mitchell, a psychologist at Eastern Kentucky University, chimes in by asking, What degree of self-awareness is demonstrated by recognizing oneself in the mirror? Mitchell points out that MSR requires only an awareness of the body, rather than any abstract concept of self.97 There is no need to invoke anything more than matching sensation to visual perception; attitudes, values, intentions, emotion, and episodic memory are not required to recognize one's body in the mirror. A chimp looks down and sees his arm and wills it to move. It moves. He sees it move in the mirror. No grand concept of self is needed.

Mitchell divides the self into three levels: The implicit self, a point of view that experiences, acts, and, in the case of mammals and birds, has emotions and feelings. A hamster is hungry, and can experience eating and can like eating, but it probably doesn't know that it likes to eat.
The self built upon kinesthetic-visual matching, which leads to MSR, the first step to imitation, pretense, planning, self-conscious emotion, and imaginative experience.
The self built on symbols, language, and artifacts, which provides support for shared cultural beliefs, social norms, inner speech, dissociation, and evaluation by others, as well as self-evaluation.98

Another problem with the MSR test is that some patients with prosopagnosia (the inability to recognize faces) cannot recognize themselves in a mirror. They think they are seeing someone else. However, they do have a sense of self, which is why the problem is so distressing to them. The absence of MSR, then, doesn't necessarily mean the absence of self-awareness. So although the MSR test can indicate a degree of self-awareness, it is of limited value in evaluating just how self-aware an animal is. It does not answer the question of whether an animal is aware only of its visible self or whether it is aware of unobservable features. Povinelli and Cant have suggested that a sense of physical self-awareness in nonhuman primates may have evolved in large arboreal primates to meet the challenges of crossing gaps between trees, where their weight was an issue in selecting a route.99 Knowing that they had a body and that only certain structures could support it provided a survival advantage.
If one can imitate another's actions, then one is capable of distinguishing between one's own actions and the other's. The ability to imitate is used as evidence for self-recognition in developmental studies of children. We have seen in chapter 5 that there is sparse evidence for imitation in the animal world. Josep Call has summarized the research, concluding that most of the evidence in primates points to the ability to reproduce the result of an action, not to imitate the action itself.100

Tulving's suggestion that episodic memory, which by definition includes an awareness of self and the ability to project oneself into the past or future, is uniquely human has also become a focus in the search for self-awareness. If an animal can demonstrate a capacity for episodic memory, then it must have a concept of self. Tulving outlines the challenges and pitfalls of identifying episodic memory in animals. Much research on animal memory has been concerned with perceptual memory, which doesn't require declarative memory. Even when some tests require more than perceptual memory, they can be performed successfully using declarative semantic memory without episodic memory.
Many previous studies have assumed that animals have episodic memory when they demonstrated certain behaviors. These studies, however, did not separate memory for facts, which would be semantic memory, from memory for events. Episodic memory tests require the subject to answer what, where, and when (the when has been lacking in most tests), and then one final question that is the most difficult to study: Is the animal remembering the experience with an attached emotional component, or does it merely know that it happened? (This is the difference between knowing when you were born and remembering the experience of your birth, or knowing that one eats every day and remembering the experience of a particular meal.) The problem has been figuring out how to approach that experiential aspect. In humans, we can just ask, although even this does not always give accurate information, because we have the know-it-all interpreter providing the answers. Animal studies have had to focus on behavioral criteria. It has taken years to understand that much of what we do is not under conscious control, even though we thought that it was, so attributing conscious action to animals is also going to be tempting but needs to be rigorously evaluated.
Povinelli and his colleagues did an interesting study with children that revealed a developmental difference in semantic and episodic memory.101 First he unobtrusively put stickers on the foreheads of two-, three-, and four-year-olds while they were playing a game. Three minutes later, he showed them either a video of this action or a Polaroid picture of it to find out whether what a child learned about a past experience could be assimilated into the present. About 75 percent of the four-year-olds immediately reached up and pulled the sticker off, while none of the two-year-olds and only 25 percent of the three-year-olds did so. However, when he handed the two- and three-year-olds a mirror and they glimpsed themselves, they all immediately pulled off the stickers. The researchers suggested that the difference in reaction to live versus delayed feedback in the different age groups indicated a developmental lag between the development of a self-concept and a self-concept that includes temporal continuity. Specifically, children may not assume that their currently experienced state is determined by previous states. The two- to three-year-olds were not yet able to project themselves into the past, not yet able to time-travel. This is further indication that possessing MSR is not evidence for the possession of episodic memory and full self-awareness, and that semantic and episodic memory develop separately.
Thomas Suddendorf, a psychologist at the University of Queensland, Australia, and Michael Corballis, from the University of Auckland, New Zealand, make the interesting point that episodic memory and mental time travel involve many cognitive abilities. It is not just a single module doing its thing. Thus, in order to establish whether episodic memory is present in other species, one must show that they possess all the required cognitive abilities. What are these? Beyond some level of self-awareness, they must have an imagination able to reconstruct the order of events, must be able to metarepresent their knowledge (to be able to think about thinking), and must be able to dissociate from their current mental state (I am not hungry now, but I may be in the future). Episodic memory also requires that an animal understand the perception-knowledge contingency, that is, that seeing is knowing: I know that because Susan has her eyes covered, she cannot see me; or I know that because Ann is not in the room, she did not see Sally move the ball to a new place. It also requires the ability to attribute past mental states to one's earlier self: I used to think the candy was in the blue box, but now I know that it is in the red one. These systems aren't up and running in children until age four. Included in these cognitive abilities is a concept derived from the Bischof-Kohler hypothesis, which states, "Animals other than humans cannot anticipate future needs or drive states and are therefore bound to a present that is defined by their current motivational state."102 That means that if an animal is not hungry now, it is unable to plan for actions in the near future that involve eating; it cannot uncouple or dissociate from its current motivation (to lie down, perhaps) in order to plan for something that would be the result of a different motivational state.
The idea that "animals may be stuck in time," as suggested by a comprehensive review of animal memory studies done by William Roberts,103 a psychologist at the University of Western Ontario, seems a little farfetched when you think about how your dog "knows" it is 7:00 P.M. and time for his walk, or waits at the door for you to get home from work every day at 5:30. Or how about all those dang birds that have the intelligence to head south for the winter while you are crazy enough to stay in Buffalo, or bears eating their fill all summer and holing up for the winter? They seem to understand time and are planning ahead. These abilities turn out to be regulated by internal cues that have to do with circadian rhythms rather than a concept of time. A bear that hibernates for the first time cannot be planning ahead for the long cold winter: It doesn't even know that there are long cold winters.
The Search for Episodic Memory in Animals.
Some of the most tantalizing sets of animal studies looking for episodic memory have been done by Nicola Clayton and Anthony Dickinson, professors at the University of Cambridge, studying scrub jays.104, 105, 106, 107, 108 What was different about their studies was that they were designed to determine whether the jays were answering the what, where, and when questions about multiple episodes that were unique in time and flexibly recalled. More recently, the jays have even been answering the who question. Thus they are using multiple components of an event, not just a single bit of information.
You may have been inadvertently using a misguided epithet when you referred to the annoying person on the phone or in traffic as a birdbrain. While most of us have been going about our daily lives, working, enjoying our vacations, and worrying about our taxes, there has been a revolution going on in the study of bird brains. I am not kidding! There has been a major change in the understanding of bird-brain anatomy and its neural connections, which has led to new ideas about the structure and function of parts of the avian brain.109 While birds lack the neocortical structure of mammals, they have many brain structures that serve the same purpose as mammalian brain structures, and have similar thalamic-cortical loop connections.110 This has led to the realization that some species of birds have a lot more going on upstairs than had previously been thought. The presence of loop connections similar to those proposed to allow extended consciousness in humans leads to the hypothesis that they perform the same operation in birds, providing them with some level of extended consciousness. This actually should come as no surprise to anyone who has spent much time watching ravens, crows, jays, or some species of parrots.
So, back to the scrub jays: Clayton, a former colleague of mine when we were both at the University of California, Davis, found that Florida scrub jays (Aphelocoma coerulescens) will cache different types of food in different places, at different times, and will selectively retrieve food that degrades, eating that before retrieving and eating food that stores well. Her birds fulfill the when, what, and where questions, and are flexible. What is still not answered is whether the knowledge is semantic or experiential. All the jay is really demonstrating is that it can update its knowledge, as psychologist Bennett Schwartz maintains; it is like the memory of where one's keys are. Clayton calls it episodic-like memory because of this problem.111 Another tantalizing finding is that jays adjust their caching strategies to minimize potential stealing by other birds. If an individual jay (let's call him Buzz) had stolen food from another's cache in the past, and if while Buzz was caching his food he was observed by another jay, then after that other bird was removed, Buzz would recache his food in private. Not only that, Buzz also keeps track of who is observing him cache. If it is a dominant bird, he is more likely to rehide his food in private than if it is his mate or a subordinate bird. He is also less likely to recache his food if a new jay appears who had not watched him hide food previously.112 However, if Buzz had never stolen food from another jay in the past, then he would not recache his food even though his caching had been observed. These results indicate that recaching depends on previous experience as a thief.113 Walking on the wild side, Clayton and her coworkers suggest that maybe these scrub jays are showing evidence of knowing what another jay knows: theory of mind.
You may recall from chapter 2 the studies that revealed planning behavior in orangutans and bonobos, done by Mulcahy and Call.114 These provide the best evidence so far that imaginary time travel is not unique to humans. These were the studies that demonstrated future planning of tool use, in which the subject carried a tool from one room to another for use up to fourteen hours later. The authors concluded: Because traditional learning mechanisms or certain biological predispositions appear insufficient to explain our current results, we propose that they represent a genuine case of future planning. Subjects executed a response (tool transport) that had not been reinforced during training, in the absence of the apparatus or the reward, that produced no consequences or reduced any present needs but was crucial to meet future ones. The presence of future planning in both bonobos and orangutans suggests that its precursors may have evolved before 14 Ma* in the great apes. Together with recent evidence from scrub jays our results suggest that future planning is not a uniquely human ability.
Suddendorf agrees that these findings are very suggestive, but points out that the researchers did not measure or control subjects' motivational states. He thinks, "Although the data suggest anticipation of the future need for a tool, they do not necessarily imply anticipation of a future state of mind."115 It seems that the quest for nonhuman episodic memory is still afoot, and designing tests that can demonstrate it is the current stumbling block, although they are slowly being improved upon.
Do Animals Think About What They Know?
While most research on animals has been concentrating on the theory-of-mind question and what an animal knows about another's knowledge, little has been done on what an animal knows about its own knowledge. A newer approach in looking for self-reflective consciousness has been to look for metacognition, or thinking about thinking, which is awareness of one's own mental operations. Do animals think about what they know? This is another difficult question to study.
One approach has been through the testing of uncertainty. Humans know when they don't know something, or when they are unsure of something. J. David Smith, a psychologist at the State University of New York at Buffalo, thought that designing a test that included uncertainty might demonstrate metacognition in animals. He designed a visual density test in which rhesus monkeys and humans used a joystick to move a cursor to one of three objects on a computer screen.116 They were to judge whether a box was densely lit (exactly 2,950 pixels) or sparsely lit (any fewer). They could pick the "dense" response, the "sparse" response, or the "uncertain" response, which was represented by a star on the screen. If they picked the star, they went automatically to a new guaranteed-win trial. The difficulty of making the discrimination gradually increased, until most faltered at about the 2,600-pixel level. Interestingly, the monkeys' and the humans' responses were much the same. After the test, the humans verbally reported that when they had guessed that the screen was either sparse or dense, their answers were dependent on the visual stimulus; however, when they chose the uncertain response, it was because they had personal feelings of uncertainty and doubt: "I was uncertain," "I didn't know," or "I couldn't tell." Smith concluded that the "uncertain" response in humans might reveal not only metacognitive monitoring but also a reflexive awareness of the self as cognitive monitor.
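The structure of the three-response task can be sketched as a toy decision rule. (The criterion and band values below are my illustrative assumptions, not parameters from Smith's actual study.)

```python
DENSE = 2950  # a "dense" box was exactly 2,950 lit pixels; "sparse" boxes had fewer

def respond(pixels, criterion=2750, band=150):
    """Toy three-choice rule for the density task.

    Answer "dense" well above the criterion, "sparse" well below it,
    and "uncertain" (the star) inside the band around it, where the
    discrimination is hardest. Numbers are assumed for illustration.
    """
    if pixels > criterion + band:
        return "dense"
    if pixels < criterion - band:
        return "sparse"
    return "uncertain"  # in the study, this led to a new guaranteed-win trial

print(respond(2950))  # dense
print(respond(2000))  # sparse
print(respond(2700))  # uncertain
```

With these assumed values, the toy rule returns "uncertain" right around the 2,600-pixel region where both species faltered.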
A similar study has been done with a male bottlenose dolphin using an auditory discrimination test. The dolphin had to press a high paddle for the high-pitched tone (2,100 Hz), a low paddle for any other tone, and a third paddle if he was uncertain. This third paddle was picked when the tone approached 2,085 Hz or higher. When responding with certainty, the dolphin swam quickly to a paddle; when he was not certain, he swam more slowly and wavered between the paddles.117 The demonstration that animals had an uncertainty response and used it in situations similar to those in which humans demonstrated uncertainty was interpreted to mean that monkeys and dolphins have metacognition.
Reactions to this suggestion have been varied, with some agreement and some skepticism.118 The problem is in the original assumption that the humans were thinking about thinking when they made their uncertain response. I don't think metacognition came into the picture until they were asked about their response. That is when the left-hemisphere interpreter revved up to explain their response. The choice was powered by emotional responses to the stimuli, the old approach/don't-approach response. The problem comes from the assumption that humans were using higher cognition when they may not have been. Philosopher Derek Browne, from the University of Canterbury, Christchurch, New Zealand, has a similar take in discussing the results of the dolphin study. He suggests that it isn't until the postexperimental probe (or question) is applied that human subjects apply psychological concepts to their own earlier performances.119 The latest tests have been done with rats by Allison Foote and Jonathon Crystal at the University of Georgia. First their rats heard either a short sound or a long one. Next, for a reward, the rats had to pick whether the recent noise had been short or long. This was easy unless they were given sounds that were intermediate in length. If the rat was correct, it got a big food reward, and if it was wrong, zilch. However, before it was given the choice, the rat could opt out of the test and get a small food reward. Sometimes, however, it was not allowed to opt out but was forced to make a choice. Two interesting things happened. The more difficult it was to distinguish the sounds, the more frequently the rats opted out of the test when they could. And second, as you would expect, test accuracy declined as the difficulty of the time-discrimination task increased, but this decline in accuracy was greater when rats were forced to take the test.
The findings suggest that rats could assess whether they were going to pass a test on a trial-by-trial basis.120 They knew what they knew about the length of the sound.
Josep Call has approached metacognition from a different angle. He has provided his subjects with incomplete information to solve a problem, in order to find out whether they would seek additional information: Would they know that they did not know enough to solve a problem? He tested orangutans, gorillas, chimps, bonobos, and two-and-a-half-year-old children.121, 122 He had two opaque tubes. He put a treat in one, either while the subject could see him do it or while he was hidden behind a screen. Then he let the subjects pick the tube they wanted, either right away or after a time delay. The question was, when they didn't have enough information as to which tube had the treat inside, would they seek more information before choosing a tube? They did! In fact, in many of the trials, after the apes looked in one tube and saw that it was empty, they chose the second tube without checking it out first. They inferred that the other tube had the treat. They were better at this than the children. Preventing the apes from immediately choosing increased the looking behavior and obviously their success. However, this did not change the behavior of the children. Call suggested that it "is likely that apes were more successful in the delayed situation because they did not have to inhibit the powerful responses elicited by the prospect of getting the reward."122 As we have learned before, inhibition is not high on the list of chimpanzee behavioral traits.
Call is very cautious about his conclusions as to what this study reveals about the cognition of great apes and whether metacognition is involved. The debate is whether they are using a fixed hardwired rule, such as "Search until you find food"; a fixed rule learned from a specific experience, like "Bend down in the presence of a barrier"; or a flexible rule based on knowledge accumulation created through multiple experiences, none of which were the same as the one now being presented, such as "When my visual access is blocked, then do something appropriate to gain visual access." Call is inclined toward the last explanation in his current pursuit of this question.
Can anatomy help us at all? Maybe. If we knew exactly what the neural correlates of human consciousness were, which we don't, then we could see if their equivalents exist in other species. It appears that long-range connection loops are necessary. As I said before, these have been identified in bird brains, and also in other primates. Although much more work in comparative anatomy still needs to be done, we have a problem when we compare anatomy: It is not the same thing as comparing function. There may be more than one way to skin a cat; that is, there may be neural solutions or routes to consciousness other than those in the human brain, which could result in different types of consciousness.
So, currently we are left with Antonio Damasio's conclusions. Some animals have some degrees of extended consciousness, but what animals possess it and to what extent is still unknown. There appears to be some degree of body self-awareness in a very limited number of species, but even as new ways for testing such abilities are designed, the many brains that evaluate the tests continue to poke holes in their validity and also their interpretation. Current evidence suggests that animals do not have episodic memory and do not time-travel, but we are going to have to keep our eyes on Nicola Clayton and her scrub jays. The latest studies looking for evidence of animal metacognition in rats are tantalizing but still need refining before definite conclusions can be drawn.
CONCLUSION.
I was recently asked by a Time magazine reporter, "If we could build a robot or an android that duplicated the processes behind human consciousness, would it actually be conscious?" It is a provocative question and it is one that persists, especially as one tries to capture the differences between the spheres of consciousness of animals and also those that exist between separated left and right brains. Much of what I have written here about bisected brains has appeared before. Yet, I find that the way we all nuance our understanding of complex topics is ever changing, since none of us hold the true answers in our hip pocket. I found myself answering the reporter with what I feel is a new twist.
Underlying this question is the assumption that consciousness reflects some kind of process that brings all of our zillions of thoughts into a special energy and reality called personal or phenomenal consciousness. That is not how it works. Consciousness is an emergent property and not a process in and of itself. When one tastes salt, for example, the consciousness of taste is an emergent property of the sensory system, not of the combination of elements that make up table salt. Our cognitive capacities, memories, dreams, and so on reflect distributed processes throughout the brain, and each of those entities produces its own emergent states of consciousness.
In closing, remember this one fact. A split-brain patient, a human who has had the two halves of his brain disconnected from each other, does not find one side of the brain missing the other. The left brain has lost all consciousness about the mental processes managed by the right brain, and vice versa. It is just as with aging or with focal neurologic disease. We don't miss what we no longer have access to. The emergent conscious state arises out of each capacity and probably through neural circuits local to the capacity in question. If they are disconnected or damaged, there is no underlying circuitry from which the emergent property arises.
The thousands or millions of conscious moments that we each have reflect one of our networks being "up for duty." These networks are all over the place, not in one specific location. When one finishes, the next one pops up. The pipe-organ-like device plays its tune all day long. What makes emergent human consciousness so vibrant is that our pipe organ has lots of tunes to play, whereas the rat's has few. And the more we know, the richer the concert.
Part 4.
BEYOND CURRENT CONSTRAINTS.
Chapter 9.
WHO NEEDS FLESH?.
The principles now being discovered at work in the brain may provide, in the future, machines even more powerful than those we can at present foresee.
-J. Z. Young, Doubt and Certainty in Science: A Biologist's Reflections on the Brain, 1960.
Men ought to know that from the brain, and from the brain only, arise our pleasures, joy, laughter and jests, as well as our sorrows, pains, griefs, and tears.
-Hippocrates, c. 400 B.C.
I am a fyborg, and so are you. Fyborgs, or functional cyborgs, are biological organisms functionally supplemented with technological extensions.1 For instance, shoes. Wearing shoes has not been a problem for most people. In fact, it has solved many problems, such as walking on gravelly surfaces, avoiding thorns in the foot, walking at high noon across an asphalt parking lot on a June day in Phoenix, or a January day in Duluth, and shoes have prevented over one million stubbed toes in the last month. In general, no one is going to get upset about the existence and use of shoes. Man's ingenuity came up with a tool to make life easier and more pleasant. After the inventors and engineers were done with the concept, the basic design, and product development, the aesthetics department took over, cranked it around a bit, and came up with high heels. Perhaps not so utilitarian, but they serve a different, more specific purpose: to get across that parking lot looking sexy.
Wearing clothes has also been well accepted. They provide protection both from the cold and the sun, from thorns and brush, and can cover up years' worth of unsightly intake errors. Watches, a handy tool, are used by quite a few people without any complaint, and are now usually run by a small computer worn on the wrist. Eyeglasses and contact lenses are common. There was no big revolution when those were introduced. Cell phones seem to be surgically attached to the palms of teenagers and, for that matter, most everyone else. Fashioning tools that make life easier is what humans have always done. For thousands of years, we humans have been fyborgs, a term coined by Alexander Chislenko, who was an artificial-intelligence theorist, researcher, and software designer for various private companies and MIT. The first caveman who slapped a piece of animal hide across the bottom of his foot and refused to leave home without it became a fyborg to a limited degree. Chislenko devised a self-test for functional cyborgization: Are you dependent on technology to the extent that you could not survive without it?
Would you reject a lifestyle free of any technology even if you could endure it?
Would you feel embarra.s.sed and "dehumanized" if somebody removed your artificial covers (clothing) and exposed your natural biological body in public?
Do you consider your bank deposits a more important personal resource storage system than your fat deposits?
Do you identify yourself, and judge other people, more by possessions, ability to manipulate tools, and position in the technological and social systems than by primary biological features?
Do you spend more time thinking about-and discussing-your external "possessions" and "accessories" than your internal "parts"?1 I don't know about you, but I would much rather hear about my friend's new Maserati than his liver. Call me a fyborg any day.
Cyborgs, on the other hand, have a physical integration of biological and technological structures. And we now have a few in our midst. Going beyond the manufacture of tools, humans have gotten into the business of aftermarket body parts. Want to upgrade that hip or knee? Hop up on this table. Lost an arm? Let's see what we can do to help you out. But things start getting a little bit dicier when we get to the world of implants. Replacement hips and knees are OK, but start a discussion about breast implants, and you may end up with a lively or heated debate about a silicone upgrade. Enhancement gets the ire up in some people. Why is that? What is wrong with a body upgrade?
We get into even choppier waters when we start talking about neural implants. Some people fear that tinkering with the brain by use of neural prostheses may threaten personal identity. What is a neural prosthesis? It's a device implanted to restore a lost or altered neural function. It may be either on the input side (sensory input coming into the brain) or the output side (translating neuronal signals into actions). Currently the most successful neural implant has been used to restore auditory sensory perception: the cochlear implant.
Until recently, "artifacts" or tools that man has created have been directed to the external world. More recently, therapeutic implants-such as artificial joints, cardiac pacemakers, drugs, and physical enhancements-have been used either below the neck or for facial cosmetic purposes (that would include hair transplants). Today, we are using therapeutic implants above the neck. We are using them in the brain. We also are using therapeutic medications that affect the brain to treat mental illness, anxiety, and mood disorders. Things are changing, and they are changing rapidly. Technological and scientific advances in many areas, including genetics, robotics, and computer technology, are predicted to set off a revolution of change such as humans have never experienced before, change that may well affect what it means to be human-changes that we hope will improve our lives, our societies, and the world.
Ray Kurzweil, a researcher in artificial intelligence, makes the point that knowledge in these areas is increasing at an exponential rate, not at a linear rate.2 This is what you would like your stock price to do. The classic example of exponential growth is the story about the smart peasant of whom we learned in math class-the guy who worked a deal with a math-challenged king for a grain of rice on the first square of a chessboard, and to have it doubled on the second, and so on, until by the time the king had reached the end of the chessboard, he had lost his kingdom and then some. Across the first row or two of the chessboard things progressed rather slowly, but there came a point where the doubling was a hefty change.
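The chessboard arithmetic is easy to check for yourself; a few lines of Python (my arithmetic, not Kurzweil's) show how slowly the first row grows and how enormous the last square is:

```python
def grains_on_square(n):
    """Grains on square n when one grain doubles on each square."""
    return 2 ** (n - 1)

def total_grains(squares=64):
    """Total grains across the whole board, which equals 2**squares - 1."""
    return sum(grains_on_square(n) for n in range(1, squares + 1))

print(grains_on_square(8))    # end of the first row: only 128 grains
print(grains_on_square(64))   # the last square alone: 9,223,372,036,854,775,808
print(total_grains())         # the whole board: 18,446,744,073,709,551,615
```

Eight squares in, the king owes a handful of rice; by square sixty-four, a single square holds more than nine quintillion grains.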
In 1965, Gordon Moore, one of the cofounders of Intel, the world's largest semiconductor manufacturing company, made the observation that the number of transistors on an integrated circuit for minimum component cost doubles every twenty-four months. That means that every twenty-four months they could double the number of transistors on a circuit without increasing the cost. That is exponential growth. Carver Mead, a professor at Caltech, dubbed this observation Moore's law, and it has been viewed both as a prediction and a goal for growth in the technology industry. It continues to be fulfilled. In the last sixty years, computation speed, measured in what are known as floating point operations per second (FLOPS), has increased from 1 FLOPS to over 250 trillion FLOPS! As Henry Markram, project director of IBM's Blue Brain project (which we will talk about later), states, this is "by far the largest man-made growth rate of any kind in the ~10,000 years of human civilization."3 The graph of exponential change, instead of gradually increasing continually as a linear graph would, gradually increases until a critical point is reached and then there is an upturn such that the line becomes almost vertical. This "knee" in the graph is where Kurzweil thinks we currently are in the rate of change that will occur owing to the knowledge gained in these areas. He thinks we are not aware of it or prepared for it because we have been in the more slowly progressing earlier stage of the graph and have been lulled into thinking that the rate of change is linear.
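The FLOPS figure quoted above implies a doubling time even shorter than Moore's twenty-four months. A rough back-of-the-envelope check (my arithmetic, using only the numbers in the text):

```python
import math

# Growth from 1 FLOPS to over 250 trillion FLOPS over roughly sixty years.
doublings = math.log2(250e12 / 1)      # about 47.8 doublings in total
doubling_time_years = 60 / doublings   # about 1.25 years per doubling

print(round(doublings, 1))             # 47.8
print(round(doubling_time_years, 2))   # 1.25
```

So computation speed has been doubling roughly every fifteen months, comfortably keeping pace with, and even outrunning, Moore's law.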
What are the big changes that we aren't prepared for? What do they have to do with the unique qualities of being human? You aren't going to believe them if we don't work up to them slowly, so that is what we are going to do.
SILICON-BASED AIDS: THE COCHLEAR IMPLANT STORY.
Cochlear implants have helped hundreds of thousands of people with severe hearing problems (due to the loss of hair cells in the inner ear, which are responsible for transmitting but also augmenting or decreasing auditory stimuli) for whom a typical hearing aid does not help. In fact, a child who has been born deaf and has the implants placed at an early enough age (eighteen to twenty-four months being optimal) will be able to learn to speak normally, and although his hearing may not be perfect, it will be quite functional. Wonderful as this may sound, in the 1990s, many people in the deaf community worried that cochlear implants might adversely affect deaf culture and that, rather than a therapeutic intervention, the devices were a weapon being wielded by the medical community to commit cultural genocide against the deaf community. Some considered hearing an enhancement, an additional capability on top of what other members of the community had, gained by artificial means. Although people with cochlear implants can still use sign language, apparently they are not always welcome.4 Could this reaction be a manifestation of Richard Wrangham's theory, which we learned about in chapter 2, that humans are a party-gang species with in-group/out-group bias? This attitude has slowly been changing but is still held by many.
To understand cochlear implants, and all neuroprosthetics, it is important to also understand that the body runs on electricity. David Bodanis, in his book Electric Universe, gives us a vivid description: "Our entire body operates by electricity. Gnarled living electrical cables extend into the depths of our brains; intense electric and magnetic force fields stretch into our cells, flinging food or neurotransmitters across microscopic barrier membranes; even our DNA is controlled by potent electrical forces."5
A DIGRESSION ON ELECTRICITY.
The physiology of the brain and central nervous system has been a challenge to understand. We haven't talked much about physiology, but it is the structure underneath all that occurs in the body and brain. All theories of the brain's mechanisms must have an understanding of the physiology as their foundation. The electrical nature of the body and brain is perhaps most easily digested bit by bit and, luckily for our digestion, the continuing unfolding story began in one of the most tasty cities of the world, Bologna, Italy. In 1791, Luigi Galvani, a physician and physicist, hung a frog's leg out on his iron balcony rail. He had hung it with a copper wire. The dang thing started twitching. Something was going on between those two metals. He zapped another frog's leg with a bit of electricity, and it twitched. After further investigation, he suggested that nerve and muscle could generate their own electrical current, and that was what caused them to twitch. Galvani thought the electricity came from the muscle, but his intellectual sparring partner, physicist Alessandro Volta, who hailed from the southern reaches of Lake Como, was more on the mark, thinking that electricity inside and outside the body was much the same type of electrochemical reaction occurring between metals.
Nearly a hundred years go by, and another physician and physicist, from Germany, Hermann von Helmholtz, who was into everything from visual and auditory perception to chemical thermodynamics and the philosophy of science, figured out a bit more. That electrical current was no by-product of cellular activity; it was what was actually carrying messages along the axon of the nerve cell. He also figured out that even though the speed at which those electrical messages (signals) were conducted was far slower than in a copper wire, the nerve signals maintained their strength, but those in the copper did not. What was going on? Well, in wire, signals are propagated passively, so that must not be what is going on with nerve cells. Von Helmholtz found that the signals were being propagated by a wavelike action that went as fast as ninety feet per second. Well, Helmholtz had done his bit and passed the problem on.
How did those signals get propagated? Helmholtz's former assistant, Julius Bernstein, was all over this problem and came up with the membrane theory, published in 1902. Half of it has proven true; the other half, not quite.
When a nerve axon is at rest, there is a 70-millivolt voltage difference between the inside and the outside of the membrane surrounding it, with the inside having a greater negative charge. This voltage difference across the membrane is known as the resting membrane potential.
When you get a blood panel done, part of what is being checked is your electrolyte levels. Electrolytes are electrically charged atoms (ions) of sodium, potassium, and chlorine. Your cells are sitting in a bath of this stuff, but ions are also inside the cells, and it is the difference in their concentrations inside and outside of the cell that constitutes the voltage difference.
Outside the cell are positively charged sodium ions (atoms that are short an electron) balanced by negatively charged chloride ions (chlorine atoms carrying an extra electron). Inside the cell, there is a lot of protein, which is negatively charged, balanced by positively charged potassium ions. However, since the inside of the cell has an overall negative charge, not all the protein is being balanced by potassium. What's up with that? Bernstein flung caution to the wind and suggested that there were selectively permeable pores (now called ion channels), which allowed only potassium to flow in and out. The potassium flows out of the cell and remains near the outside of the cell membrane, making it more positively charged, while the excess of negatively charged protein ions makes the inside surface of the membrane negatively charged. This creates the voltage difference at rest.
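Bernstein's potassium-only membrane makes a quantitative prediction via the Nernst equation, which is not spelled out in the text; the sketch below uses textbook mammalian ion concentrations, which are my assumptions for illustration:

```python
import math

def nernst_mv(c_out, c_in, z=1, temp_c=37.0):
    """Equilibrium potential in millivolts for one ion species,
    from the Nernst equation: E = (RT / zF) * ln([out] / [in])."""
    R, F = 8.314, 96485.0   # gas constant, J/(mol*K); Faraday constant, C/mol
    T = temp_c + 273.15     # body temperature in kelvin
    return 1000.0 * (R * T / (z * F)) * math.log(c_out / c_in)

# Assumed textbook values: ~5 mM potassium outside the cell, ~140 mM inside.
print(round(nernst_mv(5.0, 140.0)))   # about -89 mV
```

A purely potassium-selective membrane predicts roughly -89 millivolts, somewhat more negative than the -70 millivolts actually measured, which is one hint of why Bernstein's half-right theory later needed tweaking.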
But what happens when the neuron fires off a signal (which is called an action potential)? Bernstein proposed that for a fraction of a second the membrane loses its selective permeability, letting any ion cross it. Ions would then flow into and out of the cell, neutralizing the charge and bringing the resting potential to zero. No big fancy biochemical reactions were needed, just ion concentration gradients. This second part later needed to be tweaked a bit, but first we encounter another physician and scientist, Keith Lucas.
In 1905, Lucas demonstrated that nerve impulses work on an all-or-none basis. There is a certain threshold of stimulation that is needed for a nerve to respond, and once that threshold is reached, the nerve cell gives its all. It either fires fully, or it does not fire: all or nothin', baby. Increasing the stimulus does not increase the intensity of the nerve impulse. With one of his students, Baron Edgar Adrian, he discussed trying to record action potentials from nerves, but World War I intervened, and Lucas died in an airplane accident.
Adrian spent World War I treating soldiers for nerve damage and shell shock, and when it ended, he returned to his alma mater, Cambridge, to take over Lucas's lab and study nerve impulses. Adrian set out to record those propagated signals, the action potentials, and in doing so, found out a wealth of information and bagged a Nobel Prize along the way.
Adrian found that all action potentials produced by a nerve cell are the same. If the threshold has been reached for generating the signal, it fires with the same intensity, no matter what the location, strength, or duration of the stimulus is. So an action potential is an action potential is an action potential. You've seen one, you've seen them all. Now this was a bit puzzling. If the action potentials were always the same, how could different messages be sent? How were stimuli distinguished? How could you tell the difference between a flaccid and a firm handshake, between a sunny day and a moonlit night, between a dog bark and a dog bite?
Baron Adrian discovered that the frequency of the action potentials is determined by the intensity of the stimulus. If it is a mild stimulus, such as a feather touching your skin, you get only a couple of action potentials, but if it is a hard pinch, you can get hundreds. The duration of a stimulus determines how long the potentials are generated. If, however, the stimulus is constant, although the action potentials remain constant in strength, they gradually reduce in frequency, and the sensation is diminished. And the subject of the stimulus, whether it is perceptual (visual, olfactory, etc.) or motor, is determined by the type of nerve fiber that is stimulated, its pathway, and its final destination in the brain. Adrian also figured out something cool about the somatosensory cortex, the destination of all those perception neurons. Different mammals have different amounts of somatosensory cortex for different perceptions: Different species do not have equal sensory abilities; it all depends on how big an area in their somatosensory cortex is for a specific ability.
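Adrian's two findings, all-or-none amplitude and frequency coding of intensity, can be sketched as a toy firing-rate rule. (The threshold, gain, and ceiling values are mine, purely illustrative.)

```python
def firing_rate(stimulus, threshold=1.0, gain=50.0, max_rate=500.0):
    """Toy rate-coding rule: every spike is identical, so the only
    variables are whether the cell fires and how often.

    Below threshold: no spikes at all (all-or-none).
    Above threshold: the rate grows with intensity, saturating at max_rate.
    """
    if stimulus < threshold:
        return 0.0
    return min(gain * (stimulus - threshold), max_rate)

print(firing_rate(0.5))   # feather-light touch, below threshold: 0.0 spikes/s
print(firing_rate(2.0))   # mild stimulus: 50.0 spikes/s
print(firing_rate(20.0))  # hard pinch: capped at 500.0 spikes/s
```

The feather and the pinch differ only in how many identical spikes per second they evoke, which is exactly Adrian's point.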
This also applies to the motor cortex. Pigs, for instance, have most of their somatosensory cortex dedicated to their snout. Ponies and sheep also have a big nostril area; it is as large as the area for the entire rest of their bodies. Mice have a huge whisker area, and raccoons have 60 percent of their neocortex devoted to their fingers and palm. We primates have big hand and face areas, for both sensation and motor movement. You get more bang for your buck when you touch something with your index finger than when you use other parts of your body. This is why when you touch an object with your finger in the dark, you are more likely to be able to determine what it is than if you touch it with your back. It is also why you have such dexterous hands and such an expressive face. However, we will never know what it is like to have the perceptions of a pig. Although the basic physiology is the same, the hookups and the motor and somatosensory areas are different among mammalian species. Part of our unique abilities and experiences, and the uniqueness of every animal species, lies in the makeup of the motor and somatosensory cortex.
Next, Alan Hodgkin, one of Adrian's students, figured out that the current generated by the action potential was more than enough to excite an action potential in the next segment of an axon. Each action potential had more power than it needed to spark the next one. So they could perpetuate themselves forever. This was why, once generated, they didn't lose their strength. Later, Hodgkin and one of his students (are you following the genealogy?), Andrew Huxley, tweaked Bernstein's membrane theory, and also received a Nobel Prize for their work. Studying the giant axon of the squid, the largest of all axons (picture a strand of spaghettini), they were able to record action potentials from inside and outside the cell. They confirmed the 70-millivolt difference that Bernstein had proposed, but found that in the action potential, there was actually a 110-millivolt change, and the inside of the cell ended up with a positive